# OpenAI

The `@mutagent/openai` package provides `observeOpenAI()`, a wrapper that adds automatic tracing to any OpenAI client instance. Using a JavaScript Proxy, it intercepts all method calls without replacing the client: chat completions, embeddings, images, audio, moderations, and every other SDK method work exactly as before, with full tracing.
## Installation

Requires `@mutagent/sdk` >= 0.1.0 and `openai` >= 4.0.0.
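Assuming the packages are published under the names shown above, install the wrapper alongside the SDK and the OpenAI client:

```shell
npm install @mutagent/openai @mutagent/sdk openai
```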
## Quick Start

### Wrap your OpenAI client

Call `observeOpenAI()` to wrap your existing client. All methods are preserved.
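A minimal sketch of the wrapping step, assuming the standard OpenAI SDK constructor and an API key in the environment:

```typescript
import OpenAI from "openai";
import { observeOpenAI } from "@mutagent/openai";

// Create a normal OpenAI client, then wrap it. The wrapped client has the
// same surface; every method call is traced automatically.
const openai = observeOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```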
## Streaming

Streaming is fully supported. The wrapper intercepts the async iterator to accumulate response text and closes the span when the stream completes. The stream wrapper preserves the original async iterator interface, so your existing streaming code works without changes.
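A hedged sketch of streaming through a client already wrapped with `observeOpenAI()`; the model name and prompt are illustrative:

```typescript
// Assumes `openai` is a client already wrapped with observeOpenAI().
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku." }],
  stream: true,
  stream_options: { include_usage: true }, // makes token usage available on streams
});

// Iterate as usual; the wrapper accumulates text and closes the span
// when the stream ends. The final usage chunk has an empty choices array,
// hence the optional chaining.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```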
## Options

Pass options to `observeOpenAI()` for session tracking and custom span naming:

| Option | Type | Description |
|---|---|---|
| `generationName` | `string` | Custom span name prefix. Defaults to the auto-detected method path (e.g., `chat.completions.create`) |
| `sessionId` | `string` | Groups related traces into a session |
| `userId` | `string` | Attributes traces to a specific user |
| `tags` | `string[]` | Tags for filtering in the dashboard |
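For example, passing all four options at wrap time (the values here are illustrative):

```typescript
import OpenAI from "openai";
import { observeOpenAI } from "@mutagent/openai";

const openai = observeOpenAI(new OpenAI(), {
  generationName: "support-bot",       // span name prefix
  sessionId: "session-1234",           // groups related traces
  userId: "user-5678",                 // attributes traces to a user
  tags: ["production", "chat"],        // for dashboard filtering
});
```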
## What Gets Traced

The Proxy wrapper intercepts all method calls on the OpenAI client. Known methods get structured tracing:

| Method | Span Kind | Data Captured |
|---|---|---|
| `chat.completions.create` | `llm.chat` | Input messages, output messages, model, token usage |
| `completions.create` | `llm.completion` | Input text, output text, model, token usage |
| `embeddings.create` | `llm.embedding` | Input text, embedding dimensions, model, token usage |
| All other methods | `llm.chat` | Raw request/response data |
| Streaming calls | Same as above | Accumulated text, model |
| Errors | Any | Error message, status set to `error` |
## Token Usage Tracking

For non-streaming calls, token metrics are automatically extracted from the OpenAI response:

- `inputTokens` — `usage.prompt_tokens`
- `outputTokens` — `usage.completion_tokens`
- `totalTokens` — `usage.total_tokens`

The `model` and `provider` (always `"openai"`) are recorded in span metrics.
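The field mapping above can be sketched as a small pure function. This is an illustration of the mapping, not the SDK's actual internals; `extractTokenMetrics` is a hypothetical name:

```typescript
// Shape of the `usage` object on a non-streaming OpenAI response
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Shape of the metrics recorded on the span
interface SpanMetrics {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  model: string;
  provider: "openai";
}

// Hypothetical helper mirroring the documented field mapping
function extractTokenMetrics(usage: OpenAIUsage, model: string): SpanMetrics {
  return {
    inputTokens: usage.prompt_tokens,
    outputTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
    model,
    provider: "openai",
  };
}
```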
For streaming calls, token usage is available when you set `stream_options: { include_usage: true }` in the request parameters.

## How It Works
`observeOpenAI()` uses a JavaScript Proxy that:

- Intercepts property access on the OpenAI client via the `get` trap
- Recursively wraps nested objects — accessing `client.chat` returns a new Proxy, so `client.chat.completions.create()` is intercepted
- Wraps function calls with `startSpan()` / `endSpan()` from `@mutagent/sdk/tracing`
- Handles streaming by wrapping `AsyncIterable` responses to accumulate text before closing the span
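The recursive Proxy technique can be demonstrated standalone. This is a minimal sketch, not the SDK's implementation; `startSpan`/`endSpan` are stand-ins that only record call paths, and the fake client merely mimics the SDK's nested namespaces:

```typescript
const spans: string[] = [];
const startSpan = (name: string) => spans.push(`start:${name}`);
const endSpan = (name: string) => spans.push(`end:${name}`);

function observe<T extends object>(target: T, path: string[] = []): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      const nextPath = [...path, String(prop)];
      if (typeof value === "function") {
        // Wrap method calls with span start/end, preserving the return value
        return (...args: unknown[]) => {
          const name = nextPath.join(".");
          startSpan(name);
          try {
            return value.apply(obj, args);
          } finally {
            endSpan(name);
          }
        };
      }
      if (value !== null && typeof value === "object") {
        // Recursively wrap nested namespaces like client.chat.completions
        return observe(value, nextPath);
      }
      return value;
    },
  });
}

// Fake client shaped like the OpenAI SDK's nested namespaces
const client = {
  chat: {
    completions: {
      create: (params: { model: string }) => ({ model: params.model, ok: true }),
    },
  },
};

const observed = observe(client);
const result = observed.chat.completions.create({ model: "gpt-4o" });
```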
## Migration from MutagentOpenAI
Before (deprecated):
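The original before/after snippets are not preserved here; a hedged sketch follows. The `MutagentOpenAI` constructor signature shown in the comment is assumed for illustration; check your existing code for the actual call.

```typescript
import OpenAI from "openai";
import { observeOpenAI } from "@mutagent/openai";

// Before (deprecated): a dedicated client class, signature assumed:
// const client = new MutagentOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// After: wrap a standard OpenAI client instead
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const client = observeOpenAI(openai);
```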
## CLI Shortcut

The CLI detects the `openai` package in your package.json and generates ready-to-use configuration code.