# Vercel AI SDK
The @mutagent/vercel-ai package provides two approaches for tracing Vercel AI SDK operations:
- **OTel SpanExporter (recommended):** plugs into the Vercel AI SDK's `experimental_telemetry` via OpenTelemetry
- **Middleware:** wraps model calls directly via `wrapLanguageModel`
## Installation
```bash
bun add @mutagent/vercel-ai @mutagent/sdk
```
Peer dependencies: `@mutagent/sdk` >=0.1.0 and `ai` >=3.0.0.
For the OTel approach, also install:
```bash
bun add @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base
```
## Option A: OTel SpanExporter (Recommended)
The `MutagentSpanExporter` plugs into the standard OpenTelemetry pipeline. The Vercel AI SDK emits OTel spans when `experimental_telemetry` is enabled; the exporter receives these spans and forwards them to MutagenT.
This is the same pattern used by Langfuse, Braintrust, and Arize for Vercel AI integration.
### Initialize tracing and set up OTel
```ts
import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';

// Initialize MutagenT tracing
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });

// Set up OTel with MutagenT exporter
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(new MutagentSpanExporter())
);
provider.register();
```
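For high-throughput servers, the standard OTel `BatchSpanProcessor` (from the same `@opentelemetry/sdk-trace-base` package) is a drop-in alternative that buffers spans and exports them in batches:

```ts
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';

// Batches span exports instead of sending each span immediately --
// lower overhead under load, at the cost of a short export delay.
provider.addSpanProcessor(
  new BatchSpanProcessor(new MutagentSpanExporter())
);
```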
### Enable telemetry on AI calls
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_telemetry: { isEnabled: true },
});
```
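The AI SDK's telemetry settings also accept an optional `functionId` and free-form `metadata`, which are attached to the emitted spans (the values below are illustrative):

```ts
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'greeting',            // illustrative: names this call site in traces
    metadata: { userId: 'user_123' },  // illustrative: arbitrary key/value pairs
  },
});
```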
### Full Example with Streaming
```ts
import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Setup (once at app startup)
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new MutagentSpanExporter()));
provider.register();

// Stream with telemetry
const result = await streamText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain middleware patterns.' },
  ],
  experimental_telemetry: { isEnabled: true },
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```
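If you run this as a short-lived script, flush pending spans before the process exits; the OTel tracer provider exposes standard shutdown hooks for this:

```ts
// Ensure buffered spans are exported before the process exits.
await provider.forceFlush();
await provider.shutdown();
```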
### What the SpanExporter Captures
The exporter automatically maps Vercel AI’s OTel attributes to MutagenT span primitives:
| Vercel AI Attribute | MutagenT Field |
|---|---|
| `gen_ai.request.model` / `ai.model.id` | `metrics.model` |
| `gen_ai.system` / `ai.model.provider` | `metrics.provider` |
| `gen_ai.usage.input_tokens` | `metrics.inputTokens` |
| `gen_ai.usage.output_tokens` | `metrics.outputTokens` |
| `gen_ai.operation.name` | Span kind mapping |
| Span events (`gen_ai.content.prompt`) | `input` |
| Span events (`gen_ai.content.completion`) | `output` |
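For intuition, a simplified version of this mapping might look like the sketch below. It is not the exporter's actual source, and the `MutagentSpanRecord` shape is assumed from the field names in the table:

```ts
import type { ReadableSpan } from '@opentelemetry/sdk-trace-base';

// Hypothetical record shape, inferred from the mapping table above.
interface MutagentSpanRecord {
  metrics: {
    model?: string;
    provider?: string;
    inputTokens?: number;
    outputTokens?: number;
  };
}

function mapSpan(span: ReadableSpan): MutagentSpanRecord {
  const attrs = span.attributes;
  return {
    metrics: {
      // Prefer the gen_ai.* attribute, falling back to the ai.* alias.
      model: (attrs['gen_ai.request.model'] ?? attrs['ai.model.id']) as string | undefined,
      provider: (attrs['gen_ai.system'] ?? attrs['ai.model.provider']) as string | undefined,
      inputTokens: attrs['gen_ai.usage.input_tokens'] as number | undefined,
      outputTokens: attrs['gen_ai.usage.output_tokens'] as number | undefined,
    },
  };
}
```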
## Option B: Middleware
The middleware approach wraps model calls directly. It works without OpenTelemetry dependencies.
### Initialize tracing
```ts
import { initTracing } from '@mutagent/sdk/tracing';

initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });
```
### Create middleware and wrap model
```ts
import { createMutagentMiddleware } from '@mutagent/vercel-ai';
import { wrapLanguageModel, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createMutagentMiddleware(),
});

const { text } = await generateText({ model, prompt: 'Hello!' });
```
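The same wrapped model works with streaming calls, which are handled by the middleware's `wrapStream` hook (see the table below):

```ts
import { streamText } from 'ai';

// The wrapped model from above traces streaming calls too.
const result = await streamText({ model, prompt: 'Explain middleware patterns.' });

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```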
### Middleware: What Gets Traced
| Hook | Span Kind | Data Captured |
|---|---|---|
| `wrapGenerate` | `llm.chat` | Model ID, request params, response text, token usage |
| `wrapStream` | `llm.chat` | Model ID, request params, accumulated text, tool calls, token usage |
The stream middleware uses a `TransformStream` to intercept chunks without affecting downstream consumers.
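Conceptually, the interception works something like the sketch below (a minimal illustration of the pattern, not the actual middleware source; chunk handling is simplified):

```ts
// Pipe the model's chunk stream through a TransformStream: observe each
// chunk for tracing, then forward it unchanged to the consumer.
function interceptStream<CHUNK>(
  stream: ReadableStream<CHUNK>,
  onChunk: (chunk: CHUNK) => void,
  onDone: () => void,
): ReadableStream<CHUNK> {
  return stream.pipeThrough(
    new TransformStream<CHUNK, CHUNK>({
      transform(chunk, controller) {
        onChunk(chunk);             // e.g. accumulate text, record tool calls
        controller.enqueue(chunk);  // downstream consumers see identical chunks
      },
      flush() {
        onDone();                   // finalize the span when the stream ends
      },
    }),
  );
}
```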
## Next.js API Route Example
```ts
// app/api/chat/route.ts
import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Setup (runs once per cold start)
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new MutagentSpanExporter()));
provider.register();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    experimental_telemetry: { isEnabled: true },
  });
  return result.toDataStreamResponse();
}
```
## Edge Function Compatibility
The middleware approach uses standard Web APIs (`TransformStream`, `ReadableStream`) and works on Vercel Edge Functions, Cloudflare Workers, and other edge runtimes. The OTel SpanExporter approach requires Node.js APIs (`@opentelemetry/sdk-trace-node`) and is designed for Node.js server environments.
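For example, the Option B setup carries over to an edge route largely unchanged. This is a sketch: `export const runtime = 'edge'` is the standard Next.js opt-in, and it assumes `initTracing` itself is edge-compatible, as the middleware support implies:

```ts
// app/api/chat/route.ts
import { initTracing } from '@mutagent/sdk/tracing';
import { createMutagentMiddleware } from '@mutagent/vercel-ai';
import { wrapLanguageModel, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export const runtime = 'edge';

initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createMutagentMiddleware(),
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({ model, messages });
  return result.toDataStreamResponse();
}
```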
## CLI Shortcut
```bash
mutagent integrate vercel-ai
```