Vercel AI SDK

The @mutagent/vercel-ai package provides two approaches for tracing Vercel AI SDK operations:
  1. OTel SpanExporter (Recommended) — Plugs into Vercel AI’s experimental_telemetry via OpenTelemetry
  2. Middleware — Wraps model calls directly via wrapLanguageModel

Installation

bun add @mutagent/vercel-ai @mutagent/sdk
Peer dependencies: @mutagent/sdk >=0.1.0, ai >=3.0.0.

For the OTel approach, also install:

bun add @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base

Option A: OTel SpanExporter (Recommended)

The MutagentSpanExporter plugs into the standard OpenTelemetry pipeline. The Vercel AI SDK emits OTel spans when experimental_telemetry is enabled; the exporter receives these spans and forwards them to MutagenT. This is the same pattern Langfuse, Braintrust, and Arize use for their Vercel AI integrations.

Step 1: Initialize tracing and set up OTel

import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';

// Initialize MutagenT tracing
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });

// Set up OTel with MutagenT exporter
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(new MutagentSpanExporter())
);
provider.register();

Step 2: Enable telemetry on AI calls

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_telemetry: { isEnabled: true },
});
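
Beyond isEnabled, the SDK's telemetry settings also accept a functionId and freeform metadata, which end up on the emitted spans. A sketch of a reusable settings object follows; the functionId and metadata values here are illustrative, not required names:

```typescript
// Shared telemetry settings. `functionId` tags spans with a logical
// operation name; `metadata` entries are attached as span attributes.
const telemetry = {
  isEnabled: true,
  functionId: 'support-chat',       // example value
  metadata: { userId: 'user_123' }, // example value
};

// Spread into any call:
//   await generateText({ model, prompt: 'Hello!', experimental_telemetry: telemetry });
```

Defining the settings once keeps telemetry consistent across every generateText and streamText call in the app.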

Full Example with Streaming

import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Setup (once at app startup)
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new MutagentSpanExporter()));
provider.register();

// Stream with telemetry
const result = await streamText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain middleware patterns.' },
  ],
  experimental_telemetry: { isEnabled: true },
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

What the SpanExporter Captures

The exporter automatically maps Vercel AI’s OTel attributes to MutagenT span primitives:
| Vercel AI Attribute                     | MutagenT Field       |
| --------------------------------------- | -------------------- |
| gen_ai.request.model / ai.model.id      | metrics.model        |
| gen_ai.system / ai.model.provider       | metrics.provider     |
| gen_ai.usage.input_tokens               | metrics.inputTokens  |
| gen_ai.usage.output_tokens              | metrics.outputTokens |
| gen_ai.operation.name                   | Span kind mapping    |
| Span events (gen_ai.content.prompt)     | input                |
| Span events (gen_ai.content.completion) | output               |
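
Under the hood, any OTel exporter implements the SpanExporter contract: an export(spans, callback) method invoked by the span processor with each batch of finished spans, plus a shutdown(). A minimal sketch of that contract with simplified stand-in types (this is illustrative, not the actual MutagentSpanExporter implementation):

```typescript
// Simplified stand-ins for the @opentelemetry/sdk-trace-base interfaces.
type ExportResult = { code: 0 | 1 }; // 0 = SUCCESS, 1 = FAILED

interface ReadableSpanLike {
  name: string;
  attributes: Record<string, unknown>;
}

class CollectingSpanExporter {
  exported: ReadableSpanLike[] = [];

  // Called by the span processor with each batch of finished spans.
  export(spans: ReadableSpanLike[], done: (r: ExportResult) => void): void {
    for (const span of spans) {
      // A real exporter would map attributes such as
      // 'gen_ai.usage.input_tokens' to its own schema here.
      this.exported.push(span);
    }
    done({ code: 0 });
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }
}
```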

Option B: Middleware

The middleware approach wraps model calls directly. It works without OpenTelemetry dependencies.

Step 1: Initialize tracing

import { initTracing } from '@mutagent/sdk/tracing';

initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });

Step 2: Create middleware and wrap model

import { createMutagentMiddleware } from '@mutagent/vercel-ai';
import { wrapLanguageModel, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createMutagentMiddleware(),
});

const { text } = await generateText({ model, prompt: 'Hello!' });
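
Conceptually, a tracing middleware intercepts the generate call, records what it needs, and passes the result through unchanged. A minimal sketch with simplified stand-in types follows (the real SDK interface is LanguageModelV1Middleware; this is not the package's actual implementation):

```typescript
// Stand-in for the middleware's wrapGenerate hook.
type GenerateFn = () => Promise<{ text: string }>;

interface MiddlewareLike {
  wrapGenerate(options: { doGenerate: GenerateFn }): Promise<{ text: string }>;
}

// Hypothetical middleware that records call duration into `log`.
function createTimingMiddleware(log: string[]): MiddlewareLike {
  return {
    async wrapGenerate({ doGenerate }) {
      const start = Date.now();
      const result = await doGenerate(); // call through to the wrapped model
      log.push(`llm.chat span: ${Date.now() - start}ms`);
      return result; // pass the response through unchanged
    },
  };
}
```

Because the hook wraps doGenerate, the middleware sees both the request and the response without the caller noticing any difference.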

Middleware: What Gets Traced

| Hook         | Span Kind | Data Captured                                                       |
| ------------ | --------- | ------------------------------------------------------------------- |
| wrapGenerate | llm.chat  | Model ID, request params, response text, token usage                |
| wrapStream   | llm.chat  | Model ID, request params, accumulated text, tool calls, token usage |
The stream middleware uses a TransformStream to intercept chunks without affecting downstream consumers.
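
That pass-through pattern can be sketched with the standard Web Streams API (illustrative, not the package's actual code):

```typescript
// Observe every chunk of a stream without altering what downstream
// consumers receive, via a pass-through TransformStream.
function tapStream<T>(
  source: ReadableStream<T>,
  onChunk: (chunk: T) => void,
): ReadableStream<T> {
  return source.pipeThrough(
    new TransformStream<T, T>({
      transform(chunk, controller) {
        onChunk(chunk);            // record the chunk for the trace
        controller.enqueue(chunk); // forward it unchanged
      },
    }),
  );
}
```

The consumer iterates the returned stream exactly as it would the original; the tap only accumulates a copy of each chunk for the span.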

Next.js API Route Example

// app/api/chat/route.ts
import { initTracing } from '@mutagent/sdk/tracing';
import { MutagentSpanExporter } from '@mutagent/vercel-ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Setup (runs once per cold start)
initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new MutagentSpanExporter()));
provider.register();

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    experimental_telemetry: { isEnabled: true },
  });

  return result.toDataStreamResponse();
}

Edge Function Compatibility

The middleware approach uses standard Web APIs (TransformStream, ReadableStream) and works on Vercel Edge Functions, Cloudflare Workers, and other edge runtimes.

The OTel SpanExporter approach requires Node.js APIs (@opentelemetry/sdk-trace-node) and is designed for Node.js server environments.
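
As an illustration, a middleware-only route pinned to the Edge runtime might look like the following. The route path is hypothetical, and this assumes initTracing itself runs in your edge environment:

```typescript
// app/api/chat/route.ts (hypothetical path) — middleware approach on Edge
import { initTracing } from '@mutagent/sdk/tracing';
import { createMutagentMiddleware } from '@mutagent/vercel-ai';
import { wrapLanguageModel, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export const runtime = 'edge'; // Next.js route segment config

initTracing({ apiKey: process.env.MUTAGENT_API_KEY! });

// No OTel provider needed: the middleware traces the model directly.
const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createMutagentMiddleware(),
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({ model, messages });
  return result.toDataStreamResponse();
}
```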

CLI Shortcut

mutagent integrate vercel-ai