Integrations

MutagenT integrations allow you to send traces from your LLM applications to the MutagenT platform for observability, analysis, and optimization.

How Integrations Work

Integrations use callback handlers that:
  1. Intercept LLM calls in your application
  2. Capture request/response data, timing, and token usage
  3. Send traces to MutagenT asynchronously
  4. Don’t impact your application’s performance (see the sketch below)
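
To make that lifecycle concrete, here is a minimal sketch of such a handler. It assumes nothing about the real SDK internals: the class name, hook name, endpoint, and payload shape below are all illustrative, not the actual @mutagent API.

// Illustrative sketch only - not the actual @mutagent SDK
class TracingHandler {
  private queue: object[] = [];

  // Steps 1-2: the framework invokes this hook around each LLM call,
  // and the handler captures whatever the call produced
  onLLMEnd(payload: object): void {
    this.queue.push(payload);
    // Steps 3-4: flush without awaiting, so the caller's request
    // path never blocks on trace delivery
    void this.flush();
  }

  private async flush(): Promise<void> {
    const batch = this.queue.splice(0);
    if (batch.length === 0) return;
    try {
      // Hypothetical ingest endpoint; real delivery goes through the SDK
      await fetch('https://traces.example.com/ingest', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      });
    } catch {
      // Swallow delivery errors: tracing must never break the app
    }
  }
}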

Generate Integration Code

The easiest way to integrate is using the CLI:

# Interactive - auto-detects your framework
mutagent integrate

# Direct framework selection
mutagent integrate mastra
mutagent integrate langchain
mutagent integrate vercel-ai

See CLI Integrations for full details.

Supported Frameworks

Framework    Package               Status
Mastra       @mutagent/mastra      Available
LangChain    @mutagent/langchain   Available
LangGraph    @mutagent/langgraph   Available
Vercel AI    @mutagent/vercel-ai   Available
Generic      Direct API            Available

What Gets Tracked

LLM Calls: input prompts, outputs, and the model used
Token Usage: input/output tokens and costs
Latency: request duration and time to first token
Errors: failures, rate limits, and timeouts
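
Taken together, a single trace record covers roughly the following shape. This is an illustrative sketch; the field names are assumptions, not MutagenT’s actual schema.

// Illustrative only - field names are assumptions, not MutagenT's schema
interface Trace {
  // LLM call
  model: string;                 // e.g. 'gpt-4o'
  input: string;                 // prompt sent to the model
  output: string;                // model response
  // Token usage
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
  // Latency
  durationMs: number;
  timeToFirstTokenMs: number;
  // Errors (absent on success)
  error?: { kind: 'failure' | 'rate_limit' | 'timeout'; message: string };
}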

Integration Pattern

All MutagenT integrations follow a similar callback handler pattern:

import { MutagentCallbackHandler } from '@mutagent/langchain';
import { ConversationChain } from 'langchain/chains';
import { ChatOpenAI } from '@langchain/openai';

// Create callback handler
const handler = new MutagentCallbackHandler({
  apiKey: process.env.MUTAGENT_API_KEY,
  // Optional: link traces to specific prompts
  promptId: 123,
});

// Add to your LLM framework
const chain = new ConversationChain({
  llm: new ChatOpenAI(),
  callbacks: [handler],
});

// Traces automatically sent to MutagenT
const response = await chain.invoke({ input: 'Hello!' });
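
Because the handler rides on the framework’s standard callbacks mechanism, no call sites need to change: any chain, agent, or model that accepts callbacks picks up tracing, and removing the handler from the array disables it again.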

Next Steps