LangChain Integration (Python)
Alpha (v0.1.0) — This package is implemented and tested but not yet published to PyPI. Install from source or wait for the public release. See the Python overview for setup instructions.
The mutagent-langchain package provides a callback handler that automatically traces LangChain LLM calls, chains, tools, and retrievers.
Installation
Once published to PyPI:
pip install mutagent-langchain
This installs mutagent-langchain along with its dependencies: mutagent-tracing (core SDK) and langchain-core (>= 0.1.0).
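Until the package lands on PyPI, an editable install from a source checkout is the usual workaround. A minimal sketch, assuming you have a local clone of the repository (the clone URL below is a placeholder, not the real location):

```shell
# Placeholder URL -- the actual repository location is an assumption
git clone https://example.com/mutagent/mutagent-langchain.git
cd mutagent-langchain

# Editable install; pip resolves mutagent-tracing and langchain-core as dependencies
pip install -e .
```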
Quick Start
Initialize tracing
from mutagent_tracing import init_tracing
init_tracing(api_key="your-mutagent-api-key")
Create the callback handler
from mutagent_langchain import MutagentCallbackHandler
handler = MutagentCallbackHandler()
Pass the handler to your LangChain components
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
response = llm.invoke("What is the meaning of life?")
Full Example
import os
from mutagent_tracing import init_tracing, shutdown_tracing
from mutagent_langchain import MutagentCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Initialize MutagenT tracing
init_tracing(
    api_key=os.environ["MUTAGENT_API_KEY"],
    environment="production",
)
# Create the callback handler
handler = MutagentCallbackHandler()
# Build a LangChain chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
chain = prompt | llm | StrOutputParser()
# Invoke with callbacks - all steps are traced
response = chain.invoke(
    {"input": "Explain quantum computing in simple terms."},
    config={"callbacks": [handler]},
)
print(response)
# Flush remaining spans on exit
shutdown_tracing()
What Gets Traced
MutagentCallbackHandler automatically captures spans for each LangChain event type:
- LLM Calls: Chat model and LLM invocations with input messages, output text, and token usage
- Chains: Chain execution with raw inputs and outputs
- Tools: Tool invocations with input arguments and results
- Retrievers: RAG retrieval operations with queries and returned documents
Span Mapping
| LangChain Event | Span Kind | Captured Data |
| --- | --- | --- |
| on_chat_model_start | llm.chat | Input messages (system, user, assistant) |
| on_llm_start | llm.chat | Input prompts as text |
| on_llm_end | llm.chat | Output text, token usage (input/output/total) |
| on_chain_start | chain | Raw chain inputs |
| on_chain_end | chain | Raw chain outputs |
| on_tool_start | tool | Tool name and input string |
| on_tool_end | tool | Tool output string |
| on_retriever_start | retrieval | Query text |
| on_retriever_end | retrieval | Retrieved documents with content and metadata |
Token Usage
Token metrics are automatically extracted from the LLM response when available:
| Metric | Description |
| --- | --- |
| input_tokens | Number of prompt tokens |
| output_tokens | Number of completion tokens |
| total_tokens | Total token count |
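As a rough illustration of this mapping, OpenAI-style responses report usage under keys like prompt_tokens and completion_tokens, which get renamed to the metric names above. The helper below is a hypothetical sketch of that renaming, not the package's actual internals:

```python
def extract_token_usage(llm_output):
    """Map OpenAI-style token_usage keys to MutagenT metric names.

    `llm_output` stands in for the dict LangChain attaches to an LLMResult;
    this helper is hypothetical -- the real handler may differ.
    """
    usage = (llm_output or {}).get("token_usage", {})
    return {
        "input_tokens": usage.get("prompt_tokens"),
        "output_tokens": usage.get("completion_tokens"),
        "total_tokens": usage.get("total_tokens"),
    }

# A typical OpenAI-style usage payload
print(extract_token_usage({"token_usage": {
    "prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42,
}}))
# → {'input_tokens': 12, 'output_tokens': 30, 'total_tokens': 42}
```

When a provider reports no usage, each metric comes back as None, which matches the "when available" caveat above.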
Usage with Agents
The callback handler traces the full agent execution lifecycle, including reasoning steps and tool calls:
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from mutagent_langchain import MutagentCallbackHandler
from mutagent_tracing import init_tracing
init_tracing(api_key="your-mutagent-api-key")
handler = MutagentCallbackHandler()

@tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny, 72F."

llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history", optional=True),
    ("user", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])
agent = create_openai_functions_agent(llm, [get_weather], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather], callbacks=[handler])
result = executor.invoke({"input": "What's the weather in Paris?"})
print(result["output"])
Usage with Retrievers (RAG)
Retriever operations are automatically traced with queries and returned documents:
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from mutagent_langchain import MutagentCallbackHandler
from mutagent_tracing import init_tracing
init_tracing(api_key="your-mutagent-api-key")
handler = MutagentCallbackHandler()

# Set up retriever
vectorstore = FAISS.from_texts(
    ["MutagenT is an AI observability platform."],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

# Build RAG chain
prompt = ChatPromptTemplate.from_template(
    "Answer based on context: {context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
result = chain.invoke("What is MutagenT?", config={"callbacks": [handler]})
print(result)
Error Handling
Errors in any LangChain component are automatically captured with ERROR status and the error message:
try:
    response = llm.invoke("Hello!", config={"callbacks": [handler]})
except Exception as e:
    # Span is already recorded with status=ERROR and the error message
    print(f"Error: {e}")
Parent-Child Relationships
The handler automatically tracks parent-child relationships using LangChain’s run_id and parent_run_id. When a chain invokes an LLM, the LLM span is nested under the chain span in MutagenT’s trace viewer.
Pass the same MutagentCallbackHandler instance to all components in your pipeline to get a complete trace tree with proper parent-child nesting.
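To illustrate the idea (this is a toy sketch, not the package's implementation), nesting can be reconstructed from the (run_id, parent_run_id) pairs LangChain passes to every callback hook: a span's parent is the span whose run_id matches its parent_run_id, and root spans have no parent.

```python
class SpanTree:
    """Toy reconstruction of span nesting from LangChain run IDs (hypothetical)."""

    def __init__(self):
        self._parent = {}  # run_id -> parent_run_id (None for root spans)

    def on_start(self, run_id, parent_run_id=None):
        # A handler would record this in hooks like on_chain_start / on_llm_start
        self._parent[run_id] = parent_run_id

    def depth(self, run_id):
        # Root spans sit at depth 0; each ancestor adds one level of nesting
        d = 0
        parent = self._parent.get(run_id)
        while parent is not None:
            d += 1
            parent = self._parent.get(parent)
        return d

tree = SpanTree()
tree.on_start("chain-1")                          # chain root span
tree.on_start("llm-1", parent_run_id="chain-1")   # LLM span nested under the chain
print(tree.depth("chain-1"), tree.depth("llm-1"))  # → 0 1
```

This is why sharing one handler instance matters: a second handler would only see its own run_id map and could not link spans started under the first.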
TypeScript Equivalent
For the TypeScript/Node.js LangChain integration, see the LangChain (TypeScript) guide.