LangGraph Integration (Python)

Alpha (v0.1.0) — This package is implemented and tested but not yet published to PyPI. Install from source or wait for the public release. See the Python overview for setup instructions.
The mutagent-langgraph package provides a graph tracer that captures graph executions, node invocations, and edge transitions in LangGraph workflows.

Installation

Once published to PyPI:
pip install mutagent-langgraph
This installs mutagent-langgraph along with its dependencies: mutagent-tracing (core SDK) and langgraph (>= 0.0.20).

Quick Start

1. Initialize tracing

from mutagent_tracing import init_tracing

init_tracing(api_key="your-mutagent-api-key")

2. Create the graph tracer

from mutagent_langgraph import MutagentGraphTracer

tracer = MutagentGraphTracer()

3. Wrap graph execution with context managers

with tracer.trace_graph("my_workflow"):
    with tracer.trace_node("process"):
        result = do_work()

Full Example

import os
from mutagent_tracing import init_tracing, shutdown_tracing
from mutagent_langgraph import MutagentGraphTracer

# Initialize MutagenT tracing
init_tracing(
    api_key=os.environ["MUTAGENT_API_KEY"],
    environment="production",
)

tracer = MutagentGraphTracer()


def classify(text: str) -> str:
    """Classify input text."""
    return "positive" if "good" in text.lower() else "negative"


def respond(sentiment: str) -> str:
    """Generate a response based on sentiment."""
    if sentiment == "positive":
        return "Glad to hear that!"
    return "Sorry to hear that. How can I help?"


# Trace the full graph execution
with tracer.trace_graph("sentiment_pipeline", input_data={"text": "This is good!"}):
    with tracer.trace_node("classifier"):
        sentiment = classify("This is good!")

    tracer.trace_edge("classifier", "responder")

    with tracer.trace_node("responder"):
        response = respond(sentiment)

print(response)

# Flush remaining spans on exit
shutdown_tracing()

What Gets Traced

MutagentGraphTracer creates hierarchical spans that mirror your graph structure:

Graph Execution

Top-level span for the entire graph run

Node Execution

Individual spans for each node invocation

Edge Transitions

Instantaneous spans for edges between nodes

Span Hierarchy

Span Details

Span Kind   Name                                    Input                     Output
graph       Graph name (e.g., sentiment_pipeline)   Optional raw input data   Optional raw output data
node        Node name (e.g., classifier)            Optional raw input data   Optional raw output data
edge        {from_node} -> {to_node}                Optional condition        —

API Reference

Context Manager API

The recommended way to use the tracer is with Python context managers for automatic span lifecycle management:

tracer.trace_graph(name, input_data=None)

Context manager for tracing a graph execution. Returns the graph span on entry.
with tracer.trace_graph("my_graph", input_data={"key": "value"}) as graph_span:
    # graph_span is a MutagentSpan or None
    pass
Parameter    Type         Description
name         str          Graph name/identifier
input_data   Any | None   Optional input data to record

tracer.trace_node(name, input_data=None)

Context manager for tracing a node execution. Returns the node span on entry.
with tracer.trace_node("my_node", input_data={"key": "value"}) as node_span:
    # node_span is a MutagentSpan or None
    result = do_work()
Parameter    Type         Description
name         str          Node name/identifier
input_data   Any | None   Optional input data to record

tracer.trace_edge(from_node, to_node, condition=None)

Record an edge transition between nodes. Edge spans are instantaneous (started and ended immediately).
tracer.trace_edge("node_a", "node_b", condition="score > 0.8")
Parameter   Type         Description
from_node   str          Source node name
to_node     str          Destination node name
condition   str | None   Optional condition that triggered this edge
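
To illustrate how a context-manager tracer of this shape manages span lifecycles, including on errors, here is a minimal self-contained stand-in. DemoTracer and its event list are illustrative only, not the real MutagentGraphTracer:

```python
from contextlib import contextmanager


class DemoTracer:
    """Illustrative stand-in for a context-manager span tracer."""

    def __init__(self):
        self.events = []  # recorded span lifecycle events

    @contextmanager
    def trace_node(self, name, input_data=None):
        self.events.append(("start", name))
        try:
            yield None  # the real tracer yields a span object here
            self.events.append(("end", name, "OK"))
        except Exception:
            # an error inside the block marks the span ERROR, then propagates
            self.events.append(("end", name, "ERROR"))
            raise


tracer = DemoTracer()
with tracer.trace_node("process"):
    pass

try:
    with tracer.trace_node("risky"):
        raise ValueError("boom")
except ValueError:
    pass

print(tracer.events)
```

The key property the real API shares with this sketch is that the span is always closed when the with block exits, whether normally or via an exception.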

Imperative API

For cases where context managers are not suitable, use the imperative start/end methods:
# Graph level
execution_id = tracer.handle_graph_start("my_graph", input_data={"key": "value"})
# ... do work ...
tracer.handle_graph_end(execution_id, output_data={"result": "done"})

# Node level
node_id = tracer.handle_node_start("my_node", input_data={"key": "value"})
# ... do work ...
tracer.handle_node_end(node_id, output_data={"result": "done"})
The imperative API is useful when integrating with existing LangGraph event hooks, or when the graph execution flow does not map cleanly onto Python with statements.

Conditional Routing

Track conditional edges to see which branches your graph takes:
with tracer.trace_graph("router_graph"):
    with tracer.trace_node("classifier"):
        category = classify(user_input)

    if category == "technical":
        tracer.trace_edge("classifier", "tech_support", condition="category == 'technical'")
        with tracer.trace_node("tech_support"):
            result = handle_tech(user_input)
    else:
        tracer.trace_edge("classifier", "general_support", condition="category != 'technical'")
        with tracer.trace_node("general_support"):
            result = handle_general(user_input)

Error Handling

Errors raised inside context managers are automatically captured. The span is recorded with ERROR status and the error message is preserved:
with tracer.trace_graph("my_graph"):
    with tracer.trace_node("risky_node"):
        raise ValueError("Something went wrong")
    # The node span and graph span both get ERROR status
For the imperative API, pass the error explicitly:
execution_id = tracer.handle_graph_start("my_graph")
try:
    node_id = tracer.handle_node_start("risky_node")
    raise ValueError("Something went wrong")
except Exception as e:
    tracer.handle_node_end(node_id, error=e)
    tracer.handle_graph_end(execution_id, error=e)
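
The same error bookkeeping can be sketched with a self-contained stand-in for the imperative API (again illustrative only; the span dict and status strings are assumptions, not the real implementation):

```python
import uuid


class DemoImperativeTracer:
    """Illustrative stand-in for the imperative start/end span API."""

    def __init__(self):
        self.spans = {}  # span_id -> span record

    def handle_node_start(self, name, input_data=None):
        span_id = str(uuid.uuid4())
        self.spans[span_id] = {"name": name, "status": "OK", "error": None}
        return span_id

    def handle_node_end(self, span_id, output_data=None, error=None):
        # passing error explicitly flips the span to ERROR status
        span = self.spans[span_id]
        if error is not None:
            span["status"] = "ERROR"
            span["error"] = str(error)


tracer = DemoImperativeTracer()
node_id = tracer.handle_node_start("risky_node")
try:
    raise ValueError("Something went wrong")
except Exception as e:
    tracer.handle_node_end(node_id, error=e)

print(tracer.spans[node_id]["status"])  # ERROR
```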

Combining with LangChain Callback Handler

For graphs that invoke LangChain LLMs internally, combine the graph tracer with the LangChain callback handler to get a unified trace:
from mutagent_tracing import init_tracing
from mutagent_langchain import MutagentCallbackHandler
from mutagent_langgraph import MutagentGraphTracer
from langchain_openai import ChatOpenAI

init_tracing(api_key="your-mutagent-api-key")

tracer = MutagentGraphTracer()
handler = MutagentCallbackHandler()
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])

with tracer.trace_graph("qa_pipeline"):
    with tracer.trace_node("generate"):
        # LLM call is nested under the "generate" node span
        response = llm.invoke("What is MutagenT?")

    tracer.trace_edge("generate", "validate")

    with tracer.trace_node("validate"):
        is_valid = len(response.content) > 0

TypeScript Equivalent

For the TypeScript/Node.js LangGraph integration, see the LangGraph (TypeScript) guide.