## Documentation Index

Fetch the complete documentation index at: https://docs.mutagent.io/llms.txt

Use this file to discover all available pages before exploring further.
# Optimization SDK

Start and manage automated prompt optimization jobs. The optimization engine iteratively mutates prompt text, evaluates candidates against a dataset, and converges on the highest-scoring variant.
## Start Optimization

Create and start an optimization job for a prompt. The `id_` parameter is the numeric prompt ID:

```python
from mutagent import Mutagent
from mutagent.models import DatasetIdEvaluationIdConfig, MaxIterationsTargetScorePatience

with Mutagent() as client:
    job = client.optimization.optimize_prompt(
        id_=42,  # Prompt ID
        body=DatasetIdEvaluationIdConfig(
            dataset_id=7,
            config=MaxIterationsTargetScorePatience(
                max_iterations=10,
                target_score=0.9,
                patience=3,
                model="claude-sonnet-4-6",
                evaluation_model="claude-sonnet-4-6",
            ),
        ),
    )
    print("Job started:", job["id"])
```
Source:

- `mutagent-sdk-python/src/mutagent/optimization.py` — `Optimization.optimize_prompt`
- `mutagent-sdk-python/src/mutagent/models/dataset_id_evaluation_id_config.py` — `DatasetIdEvaluationIdConfig`
- `mutagent-sdk-python/src/mutagent/models/max_iterations_target_score_patience.py` — `MaxIterationsTargetScorePatience`
### DatasetIdEvaluationIdConfig fields

| Field | Type | Required | Description |
|---|---|---|---|
| `dataset_id` | int | Yes | Dataset ID to evaluate against |
| `config` | MaxIterationsTargetScorePatience | Yes | Optimization configuration |
| `evaluation_id` | int | No | Specific evaluation definition to use |
| `execution_mode` | str | No | Execution mode |
### MaxIterationsTargetScorePatience fields (config object)

| Field | Type | Required | Description |
|---|---|---|---|
| `max_iterations` | float | Yes | Maximum optimization cycles (1-100) |
| `target_score` | float | No | Stop early when this score is reached (0-1) |
| `patience` | float | No | Stop after N iterations with no improvement |
| `dry_run` | bool | No | Validate configuration without starting execution |
| `model` | str | No | Target LLM model for prompt generation |
| `execution_model` | str | No | Model used for executing prompts during evaluation |
| `optimization_model` | str | No | Model used for generating prompt mutations |
| `evaluation_model` | str | No | Model used for scoring outputs |
## Get Job Status

```python
with Mutagent() as client:
    status = client.optimization.get_optimization(id_="job-uuid-here")
    print("Status:", status)
```
## List Optimization Jobs

```python
with Mutagent() as client:
    result = client.optimization.list_optimizations(
        status="running",
        limit=20,
        offset=0,
    )
    for job in result.get("data", []):
        print(job["id"], "-", job["status"])
```

Filter parameters: `prompt_group_id`, `status`, `limit`, `offset`
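To walk every page of results with `limit`/`offset`, a generic iterator like the one below works. It is a sketch, not an SDK helper: `fetch_page` is a hypothetical stand-in for a call such as `client.optimization.list_optimizations(limit=..., offset=...)`, and it assumes responses shaped `{"data": [...]}` as in the example above:

```python
from typing import Callable, Iterator

def iter_pages(fetch_page: Callable[[int, int], dict],
               limit: int = 20) -> Iterator[dict]:
    """Yield every job across pages using limit/offset pagination."""
    offset = 0
    while True:
        page = fetch_page(limit, offset)
        jobs = page.get("data", [])
        if not jobs:
            return
        yield from jobs
        if len(jobs) < limit:  # a short page means we've reached the end
            return
        offset += limit
```

Usage: `for job in iter_pages(lambda limit, offset: client.optimization.list_optimizations(limit=limit, offset=offset)): ...`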
## Get Score Progression

Retrieve the score history across iterations:

```python
with Mutagent() as client:
    progress = client.optimization.get_optimization_progress(id_="job-uuid-here")
    print("Progression:", progress)
```
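Assuming the progression can be reduced to a list of per-iteration scores (the exact response shape is not documented here), a running maximum turns the raw history into a monotone convergence curve, which is usually what you want to plot:

```python
from itertools import accumulate

def best_so_far(scores: list[float]) -> list[float]:
    """Running maximum of per-iteration scores: each entry is the
    best score seen up to and including that iteration."""
    return list(accumulate(scores, max))
```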
## Get State Snapshots

```python
with Mutagent() as client:
    states = client.optimization.get_optimization_states(id_="job-uuid-here")
```
## Get Results

```python
with Mutagent() as client:
    results = client.optimization.get_optimization_results(id_="job-uuid-here")
```
## Pause Job

```python
with Mutagent() as client:
    client.optimization.pause_optimization(id_="job-uuid-here")
    print("Job paused")
```

## Resume Job

```python
with Mutagent() as client:
    client.optimization.resume_optimization(id_="job-uuid-here")
    print("Job resumed")
```

## Cancel Job

```python
with Mutagent() as client:
    client.optimization.cancel_optimization(id_="job-uuid-here")
    print("Job cancelled")
```
## Poll for Completion

```python
import time

from mutagent import Mutagent

def wait_for_optimization(job_id: str) -> dict:
    with Mutagent() as client:
        while True:
            status = client.optimization.get_optimization(id_=job_id)
            status_str = str(status)
            print(f"Job status: {status_str[:100]}...")
            if "completed" in status_str:
                print("Optimization complete")
                return status
            if "failed" in status_str or "cancelled" in status_str:
                raise RuntimeError("Job ended without completing")
            time.sleep(5)

result = wait_for_optimization("job-uuid-here")
```
### Async version

```python
import asyncio

from mutagent import AsyncMutagent

async def wait_for_optimization_async(job_id: str) -> dict:
    async with AsyncMutagent() as client:
        while True:
            status = await client.optimization.get_optimization(id_=job_id)
            status_str = str(status)
            if "completed" in status_str:
                return status
            if "failed" in status_str or "cancelled" in status_str:
                raise RuntimeError("Job ended without completing")
            await asyncio.sleep(5)
```
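The loops above poll at a fixed 5-second interval. For long-running jobs, a capped exponential backoff with a timeout is gentler on the API. The helper below is a generic sketch — `fetch` and `is_done` are hypothetical callables you supply, e.g. wrapping `client.optimization.get_optimization(...)` and the same substring check used above:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def poll_until(fetch: Callable[[], T],
               is_done: Callable[[T], bool],
               initial_delay: float = 1.0,
               max_delay: float = 30.0,
               timeout: float = 600.0,
               sleep: Callable[[float], None] = time.sleep) -> T:
    """Poll fetch() until is_done(result) is true, doubling the delay
    between attempts up to max_delay, and failing after timeout seconds."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while True:
        result = fetch()
        if is_done(result):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("optimization did not finish in time")
        sleep(delay)
        delay = min(delay * 2, max_delay)
```

Injecting `sleep` keeps the helper testable and lets an async variant substitute its own delay mechanism.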
## Method Reference

| Method | Description | Namespace |
|---|---|---|
| `optimize_prompt(id_, body)` | Start optimization job | `client.optimization` |
| `get_optimization(id_)` | Get job status | `client.optimization` |
| `list_optimizations(...)` | List all jobs with filters | `client.optimization` |
| `get_optimization_progress(id_)` | Get score progression | `client.optimization` |
| `get_optimization_states(id_)` | Get state snapshots per iteration | `client.optimization` |
| `get_optimization_results(id_)` | Get results with scorecard | `client.optimization` |
| `pause_optimization(id_)` | Pause job | `client.optimization` |
| `resume_optimization(id_)` | Resume job | `client.optimization` |
| `cancel_optimization(id_)` | Cancel job | `client.optimization` |