Optimization Jobs
An optimization job runs multiple mutation-evaluation cycles to improve a prompt. This guide covers job configuration, lifecycle management, and result handling.

Creating a Job
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| maxIterations | number | 10 | Maximum optimization cycles (1-100) |
| targetScore | number | — | Stop early when this score is reached (0.0-1.0) |
| patience | number | — | Stop after N iterations without improvement (1-50) |
| model | string | — | LLM model to use for mutation and evaluation |
| dryRun | boolean | false | Test mode with mock LLM calls |
| tuningParams | object | — | Additional tuning parameters |
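The ranges in the table above can be enforced client-side before submitting a job. A minimal sketch, assuming a config shape inferred from the table (the `OptimizationConfig` name and `normalizeConfig` helper are illustrative, not part of the documented SDK):

```typescript
interface OptimizationConfig {
  maxIterations?: number;                 // 1-100, default 10
  targetScore?: number;                   // 0.0-1.0
  patience?: number;                      // 1-50
  model?: string;
  dryRun?: boolean;                       // default false
  tuningParams?: Record<string, unknown>;
}

// Apply documented defaults and reject out-of-range values
// before the job is submitted.
function normalizeConfig(cfg: OptimizationConfig): OptimizationConfig {
  const out: OptimizationConfig = { maxIterations: 10, dryRun: false, ...cfg };
  const { maxIterations = 10, targetScore, patience } = out;
  if (maxIterations < 1 || maxIterations > 100)
    throw new RangeError("maxIterations must be between 1 and 100");
  if (targetScore !== undefined && (targetScore < 0 || targetScore > 1))
    throw new RangeError("targetScore must be between 0.0 and 1.0");
  if (patience !== undefined && (patience < 1 || patience > 50))
    throw new RangeError("patience must be between 1 and 50");
  return out;
}
```

Validating locally surfaces configuration mistakes immediately instead of after the job is queued.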
Configuration Examples
Conservative optimization:

Job States

Jobs progress through these states:

| State | Description | Transitions |
|---|---|---|
| queued | Waiting to start | -> running, cancelled |
| running | Actively optimizing | -> completed, paused, failed, cancelled |
| paused | Temporarily stopped | -> running, cancelled |
| completed | Successfully finished | Terminal |
| failed | Error occurred | Terminal |
| cancelled | Manually stopped | Terminal |
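The transition table can be expressed directly in code, which is handy when building tooling around jobs. A sketch of the state machine as described above (the type and function names are illustrative):

```typescript
type JobState =
  | "queued" | "running" | "paused"
  | "completed" | "failed" | "cancelled";

// Allowed transitions, mirroring the table above.
const TRANSITIONS: Record<JobState, JobState[]> = {
  queued:    ["running", "cancelled"],
  running:   ["completed", "paused", "failed", "cancelled"],
  paused:    ["running", "cancelled"],
  completed: [], // terminal
  failed:    [], // terminal
  cancelled: [], // terminal
};

function canTransition(from: JobState, to: JobState): boolean {
  return TRANSITIONS[from].includes(to);
}
```

For example, `canTransition("paused", "running")` is allowed (resume), while any transition out of `completed`, `failed`, or `cancelled` is not.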
Managing Jobs
Check Status
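The concrete CLI/SDK commands are not reproduced here. As an illustration only, a status check might be shaped like the following, where `getJobStatus` stands in for the real call and returns canned data, and every field name except `status` is an assumption:

```typescript
// Hypothetical job status shape; only `status` values come from
// the state table in this guide.
interface JobStatus {
  id: string;
  status: "queued" | "running" | "paused" | "completed" | "failed" | "cancelled";
  iteration: number;   // assumption: current iteration count
  bestScore?: number;  // assumption: best score seen so far
}

// Mock standing in for the real CLI/SDK call.
function getJobStatus(jobId: string): JobStatus {
  return { id: jobId, status: "running", iteration: 4, bestScore: 0.72 };
}

function formatStatus(job: JobStatus): string {
  return `${job.id}: ${job.status} (iteration ${job.iteration}, best score ${job.bestScore ?? "n/a"})`;
}
```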
List Jobs
Pause a Job
Temporarily stop a running job (it can be resumed later). Pausing preserves the current best prompt and all progress; the job can be resumed from where it left off.
Resume a Job
Continue a paused job.

Cancel a Job
Permanently stop a job (it cannot be resumed).

Getting Results
Retrieve results when a job completes.

Job Response Structure
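The exact response schema is not reproduced here. Based on the fields this guide does mention (`status`, `error`, `resultPromptId`), a response might resemble the following sketch; all other field names are assumptions:

```typescript
// Hypothetical job response; only `status`, `error`, and
// `resultPromptId` are named elsewhere in this guide.
interface JobResponse {
  id: string;
  status: "queued" | "running" | "paused" | "completed" | "failed" | "cancelled";
  bestScore?: number;      // assumption: best score achieved
  resultPromptId?: string; // set when status is "completed"
  error?: string;          // set when status is "failed"
}

// A job has finished (successfully or not) once it reaches a terminal state.
function isDone(job: JobResponse): boolean {
  return ["completed", "failed", "cancelled"].includes(job.status);
}
```

A consumer would typically wait until `isDone` returns true, then branch on `status` to read either `resultPromptId` or `error`.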
Applying Results
When optimization completes, it automatically creates a new prompt version with the optimized content. The resultPromptId field points to this new version.
Monitoring Progress
Polling
Check status periodically via the CLI or SDK.

Streaming (Recommended)
Use WebSocket streaming for real-time updates. See Streaming for full details.

Best Practices
Start with a quality dataset
Optimization is only as good as your test cases. Ensure your dataset is representative and well-designed before optimizing.
Set realistic targets
A target score of 1.0 is rarely achievable. Set targets based on your baseline and acceptable quality levels.
Use patience for early stopping
Set patience (e.g., 3-5) to avoid wasting iterations when the optimizer has converged.
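The patience mechanism can be sketched as a loop over iteration scores that stops once a run of non-improving iterations reaches the patience threshold (the function name and return shape are illustrative, not the service's implementation):

```typescript
// Early stopping with patience: stop when `patience` consecutive
// iterations fail to improve on the best score seen so far.
function runWithPatience(
  scores: number[],
  patience: number,
): { bestScore: number; stoppedAt: number } {
  let bestScore = -Infinity;
  let sinceImprovement = 0;
  for (let i = 0; i < scores.length; i++) {
    if (scores[i] > bestScore) {
      bestScore = scores[i];
      sinceImprovement = 0;
    } else if (++sinceImprovement >= patience) {
      return { bestScore, stoppedAt: i + 1 }; // iterations actually run
    }
  }
  return { bestScore, stoppedAt: scores.length };
}
```

With scores `[0.5, 0.6, 0.6, 0.6, 0.6, 0.65]` and a patience of 3, the run stops after iteration 5: three flat iterations follow the 0.6 peak, so the later 0.65 is never reached. That is the trade-off patience makes between wasted iterations and missed late improvements.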
Verify results
After optimization completes, review the optimized prompt to ensure it maintains the intended behavior and variable structure.
Run multiple times
Due to the stochastic nature of optimization, running multiple jobs and comparing results can yield better outcomes.
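Comparing runs can be as simple as keeping the highest-scoring completed job. A sketch, assuming each job exposes a best score alongside its resultPromptId (the `bestScore` field and `pickBest` helper are assumptions):

```typescript
interface CompletedJob {
  id: string;
  bestScore: number;      // assumption: final best score of the run
  resultPromptId: string; // points at the optimized prompt version
}

// Keep the best-scoring result across several runs.
function pickBest(jobs: CompletedJob[]): CompletedJob {
  if (jobs.length === 0) throw new Error("no completed jobs to compare");
  return jobs.reduce((best, job) => (job.bestScore > best.bestScore ? job : best));
}
```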
Troubleshooting
Job stuck in queued
Check provider configuration and rate limits. Jobs queue when resources are constrained. Verify you have a configured provider in Settings > Providers.
No improvement after many iterations
The prompt may be near optimal for the given dataset. Try a different model, adjust the dataset, or review the evaluation criteria.
Trial limit exceeded
Free-tier workspaces have a limit on the total number of optimization iterations they can run. The error message shows your current usage and limit; upgrade to increase it.
Job failed
Check the error field in the job status. Common causes: provider API errors, invalid prompt variables, or dataset format issues.