Test prompt variants across LLM providers with LLM-as-judge evaluation.
```bash
pip install llm-prompt-lab
```
Or with pipx for isolated installs:
```bash
pipx install llm-prompt-lab
```
Set your provider API keys as environment variables:
```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
```
Alternatively, create a .env file in your working directory; prompt-lab loads it automatically:
```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
Only the keys for providers you use are required.
You can also use custom environment variable names per provider — see Custom API Key References.
For development installs from a source checkout, sync dependencies with uv:
```bash
uv sync
```
Basic usage:
```bash
# Create a new experiment
prompt-lab new
prompt-lab new --config spec.yaml
# Run an experiment
prompt-lab run experiments/my-experiment
# Run a single variant
prompt-lab run experiments/my-experiment/v1
# View results
prompt-lab results experiments/my-experiment/v1
# Compare variants
prompt-lab compare experiments/my-experiment
```

How it works:

```
system.md (optional) + prompt.md + inputs.yaml → LLM → response → judge.md → score
```
- prompt.md is the user message sent to the LLM (the prompt you want to evaluate)
- system.md (optional) is the system message — persona, instructions, constraints
- inputs.yaml provides test cases with variables for both files
- The messages are sent to each configured model (LLM)
- judge.md evaluates each response and assigns a score
Create multiple variants (v1, v2, etc.) to compare different prompt approaches.
```
my-experiment/
├── experiment.md      # Config: models, runs (required)
├── judge.md           # Evaluator: scoring criteria (required)
├── inputs.yaml        # Shared test cases (optional, used by all variants)
├── v1/                # Variant (at least one required)
│   ├── prompt.md      # User message (required)
│   ├── system.md      # System message (optional)
│   └── tools.yaml     # Tool definitions (optional)
└── v2/                # Another variant to compare...
    ├── prompt.md      # Different prompt approach
    └── system.md      # Different system instructions
```
Both judge.md and inputs.yaml support fallback: if not found in the variant folder, the experiment-level file is used. This allows sharing test cases across variants for fair A/B comparison.
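prompt-lab's internals aren't documented here, but the fallback amounts to a simple two-step lookup; a minimal sketch (the function name is hypothetical):

```python
from pathlib import Path

def find_shared_file(variant_dir: Path, experiment_dir: Path, filename: str) -> Path | None:
    """Prefer the variant-level file; fall back to the experiment-level one."""
    for candidate in (variant_dir / filename, experiment_dir / filename):
        if candidate.is_file():
            return candidate
    return None

# find_shared_file(Path("experiments/my-experiment/v1"),
#                  Path("experiments/my-experiment"), "inputs.yaml")
```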
experiment.md defines the experiment name, description, models, and default number of runs per input.
```markdown
---
name: my-experiment
description: Testing different prompt styles
hypothesis: Concise prompts will score higher than verbose ones
models:
- openai:gpt-4o-mini
- anthropic:claude-sonnet-4-20250514
runs: 5
---
Optional markdown content describing the experiment.
```

Experiment options:
| Option | Default | Description |
|---|---|---|
| `name` | folder name | Experiment identifier |
| `description` | `""` | Brief description |
| `models` | required | List of models to test |
| `runs` | `5` | Runs per input (for statistical analysis) |
| `hypothesis` | `""` | What you're testing (displayed in results) |
| `key_refs` | `{}` | Custom env var names per provider (see Custom API Key References) |
prompt.md is the user message sent to the LLM. Use {{ variables }} to inject test data from inputs.yaml.
```
Generate 5 creative product names.
Product description: {{ description }}
Seed words: {{ seeds }}
Product names:
```
Each variant folder contains a different prompt.md to compare approaches (e.g., zero-shot vs few-shot, formal vs casual tone, etc.).
For hardcoded prompts without variables, just write the prompt directly:
```
Tell me a joke about programming.
```
system.md, if present, becomes the system message for the LLM call. It uses the same {{ variables }} from inputs.yaml.
```
You are a helpful assistant. You can check the weather using the get_weather tool when users ask about weather conditions.
Only use the weather tool when the user is actually asking about weather. For other questions, just respond normally without using any tools.
```
When system.md is absent, the prompt is sent as the user message with no system message.
When to use system.md:
- Setting a persona or role for the LLM
- Providing instructions that frame behavior (e.g., tool usage rules)
- Separating "what the model is" from "what the user asks"
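Concretely, the prompt/system split maps onto the standard chat-message roles used by OpenAI-style APIs (Anthropic passes the system text as a separate parameter, but the division is the same). A rough sketch of how the request is assembled, not prompt-lab's actual code:

```python
def build_messages(user_prompt: str, system_prompt: str | None = None) -> list[dict]:
    """Assemble chat messages: include a system message only when system.md exists."""
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# With system.md:    [{"role": "system", ...}, {"role": "user", ...}]
# Without system.md: [{"role": "user", ...}]
```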
inputs.yaml contains test cases with variables matching the prompt and system templates. If omitted, the experiment runs once with empty data (useful for static prompts without variables).
```yaml
- id: alice
  name: Alice
- id: bob
  name: Bob
  runs: 10  # Override experiment's runs for this input
```
Each input case can have any number of fields. All fields (except id and runs) are available as {{ variables }} in both prompt.md and system.md.
Location: Can be placed at experiment level (shared across all variants) or in a variant folder (variant-specific). Variant-level inputs take precedence over experiment-level.
Input options:
| Field | Default | Description |
|---|---|---|
| `id` | `input-N` | Unique identifier for results |
| `runs` | experiment runs | Override runs for this specific input |
| (other) | - | Variables available in prompt and system templates |
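The {{ variable }} syntax behaves like Jinja-style templating (whether prompt-lab literally uses Jinja2 is an assumption here); rendering an input case into a template looks roughly like this, with illustrative values:

```python
from jinja2 import Template

prompt_md = "Hello {{ name }}, you are {{ age }} years old."
input_case = {"id": "alice", "name": "Alice", "age": 30, "runs": 10}  # one entry from inputs.yaml

# id and runs are bookkeeping fields; every other field becomes a template variable.
variables = {k: v for k, v in input_case.items() if k not in ("id", "runs")}
print(Template(prompt_md).render(**variables))  # -> "Hello Alice, you are 30 years old."
```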
tools.yaml defines tools (functions) that the LLM can call during execution. Useful for testing prompts that involve function calling.
```yaml
- name: get_weather
  description: Get current weather for a location
  parameters:
    type: object
    properties:
      location:
        type: string
        description: City name
      unit:
        type: string
        enum: [celsius, fahrenheit]
    required:
      - location
- name: search
  description: Search the web
  parameters:
    type: object
    properties:
      query:
        type: string
```
Tool fields:
| Field | Required | Description |
|---|---|---|
| `name` | yes | Tool/function name |
| `description` | no | What the tool does |
| `parameters` | no | JSON Schema for tool parameters |
Tool calls made by the model are captured in the response and available for judge evaluation.
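Each tools.yaml entry is just a name, a description, and a JSON Schema. How prompt-lab forwards it isn't shown here, but for OpenAI's Chat Completions API the function-calling envelope it would map to looks roughly like this (a sketch of the mapping, not prompt-lab's code):

```python
def to_openai_tool(entry: dict) -> dict:
    """Wrap a tools.yaml entry in OpenAI's function-calling tool format."""
    return {
        "type": "function",
        "function": {
            "name": entry["name"],
            "description": entry.get("description", ""),
            "parameters": entry.get("parameters", {"type": "object", "properties": {}}),
        },
    }

# Example with the get_weather entry from above:
# to_openai_tool({"name": "get_weather",
#                 "description": "Get current weather for a location",
#                 "parameters": {"type": "object",
#                                "properties": {"location": {"type": "string"}},
#                                "required": ["location"]}})
```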
judge.md defines how to score each LLM response. The judge is another LLM that evaluates quality based on your criteria.
```markdown
---
model: openai:gpt-4o-mini
score_range: [1, 10]
temperature: 0
---
You are evaluating a greeting response.
## Rubric
- **10**: Uses user's name, warm tone, offers to help
- **8-9**: Uses name and friendly, but generic
- **6-7**: Friendly but doesn't use name
- **4-5**: Cold or overly formal
- **1-3**: Inappropriate or ignores user
**Prompt:** {{ prompt }}
**Model Response:** {{ response }}
```

Judge options:
| Option | Default | Description |
|---|---|---|
| `model` | `openai:gpt-4o` | Model to use for judging (single judge) |
| `models` | - | List of models for multi-judge (opt-in, see below) |
| `aggregation` | `mean` | Score aggregation: `mean` or `median` (multi-judge only) |
| `score_range` | `[1, 10]` | Min and max score |
| `temperature` | `0` | 0 = deterministic, higher = more varied |
| `chain_of_thought` | `true` | Step-by-step reasoning before scoring (disable with `false`) |
Use multiple LLM models as judges to reduce self-enhancement bias (when a model scores itself favorably). Scores are aggregated using mean or median.
```markdown
---
models:
- openai:gpt-4o-mini
- anthropic:claude-sonnet-4-20250514
aggregation: mean
score_range: [1, 10]
---
## Rubric
...
```

When to use multi-judge:
- Testing responses from GPT models? Add Claude as a judge (and vice versa)
- Need more reliable scores? Multiple perspectives reduce bias
- High-stakes evaluations where accuracy matters
Trade-offs:
- Requires API keys for multiple providers
- 2x API costs for judging
- Slightly slower execution
Note: Use model: (singular) for single judge, models: (plural) for multi-judge.
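Aggregation itself is plain arithmetic over the per-judge scores; median is the more robust choice when one judge disagrees strongly (the scores below are made up):

```python
import statistics

judge_scores = [8, 9, 3]  # hypothetical scores from three judges; one is an outlier

print(statistics.mean(judge_scores))    # ≈ 6.67, dragged down by the outlier
print(statistics.median(judge_scores))  # 8, unaffected by the single low score
```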
By default, the judge analyzes responses step-by-step before scoring. This improves alignment with human judgment by reducing anchoring bias.
To disable Chain-of-Thought (for faster/cheaper evaluations):
```markdown
---
model: openai:gpt-4o-mini
score_range: [1, 10]
chain_of_thought: false
---
## Rubric
...
```

When enabled, the judge will:
- Review each rubric criterion
- Analyze how the response meets each criterion
- Identify strengths and weaknesses
- Only then provide the final score
For more reliable evaluation, run each input multiple times and get statistical analysis:
```yaml
# experiment.md
---
name: my-experiment
models:
- openai:gpt-4o-mini
runs: 5
---
```
Results show the hypothesis and the mean with a 95% confidence interval:
```
Hypothesis: Concise prompts will score higher than verbose ones
┏━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Input ┃ Model ┃ Mean ┃ 95% CI ┃ Range ┃ Scores ┃
┡━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━┩
│ alice │ openai:gpt-4o-mini │ 9.2 │ (8.5-9.9) │ 8-10 │ 9, 10, 9, 9, 9│
│ bob │ openai:gpt-4o-mini │ 8.4 │ (7.8-9.0) │ 8-9 │ 8, 9, 8, 8, 9 │
└───────┴────────────────────┴──────┴──────────────┴───────┴───────────────┘
```
With too few runs, a warning is shown:
```
⚠ Low sample size (3 runs). Consider runs: 5+ for reliable statistics.
```
When runs > 1:
- Cache is disabled to get independent LLM responses
- Each input is evaluated N times
- 95% confidence intervals show the reliability of your results (see the computation sketch below)
- Warning shown when sample size is too small for reliable statistics
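The exact interval formula prompt-lab uses isn't specified here; the conventional choice for small samples is a t-interval on the mean, which you can reproduce like this:

```python
import statistics
from scipy import stats

scores = [9, 10, 9, 9, 9]  # judge scores for one input across 5 runs

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / len(scores) ** 0.5   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(scores) - 1)       # two-sided 95% critical value
print(f"mean={mean:.1f}, 95% CI=({mean - t_crit * sem:.1f}-{mean + t_crit * sem:.1f})")
```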
When comparing variants, the compare command shows whether differences are statistically significant:
```bash
prompt-lab compare experiments/my-experiment
```

```
Hypothesis: Concise prompts will score higher than verbose ones
┏━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━┓
┃ Variant ┃ Mean Score ┃ 95% CI ┃ Avg Latency ┃ Runs ┃
┡━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━┩
│ v1 │ 8.5/10 │ (8.1-8.9) │ 450ms │ 2×5 │
│ v2 │ 7.2/10 │ (6.8-7.6) │ 420ms │ 2×5 │
└─────────┴────────────┴──────────────┴─────────────┴──────┘
Statistical Significance (Welch's t-test, α=0.05):
✓ v1 > v2 (p≤0.01)
```
This helps you know if v1 is actually better than v2, or if the difference is just noise.
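The significance check can be reproduced with scipy; Welch's t-test is the unequal-variance variant of the two-sample t-test (the score lists below are made up):

```python
from scipy import stats

v1_scores = [9, 8, 9, 8, 9, 9, 8, 9, 8, 9]  # hypothetical judge scores for v1 (2 inputs × 5 runs)
v2_scores = [7, 8, 7, 6, 7, 8, 7, 7, 6, 7]  # hypothetical judge scores for v2

t_stat, p_value = stats.ttest_ind(v1_scores, v2_scores, equal_var=False)  # Welch's t-test
verdict = "significant at α=0.05" if p_value <= 0.05 else "not significant"
print(f"t={t_stat:.2f}, p={p_value:.4f} ({verdict})")
```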
Variables from inputs.yaml are available in both prompt.md and system.md:
```
Hello {{ name }}, you are {{ age }} years old.
```
Literal braces don't need escaping:
```
Return JSON: {"result": "value"}
```
prompt-lab new creates a new experiment, running an interactive wizard or reading from a config file.
```bash
# Interactive wizard
prompt-lab new
# From config file
prompt-lab new --config spec.yaml
```
Options:
| Option | Short | Description |
|---|---|---|
| `--config` | `-c` | Path to experiment spec YAML |
prompt-lab run runs a prompt experiment and auto-detects scope from the path.
```bash
# Run all variants
prompt-lab run experiments/my-experiment
# Run single variant
prompt-lab run experiments/my-experiment/v1
# Run specific model only
prompt-lab run experiments/my-experiment/v1 --model openai:gpt-4o-mini
# Skip cache (fresh API calls)
prompt-lab run experiments/my-experiment/v1 --no-cache
# Hide progress bar
prompt-lab run experiments/my-experiment -q
```
Options:
| Option | Short | Description |
|---|---|---|
| `--model` | `-m` | Run only this model |
| `--no-cache` | | Disable response caching |
| `--quiet` | `-q` | Hide progress bar |
| `--key-ref` | `-k` | Custom env var for provider API key (format: `provider:ENV_VAR`) |
prompt-lab results shows the results table for a variant.
```bash
prompt-lab results experiments/my-experiment/v1
# Show specific run
prompt-lab results experiments/my-experiment/v1 --run 2026-01-25T19-30-00
```
prompt-lab compare compares results across all variants.
```bash
prompt-lab compare experiments/my-experiment
```
prompt-lab show shows detailed responses with judge reasoning.
```bash
# Show all responses
prompt-lab show experiments/my-experiment/v1
# Filter by input
prompt-lab show experiments/my-experiment/v1 --input alice
# Filter by model
prompt-lab show experiments/my-experiment/v1 --model openai:gpt-4o-mini
# Combine filters
prompt-lab show experiments/my-experiment/v1 --input alice --model openai:gpt-4o-mini
```
prompt-lab clean cleans experiment results and auto-detects scope from the path.
```bash
# Clean single variant results
prompt-lab clean experiments/my-experiment/v1
# Clean all variants (auto-detected from experiment path)
prompt-lab clean experiments/my-experiment
# Skip confirmation
prompt-lab clean experiments/my-experiment --yes
```
Options:
| Option | Short | Description |
|---|---|---|
| `--yes` | `-y` | Skip confirmation prompt |
prompt-lab cache manages the response cache.
```bash
# Clear all cached responses
prompt-lab cache clear
```
Supported providers and model formats:

| Provider | Model format |
|---|---|
| OpenAI | `openai:gpt-4o`, `openai:gpt-4o-mini` |
| Anthropic | `anthropic:claude-sonnet-4-20250514` |
By default, prompt-lab reads OPENAI_API_KEY and ANTHROPIC_API_KEY from the environment. You can override these per provider using custom environment variable names.
Use --key-ref (or -k) with the format provider:ENV_VAR:
```bash
# Single provider
prompt-lab run experiments/my-experiment -k openai:MY_OPENAI_KEY
# Multiple providers
prompt-lab run experiments/my-experiment \
  --key-ref openai:TEAM_OPENAI_KEY \
  --key-ref anthropic:TEAM_ANTHROPIC_KEY
```
Add key_refs to the experiment.md frontmatter:
```yaml
---
name: my-experiment
models:
  - openai:gpt-4o-mini
  - anthropic:claude-sonnet-4-20250514
key_refs:
  openai: TEAM_OPENAI_KEY
  anthropic: TEAM_ANTHROPIC_KEY
---
```
CLI --key-ref flags override key_refs in experiment.md, which override the defaults. Custom key refs apply to all provider calls, including judge evaluation.
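Put as code, the resolution order is: CLI flag, then key_refs in experiment.md, then the provider default. A sketch of that lookup (the function and variable names are hypothetical):

```python
import os

DEFAULT_KEY_ENV = {"openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY"}

def resolve_api_key(provider: str, cli_key_refs: dict, experiment_key_refs: dict) -> str | None:
    """Pick the env var name by precedence (CLI > experiment.md > default), then read it."""
    env_var = (cli_key_refs.get(provider)
               or experiment_key_refs.get(provider)
               or DEFAULT_KEY_ENV[provider])
    return os.environ.get(env_var)

# No --key-ref flag given, so the experiment.md value wins and TEAM_OPENAI_KEY is read:
# resolve_api_key("openai", cli_key_refs={}, experiment_key_refs={"openai": "TEAM_OPENAI_KEY"})
```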
LLM-as-judge evaluation methodology and best practices:
- Evidently AI - LLM-as-a-Judge Complete Guide
- Sebastian Raschka - Understanding the 4 Main Approaches to LLM Evaluation
- Eugene Yan - Evaluating the Effectiveness of LLM-Evaluators
- Monte Carlo - LLM-as-Judge: 7 Best Practices
- Arize AI - Evidence-Based Prompting Strategies for LLM-as-a-Judge
- A Survey on LLM-as-a-Judge (2024)
- Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge