curl for prompts. Run .prompt files against any LLM from the terminal.
Prompts are code. Treat them like it.
- Privacy & Security
- Quick start
- Why?
- Why not LangChain / promptfoo / Langfuse?
- Install
- The .prompt file format
- Commands
- Use as a Python library
- Provider setup
- Use in CI / GitHub Actions
- Examples
- Development setup
- Contributing
- Changelog
- Security
- License
prompt-run runs entirely on your machine. It is a local CLI tool with no backend, no telemetry, and no cloud component of its own.
| Data | How it's handled |
|---|---|
| API keys | Read from environment variables, passed directly to your chosen provider. Never stored, logged, or sent anywhere else. |
| Prompts & outputs | Stay on your machine. The only server that sees them is the AI provider you explicitly call. |
| Telemetry | None. Zero usage data, no crash reports, no background calls, no tracking of any kind. |
| Accounts | Not required. There is no prompt-run account or sign-up. |
When you run `prompt run`, the only network traffic is the request you intentionally send to your chosen AI provider.
1. Install

```bash
pip install "prompt-run[anthropic]"
export ANTHROPIC_API_KEY="sk-ant-..."
```

2. Run an example prompt from this repo

```bash
prompt run examples/summarize.prompt --var text="LLMs are changing how developers build software."
```

3. Try streaming, dry-run, and diff

```bash
# Stream tokens as they arrive
prompt run examples/summarize.prompt --var text="Your text here" --stream

# Preview the resolved prompt without calling the LLM
prompt run examples/summarize.prompt --var text="Your text here" --dry-run

# Compare two inputs side by side
prompt diff examples/summarize.prompt \
  --a-var text="First article content..." \
  --b-var text="Second article content..."
```

4. Write your own

```bash
prompt new my-prompt.prompt                     # interactive wizard
prompt run my-prompt.prompt --var input="hello"
```

That's it. No config files, no accounts, no platform.
Every team building with LLMs ends up with the same mess — prompts buried in Python strings, Notion docs, and Slack threads. No history. No review. No way to test them.
prompt-run fixes this by giving prompts a home: .prompt files.
- ✅ Committed alongside code in git
- ✅ Reviewed in PRs like any other file
- ✅ Swappable across models and providers without touching application code
- ✅ Runnable from the terminal or CI with one command
| | prompt-run | LangChain | promptfoo | Langfuse |
|---|---|---|---|---|
| Prompt format | Plain .prompt file | Python code | YAML config | Web UI / SDK |
| Works in terminal | ✅ | ❌ | ✅ | ❌ |
| Works as Python library | ✅ | ✅ | ❌ | ✅ |
| No framework lock-in | ✅ | ❌ | ✅ | ❌ |
| Diff two prompt outputs | ✅ | ❌ | Partial (web UI) | ❌ |
| Pipe stdin / shell-friendly | ✅ | ❌ | ❌ | ❌ |
| Works offline (Ollama) | ✅ | ✅ | ✅ | ❌ |
| Zero config beyond API key | ✅ | ❌ | ❌ | ❌ |
| Prompt lives in git | ✅ | Partial | ✅ | Partial |
prompt-run is a single-purpose tool — it does one thing well and stays out of your stack. No agents, no chains, no platform.
```bash
pip install prompt-run

# With provider SDKs (pick what you need):
pip install "prompt-run[anthropic]"   # Anthropic Claude
pip install "prompt-run[openai]"      # OpenAI / Azure
pip install "prompt-run[all]"         # Everything
```

Set your API key:

macOS / Linux

```bash
export ANTHROPIC_API_KEY="sk-ant-..."   # for Anthropic
export OPENAI_API_KEY="sk-..."          # for OpenAI
# Ollama needs no key — just run `ollama serve`
```

Windows (PowerShell)

```powershell
$env:ANTHROPIC_API_KEY = "sk-ant-..."   # for Anthropic
$env:OPENAI_API_KEY = "sk-..."          # for OpenAI
# Ollama needs no key — just run `ollama serve`
```

A .prompt file has two parts: YAML frontmatter and a plain text body.
```
---
name: summarize
description: Summarizes text into bullet points
model: claude-sonnet-4-6
provider: anthropic
temperature: 0.3
max_tokens: 512
vars:
  text: string
  style: string = bullet points
  max_bullets: int = 5
---
Summarize the following text as {{style}}.
Use no more than {{max_bullets}} bullets.

Text:
{{text}}
```
Variables use {{double braces}}. Defaults are declared in frontmatter. Everything is overridable at the CLI.
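To make the substitution rule concrete, here is a minimal sketch of how `{{double braces}}` rendering with frontmatter defaults could work. This is an illustrative stdlib-only re-implementation, not prompt-run's actual rendering code; the `render` function and its error message are assumptions.

```python
import re


def render(body: str, defaults: dict[str, object], overrides: dict[str, object]) -> str:
    """Fill {{name}} placeholders: CLI overrides win, then frontmatter defaults.

    A default of None marks a required variable with no fallback.
    """
    values = {**defaults, **overrides}

    def substitute(match: re.Match[str]) -> str:
        name = match.group(1)
        if name not in values or values[name] is None:
            raise ValueError(f"missing required variable: {name}")
        return str(values[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, body)


# Frontmatter: text is required (no default), style/max_bullets have defaults.
defaults = {"text": None, "style": "bullet points", "max_bullets": 5}
body = "Summarize as {{style}}. Use no more than {{max_bullets}} bullets.\n{{text}}"

print(render(body, defaults, {"text": "Some article."}))
```

Running it with only `text` supplied fills `style` and `max_bullets` from the defaults; omitting `text` raises an error, matching the "required unless it has a default" behavior described above.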
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | filename | Human name for this prompt |
| `description` | string | — | What this prompt does |
| `provider` | string | anthropic | anthropic / openai / ollama |
| `model` | string | provider default | Model to use |
| `temperature` | float | 0.7 | Randomness (0.0–2.0) |
| `max_tokens` | int | 1024 | Max output tokens |
| `system` | string | — | System prompt |
| `vars` | map | — | Variable declarations with types/defaults |
```yaml
vars:
  text: string            # required string
  count: int = 5          # optional int, defaults to 5
  verbose: bool = false   # optional bool
  ratio: float = 0.5      # optional float
```

Scaffold a new .prompt file interactively — no YAML knowledge needed.

```bash
prompt new                    # guided wizard, prints to stdout
prompt new summarize.prompt   # guided wizard, writes to file
```

You'll be asked for name, description, provider, model, temperature, and variables. The file is ready to run immediately.
Run a .prompt file against an LLM.
```bash
# Basic
prompt run summarize.prompt --var text="Hello world"

# Multiple vars
prompt run translate.prompt --var text="Bonjour" --var target_lang=English

# Override model/provider at runtime
prompt run summarize.prompt --model gpt-4o --provider openai

# Pipe stdin (auto-detected for single required var)
cat article.txt | prompt run summarize.prompt

# Stream tokens as they arrive
prompt run summarize.prompt --var text="test" --stream

# Save response to a file
prompt run summarize.prompt --var text="test" --output summary.txt

# Preview the resolved prompt without sending
prompt run summarize.prompt --var text="test" --dry-run

# Get JSON output with metadata
prompt run summarize.prompt --var text="test" --json
```

Flags
| Flag | Description |
|---|---|
| `--var KEY=VALUE` | Pass a variable (repeatable) |
| `--model MODEL` | Override model |
| `--provider PROVIDER` | Override provider |
| `--temperature FLOAT` | Override temperature |
| `--max-tokens INT` | Override max tokens |
| `--system TEXT` | Override system prompt |
| `--stream` | Stream tokens to stdout as they arrive |
| `--stdin-var VAR` | Pipe stdin into a specific variable |
| `--output FILE` | Write response to file instead of stdout |
| `--dry-run` | Print resolved prompt, don't call LLM |
| `--json` | Return JSON with response + token metadata |
| `--show-prompt` | Print resolved prompt before response |
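The repeatable `--var KEY=VALUE` convention can be reproduced with stdlib `argparse`; the sketch below is illustrative, not the tool's actual CLI code:

```python
import argparse

parser = argparse.ArgumentParser(prog="prompt")
# action="append" collects every occurrence of --var into a list.
parser.add_argument("--var", action="append", default=[], metavar="KEY=VALUE")
parser.add_argument("--stream", action="store_true")

args = parser.parse_args(["--var", "text=Bonjour", "--var", "target_lang=English"])

# Split each pair on the first '=' only, so values may themselves contain '='.
variables = dict(pair.split("=", 1) for pair in args.var)
print(variables)   # {'text': 'Bonjour', 'target_lang': 'English'}
```

Splitting on the first `=` is what makes values like `text="a=b"` survive intact.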
Run a prompt with two different inputs and compare outputs side by side.
```bash
# Same prompt, two different inputs
prompt diff summarize.prompt \
  --a-var text="First article content here..." \
  --b-var text="Second article content here..."

# Two prompt versions, same input (A/B testing a prompt change)
prompt diff prompts/v1/summarize.prompt prompts/v2/summarize.prompt \
  --var text="$(cat article.txt)"
```

Output:

```
┌─ v1/summarize.prompt ──────────┬─ v2/summarize.prompt ──────────┐
│ • The company reported record  │ 1. Record quarterly revenue of │
│   revenue this quarter.        │    $2.1B, up 14% YoY.          │
│ • Growth was driven by cloud   │ 2. Cloud services drove growth,│
│   services.                    │    up 32%.                     │
└────────────────────────────────┴────────────────────────────────┘
Tokens — A: 312 in / 48 out | B: 318 in / 61 out
```
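A two-column view like the one above can be approximated in a few lines of Python. This is a simplified sketch; prompt-run's real renderer presumably handles wrapping and box drawing differently:

```python
def side_by_side(a: str, b: str, width: int = 34) -> str:
    """Pad two texts line-by-line into fixed-width left/right columns."""
    left, right = a.splitlines(), b.splitlines()
    rows = []
    for i in range(max(len(left), len(right))):
        l = left[i] if i < len(left) else ""
        r = right[i] if i < len(right) else ""
        rows.append(f"{l:<{width}}| {r}")   # left-pad column A, then column B
    return "\n".join(rows)


print(side_by_side("• Record revenue.\n• Cloud-driven growth.",
                   "1. Revenue up 14% YoY.\n2. Cloud up 32%."))
```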
Static check one or more .prompt files without calling any LLM.
```bash
prompt validate summarize.prompt
prompt validate prompts/*.prompt   # glob support
```

Checks (these are errors — validation fails):

- Valid YAML frontmatter
- Known provider name (`anthropic`, `openai`, `ollama`)
- Temperature in range 0.0–2.0
- Body is not empty
- `max_tokens` is at least 1

Checks (these are warnings — validation passes with a note):

- Variables used in body but not declared in frontmatter
- Variables declared in frontmatter but never used in body
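The error/warning split above could be implemented roughly as follows. This is a hedged sketch of the rules as listed, not the actual validator; the `validate` function and its messages are assumptions.

```python
import re

KNOWN_PROVIDERS = {"anthropic", "openai", "ollama"}


def validate(meta: dict, body: str) -> tuple[list[str], list[str]]:
    """Return (errors, warnings) following the rules described above."""
    errors: list[str] = []
    warnings: list[str] = []

    # Errors: validation fails.
    if meta.get("provider", "anthropic") not in KNOWN_PROVIDERS:
        errors.append(f"unknown provider: {meta.get('provider')}")
    if not 0.0 <= meta.get("temperature", 0.7) <= 2.0:
        errors.append("temperature must be in 0.0-2.0")
    if not body.strip():
        errors.append("body is empty")
    if meta.get("max_tokens", 1024) < 1:
        errors.append("max_tokens must be at least 1")

    # Warnings: validation passes with a note.
    declared = set(meta.get("vars", {}))
    used = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", body))
    for name in used - declared:
        warnings.append(f"variable used but not declared: {name}")
    for name in declared - used:
        warnings.append(f"variable declared but never used: {name}")

    return errors, warnings


errors, warnings = validate({"provider": "anthropic", "vars": {"text": "string"}},
                            "Summarize: {{text}} in {{style}}")
print(errors)     # []
print(warnings)   # one warning: style is used but never declared
```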
Show metadata and the fully-resolved prompt body.
```bash
prompt inspect summarize.prompt
prompt inspect summarize.prompt --var text="Hello world"
```

Run prompts programmatically from Python:

```python
from prompt_run import run_prompt_file, RunConfig

config = RunConfig(
    vars={"text": "My article content here..."},
    model="claude-sonnet-4-6",
)
result = run_prompt_file("summarize.prompt", config)
print(result.response.content)
print(result.response.token_summary)
```

Parse and render without calling an LLM:

```python
from prompt_run import parse_prompt_file, render_prompt

pf = parse_prompt_file("summarize.prompt")
system, body = render_prompt(pf, {"text": "hello"})
print(body)
```

Anthropic

macOS / Linux

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

Windows (PowerShell)

```powershell
$env:ANTHROPIC_API_KEY = "sk-ant-..."
```

```bash
prompt run my.prompt --provider anthropic --model claude-sonnet-4-6
```

OpenAI

macOS / Linux

```bash
export OPENAI_API_KEY="sk-..."
```

Windows (PowerShell)

```powershell
$env:OPENAI_API_KEY = "sk-..."
```

```bash
prompt run my.prompt --provider openai --model gpt-4o
```

Ollama

```bash
ollama serve         # in another terminal
ollama pull llama3
prompt run my.prompt --provider ollama --model llama3
```

> Tip — persist API keys across sessions: add the `export` lines to your `~/.bashrc`, `~/.zshrc`, or `$PROFILE` (PowerShell) so you don't need to set them each time.
```yaml
- name: Validate all prompts
  run: prompt validate prompts/*.prompt

- name: Test prompt output
  run: |
    output=$(prompt run prompts/classify.prompt \
      --var text="I love this product!" \
      --var categories="positive,negative,neutral")
    echo "Classification: $output"
```

The examples/ folder contains ready-to-run .prompt files:
| File | Description |
|---|---|
| summarize.prompt | Summarize text into bullet points |
| translate.prompt | Translate text to any language |
| classify.prompt | Classify text into categories |
| extract-json.prompt | Extract structured JSON from text |
Everything you need to go from zero to running tests locally.
- Python 3.11 or later — python.org/downloads
- git
- (Optional) make — convenience wrapper; all commands also work without it
```bash
git clone https://github.com/Maneesh-Relanto/Prompt-Run
cd Prompt-Run
```

macOS / Linux

```bash
python -m venv .venv
source .venv/bin/activate
```

Windows (PowerShell)

```powershell
python -m venv .venv
.venv\Scripts\Activate.ps1
```

```bash
pip install -e ".[dev]"
# or
make install
```

This installs:

- `prompt-run` itself in editable mode (changes to source take effect immediately)
- `pytest`, `pytest-cov` for testing
- `anthropic` and `openai` SDKs (used in tests via mocks — no API key needed)
- `types-PyYAML` for mypy
```bash
pytest                        # run all 113 tests
pytest -v                     # verbose output
pytest tests/test_parser.py   # single module
pytest -k "test_dry_run"      # filter by name
pytest --tb=short -q          # compact (same as CI)
# or
make test
```

Tests never call real LLM APIs — all providers are fully mocked. No API key required.
```bash
make lint     # ruff check + ruff format --check + mypy --strict
make format   # auto-fix formatting with ruff
```

Or manually:

```bash
ruff check .
ruff format --check .
mypy prompt_run --ignore-missing-imports
```

```bash
prompt validate examples/*.prompt
```

All four examples should pass with no errors.
```bash
make test                           # all tests pass
make lint                           # no ruff or mypy errors
prompt validate examples/*.prompt   # example prompts still valid
```

Then open a pull request. See CONTRIBUTING.md for the full PR checklist, good first issues, and how to add a new provider.
See CONTRIBUTING.md.
See CHANGELOG.md for a full history of releases and changes.
See SECURITY.md for the supported versions and vulnerability reporting policy.
MIT