A developer-friendly command-line tool for managing, testing, and comparing prompt templates with OpenAI's GPT models—designed to make prompt engineering reproducible, versionable, and fast.
- Run prompt templates with dynamic input
- Compare multiple prompt versions side-by-side
- Track token usage and estimate API cost
- Log results to .txt and .csv files
- Scaffold new prompts using a consistent YAML format
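Cost estimation comes down to multiplying the token counts the API reports back by the model's per-1K-token prices. A sketch of that arithmetic (the price table below is illustrative — check OpenAI's pricing page for current values — and `estimate_cost` is not the tool's actual internal function):

```python
# Illustrative per-1K-token prices in USD (assumed values, not authoritative).
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate API cost in USD from the token counts in an API response's usage field."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

# e.g. estimate_cost("gpt-3.5-turbo", 120, 40)
```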
```
git clone https://github.com/your-username/prompt-playground-cli.git
cd prompt-playground-cli

python -m venv venv
source venv/bin/activate     # macOS/Linux
venv\Scripts\activate        # Windows

pip install -r requirements.txt
pip install -e .
```
```
prompt-cli new summarize-v1
```
Example prompt file (YAML):

```yaml
name: summarize-v1
model: gpt-3.5-turbo
temperature: 0.7
prompt: |
  Summarize this in one sentence: {input}
```
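Under the hood, a template like this can be rendered by substituting the `{input}` placeholder with Python's `str.format`. A minimal sketch, using a plain dict in place of the tool's actual YAML loader (the `render` helper is illustrative, not the CLI's real internals):

```python
# A parsed prompt file, represented here as a dict
# (the real tool would load it with something like yaml.safe_load).
template = {
    "name": "summarize-v1",
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "prompt": "Summarize this in one sentence: {input}",
}

def render(template: dict, user_input: str) -> str:
    """Fill the {input} placeholder with the value passed via --input."""
    return template["prompt"].format(input=user_input)

# render(template, "AI is transforming healthcare.")
# -> "Summarize this in one sentence: AI is transforming healthcare."
```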
```
prompt-cli run summarize-v1.yaml --input "AI is transforming healthcare."
```
```
prompt-cli compare --input "Summarize AI trends in 2024." summarize-v1.yaml summarize-v2.yaml
```

Saves `.txt` and `.csv` logs to `.prompt-history/`.
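Appending run results to a CSV log takes only the standard library. A sketch of what a `.prompt-history/` row writer could look like (the file name and column names here are illustrative assumptions, not the tool's documented schema):

```python
import csv
from pathlib import Path

def log_run(history_dir: str, name: str, model: str, tokens: int, output: str) -> None:
    """Append one run record to results.csv, writing a header row if the file is new."""
    path = Path(history_dir) / "results.csv"
    path.parent.mkdir(exist_ok=True)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["name", "model", "tokens", "output"])
        writer.writerow([name, model, tokens, output])
```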
```
prompt-cli list
```
Create a `.env` file in your project root:

```
OPENAI_API_KEY=your-openai-api-key
```

Or set it in your terminal session:

```
export OPENAI_API_KEY="your-openai-api-key"    # macOS/Linux
$env:OPENAI_API_KEY="your-openai-api-key"      # Windows PowerShell
```
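Either way, the key ends up available to the process. A minimal sketch of how a CLI might resolve it, preferring the environment and falling back to a `.env` file (the `load_api_key` helper is hypothetical; in practice a library like `python-dotenv` handles this):

```python
import os
from typing import Optional

def load_api_key(env_file: str = ".env") -> Optional[str]:
    """Return OPENAI_API_KEY from the environment, else from a .env file if present."""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        with open(env_file) as f:
            for line in f:
                line = line.strip()
                if line.startswith("OPENAI_API_KEY="):
                    return line.split("=", 1)[1].strip().strip('"')
    except FileNotFoundError:
        pass
    return None
```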
```
prompt-playground-cli/
├── prompt_cli/         # CLI source code
├── prompts/            # Prompt YAML templates
├── .prompt-history/    # Output logs and comparison results
├── requirements.txt
├── setup.cfg
├── README.md
└── .gitignore
```
MIT License