Centralize prompts, system instructions, tools, and model settings — without leaving your codebase.
Your prompts are already in Git. PromptOpsKit makes them manageable. It replaces hardcoded strings and scattered provider-specific glue with structured Markdown files where prompt text, model settings, sampling parameters, tool bindings, environment overrides, and composable shared instructions all live together — diffable, reviewable, and release-aware.
Provider adapters for OpenAI, Anthropic, Gemini, and OpenRouter produce a ready-to-send request body only — no HTTP client, no auth, no headers. Your application owns transport, so PromptOpsKit slots into any stack without opinions about how you call the API.
- Centralized, not scattered — each prompt is a single Markdown file that captures prompt text, model config, tool bindings, and context rules together.
- Operational, not just templated — model name, temperature, reasoning effort, tools, and response format are declared alongside the prompt they govern.
- Reusable, not duplicated — `includes` lets you define shared tone, policy, or safety instructions once and compose them into any prompt.
- Release-aware, not ad hoc — environment and tier overrides swap models and parameters without forking prompt files.
- Provider-portable — write once, render for OpenAI, Anthropic, Gemini, or OpenRouter with correct body shapes.
- Validate early — Zod schema validation, Levenshtein-based "did you mean?" suggestions for typos, and variable usage checks catch mistakes before runtime.
- Compile for production — pre-compile `.md` to JSON or ESM so deployments skip parsing entirely.
- Repo-native, not dashboard-native — no hosted service, no external admin tool. Everything lives in source control.
```bash
npm install promptopskit
npx promptopskit init ./prompts
```

This creates:

```
prompts/
├── hello.md           # Sample prompt with variables
├── hello.test.yaml    # Test sidecar with sample inputs
└── shared/
    └── tone.md        # Shared system instructions (included via composition)
```
```md
---
id: support.reply
schema_version: 1
provider: openai
model: gpt-5.4
reasoning:
  effort: medium
sampling:
  temperature: 0.7
context:
  inputs:
    - user_message
    - app_context
includes:
  - ./shared/tone.md
---

# System instructions

You are a helpful support assistant working in {{ app_context }}.

# Prompt template

{{ user_message }}
```

```ts
import { createPromptOpsKit } from 'promptopskit';

const kit = createPromptOpsKit({ sourceDir: './prompts' });

const result = await kit.renderPrompt({
  path: 'support/reply',
  provider: 'openai',
  variables: {
    user_message: 'How do I reset my password?',
    app_context: 'Account settings page',
  },
});

// result.request.body is ready for fetch()
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(result.request.body),
});
```

- Prompts as Markdown — YAML front matter for settings, H1 headings for sections (`# System instructions`, `# Prompt template`, `# Notes`)
- Variable interpolation — `{{ variable }}` syntax with strict and permissive modes
- Composition — `includes` to share system instructions across prompts, with circular detection
- Overrides — environment and tier-based overrides (base → env → tier → runtime)
- 4 provider adapters — OpenAI, Anthropic, Gemini, OpenRouter — body-only output
- Validation — Zod schema validation, Levenshtein-based "did you mean?" for typos, variable usage checks
- Caching — LRU cache with mtime-based invalidation
- CLI — `init`, `validate`, `compile`, `render`, `inspect`, `skill`
- Compiled artifacts — pre-compile `.md` → JSON or ESM for production
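The strict and permissive interpolation modes listed above can be sketched as follows. This is an illustrative model of the documented behavior, not promptopskit's actual parser; `interpolateSketch` is a hypothetical helper name.

```typescript
// Sketch of {{ variable }} interpolation with strict and permissive modes.
// Hypothetical helper for illustration, not promptopskit's real implementation.
type Mode = 'strict' | 'permissive';

function interpolateSketch(
  template: string,
  variables: Record<string, string>,
  mode: Mode = 'strict',
): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name: string) => {
    if (name in variables) return variables[name];
    // Strict mode fails on a missing variable; permissive leaves the
    // placeholder in place untouched.
    if (mode === 'strict') throw new Error(`Missing variable: ${name}`);
    return match;
  });
}

console.log(interpolateSketch('Hello {{ name }}!', { name: 'World' }));
// → Hello World!
```

In this model, `strict: true` on `renderPrompt` would surface typos in variable names immediately instead of shipping a prompt with a dangling placeholder.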
Each adapter produces a `{ body, provider, model }` object shaped for the target API. You handle the HTTP call.
```ts
// OpenAI
import { createPromptOpsKit } from 'promptopskit';

const kit = createPromptOpsKit({ sourceDir: './prompts' });

const { request } = await kit.renderPrompt({
  path: 'hello',
  provider: 'openai',
  variables: { name: 'World', app_context: 'Welcome screen' },
});
// request.body → { model, messages, temperature, reasoning_effort, ... }
```

```ts
// Anthropic — system is a top-level field, max_tokens defaults to 4096
const { request } = await kit.renderPrompt({
  path: 'hello',
  provider: 'anthropic',
  variables: { name: 'World', app_context: 'Welcome screen' },
});
// request.body → { model, messages, system, max_tokens, ... }
```

```ts
// Gemini — contents/systemInstruction/generationConfig structure
const { request } = await kit.renderPrompt({
  path: 'hello',
  provider: 'gemini',
  variables: { name: 'World', app_context: 'Welcome screen' },
});
// request.body → { contents, systemInstruction, generationConfig, ... }
```

```ts
// OpenRouter — same shape as OpenAI, different provider label
const { request } = await kit.renderPrompt({
  path: 'hello',
  provider: 'openrouter',
  variables: { name: 'World', app_context: 'Welcome screen' },
});
```

Provider adapters are also available as direct imports:

```ts
import { openaiAdapter } from 'promptopskit/openai';
import { anthropicAdapter } from 'promptopskit/anthropic';
import { geminiAdapter } from 'promptopskit/gemini';
import { openrouterAdapter } from 'promptopskit/openrouter';
```

Define environment and tier overrides in front matter. Precedence: base → environment → tier → runtime. Scalars and arrays are replaced, not merged.
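The replace-not-merge semantics can be sketched like this. This is an assumed model based on the description above, not promptopskit's actual merge code: objects merge key by key, while scalars and arrays at each layer replace whatever sits beneath them.

```typescript
// Sketch of layered override resolution: base → environment → tier.
// Objects merge per key; scalars and arrays replace outright.
// Hypothetical helper, not promptopskit's internal implementation.
type Settings = { [key: string]: unknown };

function isPlainObject(v: unknown): v is Settings {
  return typeof v === 'object' && v !== null && !Array.isArray(v);
}

function applyLayer(base: Settings, layer: Settings): Settings {
  const out: Settings = { ...base };
  for (const [key, value] of Object.entries(layer)) {
    out[key] = isPlainObject(out[key]) && isPlainObject(value)
      ? applyLayer(out[key] as Settings, value) // nested object: merge per key
      : value;                                  // scalar or array: replace
  }
  return out;
}

const base = { model: 'gpt-5.4', sampling: { temperature: 0.7 } };
const dev = { model: 'gpt-5.4-mini', sampling: { temperature: 0.2 } };
const pro = { model: 'gpt-5.4' };

// Apply layers in precedence order: base, then environment, then tier.
const resolved = [dev, pro].reduce(applyLayer, base);
console.log(resolved);
// → { model: 'gpt-5.4', sampling: { temperature: 0.2 } }
```

Note how the `dev` environment's lower temperature survives while the `pro` tier wins on `model`, since later layers only replace the keys they declare.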
```md
---
id: support.reply
schema_version: 1
provider: openai
model: gpt-5.4
sampling:
  temperature: 0.7
environments:
  dev:
    model: gpt-5.4-mini
    sampling:
      temperature: 0.2
  prod:
    model: gpt-5.4
tiers:
  free:
    model: gpt-5.4-mini
  pro:
    model: gpt-5.4
---
```

```ts
const result = await kit.renderPrompt({
  path: 'support/reply',
  provider: 'openai',
  environment: 'dev',
  tier: 'pro',
  variables: { user_message: '...' },
});
```

Share system instructions across prompts using `includes`. Included system instructions are prepended before local ones.
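Include resolution with prepending and circular detection can be sketched as a depth-first walk that tracks the active include chain. This is an illustrative model under assumed semantics, not promptopskit's actual resolver; the in-memory `files` map stands in for reading `.md` files from disk.

```typescript
// Sketch: resolve includes depth-first, prepending included system
// instructions before local ones and failing on cycles.
// Hypothetical helper, not promptopskit's real resolveIncludes.
function resolveIncludesSketch(
  files: Map<string, { includes: string[]; system: string }>,
  path: string,
  active: Set<string> = new Set(),
): string {
  if (active.has(path)) {
    throw new Error(`Circular include detected: ${[...active, path].join(' -> ')}`);
  }
  const file = files.get(path);
  if (!file) throw new Error(`Include not found: ${path}`);
  active.add(path);
  // Included instructions come first, local instructions last.
  const included = file.includes.map((p) => resolveIncludesSketch(files, p, active));
  active.delete(path);
  return [...included, file.system].join('\n');
}

const files = new Map([
  ['shared/tone.md', { includes: [], system: 'Be concise and friendly.' }],
  ['support/reply.md', { includes: ['shared/tone.md'], system: 'Handle support requests carefully.' }],
]);

console.log(resolveIncludesSketch(files, 'support/reply.md'));
// → "Be concise and friendly.\nHandle support requests carefully."
```

Tracking only the active chain (rather than every visited file) allows the same shared file to be included via two different paths while still rejecting true cycles.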
```md
---
id: support.reply
schema_version: 1
includes:
  - ./shared/tone.md
---

# System instructions

Handle support requests carefully.
```

```bash
# Scaffold starter prompts
promptopskit init [dir]

# Validate all .md files in a directory
promptopskit validate <dir> [--strict]

# Compile .md → JSON/ESM artifacts
promptopskit compile <src> <out> [--dry-run] [--format json|esm] [--no-clean]

# Render a prompt preview (auto-loads .test.yaml sidecar)
promptopskit render <file> [--env <name>] [--tier <name>] [--vars <file>] [--json]

# Print normalized asset as JSON
promptopskit inspect <file>

# Deploy AI agent instructions into your project
promptopskit skill [--target copilot|cursor|generic] [--force]
```

The `skill` command deploys a comprehensive instructions file so AI coding assistants (GitHub Copilot, Cursor, etc.) automatically understand how to create and manage prompts with promptopskit.
```bash
# Deploy for GitHub Copilot (default)
promptopskit skill
# → .github/instructions/promptopskit.instructions.md

# Deploy for Cursor
promptopskit skill --target cursor
# → .cursor/rules/promptopskit.mdc

# Deploy a generic instructions file
promptopskit skill --target generic
# → .ai/promptopskit-skill.md

# Overwrite an existing instructions file
promptopskit skill --force
```

The deployed file covers the prompt format, front matter schema, variable interpolation, includes, overrides, the TypeScript API, provider adapters, and project conventions — everything an AI agent needs to write correct prompts on the first try.
Render prompts from strings without files:

```ts
const result = await kit.renderPrompt({
  source: `---
id: inline
schema_version: 1
provider: openai
model: gpt-5.4
---

# Prompt template

Hello {{ name }}!`,
  provider: 'openai',
  variables: { name: 'World' },
});
```

Testing helpers are available from `promptopskit/testing`:

```ts
import { createMockAsset, createMockResolvedAsset, parseTestPrompt } from 'promptopskit/testing';

const asset = createMockAsset({ model: 'gpt-5.4' });
const resolved = createMockResolvedAsset();
const parsed = parseTestPrompt('---\nid: test\nschema_version: 1\n---\n\nHello');
```

`createPromptOpsKit(options)` creates a PromptOpsKit instance.
| Option | Type | Default | Description |
|---|---|---|---|
| `sourceDir` | `string` | — | Path to prompt `.md` files (required) |
| `compiledDir` | `string` | — | Path to compiled artifacts |
| `mode` | `'auto' \| 'compiled-only' \| 'source-only'` | `'auto'` | Resolution strategy |
| `cache` | `boolean` | `true` | Enable LRU cache with mtime invalidation |
`kit.renderPrompt(options)` renders a prompt for a specific provider and returns `{ resolved, request, warnings }`.
| Option | Type | Description |
|---|---|---|
| `path` | `string` | Prompt path (no extension), e.g. `'support/reply'` |
| `source` | `string` | Inline prompt source (alternative to `path`) |
| `provider` | `string` | `'openai'`, `'anthropic'`, `'gemini'`, `'openrouter'` |
| `variables` | `Record<string, string>` | Template variables |
| `environment` | `string` | Environment override name |
| `tier` | `string` | Tier override name |
| `history` | `Array<{ role, content }>` | Conversation history |
| `strict` | `boolean` | Fail on missing variables |
Lower-level methods for loading, resolving (includes + overrides), and validating individual prompts:

```ts
import { parsePrompt, interpolate, extractVariables, resolveIncludes, applyOverrides, validateAsset, getAdapter } from 'promptopskit';
```

Prompt files use YAML front matter with these fields:
| Field | Type | Description |
|---|---|---|
| `id` | `string` | Unique prompt identifier (required) |
| `schema_version` | `number` | Schema version, currently `1` |
| `provider` | `string` | `openai`, `anthropic`, `gemini` (or `google`), `openrouter`, `any` |
| `model` | `string` | Model name |
| `fallback_models` | `string[]` | Fallback model list |
| `reasoning` | `object` | `{ effort, budget_tokens }` |
| `sampling` | `object` | `{ temperature, top_p, frequency_penalty, presence_penalty, stop, max_output_tokens }` |
| `response` | `object` | `{ format, stream }` |
| `tools` | `array` | Tool references (string names or inline definitions) |
| `mcp` | `object` | MCP server references |
| `context` | `object` | `{ inputs, history }` — declare expected variables |
| `includes` | `string[]` | Paths to included prompt files |
| `environments` | `object` | Named environment overrides |
| `tiers` | `object` | Named tier overrides |
| `metadata` | `object` | `{ owner, tags, review_required, stable }` |
The `website/` directory contains a standalone marketing website for PromptOpsKit.