# pCompiler

pCompiler is a declarative prompt engineering framework that transforms high-level DSL definitions into optimized, model-specific LLM prompts. It bridges the gap between raw text prompting and structured, versioned, and secure prompt management.

## 🚀 Key Features

- **Declarative YAML DSL:** Define prompts as typed, versionable specifications.
- **Context Engineering (RAG):** Dynamic retrieval from static text, local files, vector stores, and web search.
- **Auto-Evals System:** Built-in automated metrics (`exact_match`, `regex`) and LLM-as-a-judge for quantitative prompt refinement.
- **Multi-Model Optimization:** Auto-reordering, semantic compression, and Chain-of-Thought (CoT) policies tailored for OpenAI, Anthropic, Gemini, and more.
- **Built-in Security:** Anti-injection policies, system/user separation, and input sanitization.
- **Deep Observability:** Full compilation traces, SHA-256 versioning, and reproducibility logs.
- **Extensible Plugin System:** Easily add custom backends, optimizers, or context providers.
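The SHA-256 versioning mentioned above can be illustrated with a few lines of standard-library Python. This is a minimal sketch of the concept, not pCompiler's actual implementation; `spec_fingerprint` is a hypothetical helper name, and the normalization rule is an assumption:

```python
import hashlib

def spec_fingerprint(spec_text: str) -> str:
    """Hypothetical helper: derive a reproducible version hash
    from the raw text of a prompt spec."""
    # Normalize line endings so the hash is stable across platforms
    # (assumed behavior, not confirmed from the pCompiler source).
    canonical = spec_text.replace("\r\n", "\n").strip()
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

spec = 'task: summarize\nmodel_target: gpt-4o\nversion: "1.2.0"\n'
print(spec_fingerprint(spec)[:12])  # short, reproducible fingerprint
```

Hashing the canonical spec text means any change to the DSL produces a new version identifier, which is what makes compilation traces reproducible.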

## 📦 Installation

```bash
# Clone the repository
git clone https://github.com/marcosjimenez/pCompiler.git
cd pCompiler

# Install in editable mode
pip install -e "."
```

## 🛠 Quick Start

1. **Define your prompt** (`summarize.yaml`):

   ```yaml
   task: summarize
   model_target: gpt-4o
   version: "1.2.0"

   context:
     sources:
       - type: static
         value: "The user is a legal expert."
       - type: local_file
         value: "knowledge_base.txt"
     max_total_tokens: 1500

   instructions:
     - text: "Summarize the contract focusing on liability clauses."
       priority: 100
     - text: "Use formal legal terminology."
       priority: 80

   evals:
     threshold: 0.9
     cases:
       - name: "Short liability"
         input: { input: "Clause 1: Party A is liable for..." }
         expected: "Liability assigned to Party A"
         metrics: [includes, llm_judge]
   ```

2. **Compile it:**

   ```bash
   # Generate the optimized payload
   pcompile compile summarize.yaml --target gpt-4o
   ```

3. **Validate it:**

   ```bash
   # Check for ambiguities, contradictions, and injection risks
   pcompile validate summarize.yaml
   ```

4. **Run evals:**

   ```bash
   # Run automated tests to ensure quality
   pcompile eval summarize.yaml --mock
   ```
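The automated metrics referenced in the eval spec (`includes`, `exact_match`, `regex`) can be approximated in plain Python. This is a hedged sketch of the general idea behind such metrics, not pCompiler's actual scoring code:

```python
import re

def exact_match(output: str, expected: str) -> float:
    """Score 1.0 only if output equals expected (ignoring edge whitespace)."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def includes(output: str, expected: str) -> float:
    """Score 1.0 if expected appears anywhere in output, case-insensitively."""
    return 1.0 if expected.lower() in output.lower() else 0.0

def regex(output: str, pattern: str) -> float:
    """Score 1.0 if the pattern matches anywhere in the output."""
    return 1.0 if re.search(pattern, output) else 0.0

# Scoring a model answer against the eval case above:
score = includes("Summary: Liability assigned to Party A.",
                 "Liability assigned to Party A")
```

A threshold (0.9 in the spec) would then be compared against the aggregate score across cases to decide pass or fail.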

## 📖 Documentation

### 🐍 Python API

```python
from pcompiler.compiler import PromptCompiler

compiler = PromptCompiler()

# Compile a file
result = compiler.compile_file("summarize.yaml", target="claude-3-5-sonnet")

print(f"Compiled Prompt:\n{result.prompt_text}")
print(f"API Payload: {result.payload}")
```
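The auto-reordering policy from the feature list can be pictured using the `priority` field from the DSL: higher-priority instructions are emitted first. A minimal sketch, assuming a descending-priority sort (the real optimizer applies model-specific policies and this is not its actual code):

```python
# Instructions in the same shape as the summarize.yaml example above.
instructions = [
    {"text": "Use formal legal terminology.", "priority": 80},
    {"text": "Summarize the contract focusing on liability clauses.", "priority": 100},
]

# Assumed rule: render highest-priority instructions first.
ordered = sorted(instructions, key=lambda i: i["priority"], reverse=True)
prompt_body = "\n".join(i["text"] for i in ordered)
print(prompt_body)
```

Keeping priorities in the spec, rather than relying on list order, lets the compiler reorder instructions per target model without editing the source DSL.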

## 🧪 Testing

```bash
python -m pytest tests/ -v
```

Created by the pCompiler Team. Optimize your prompts, automate your evaluations.
