▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
████████████████████████████████████████████████
██ ██████ ██████ ███████
██ ██◉ ◉██ ██ ██ ██
██ ██ ▽▽ ██ ██████ █████
██ ██ ◡◡ ██ ██ ██
██████ ██████ ██ ███████
████████████████████████████████████████████████
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
any cli implements · any cli validates
Autonomous sprint runner with multi-CLI validator ensemble.
One AI CLI implements. Others review. Majority vote. No single-model blindspot.
Not just for code. Lope works for engineering, business (marketing, finance, ops, consulting), and research (systematic reviews, protocols, academic work). The same validator loop that catches bugs in code also catches gaps in budgets, timeline assumptions, methodology rigor, and audience targeting. See Use cases for 9 worked examples across all three domains.
Zero external dependencies. Pure Python stdlib. MIT license.
```
You (in Claude Code): /lope-negotiate "Add JWT auth with refresh tokens"

Round 1  drafter proposes sprint doc (4 phases)
Round 1  opencode + vibe + gemini review... NEEDS_FIX (0.78)
         - Missing rate limiting on refresh endpoint
         - No test for token expiry edge case
Round 2  drafter revises... applies both fixes
Round 2  opencode + vibe + gemini review... PASS (0.93)

Saved: SPRINT-JWT-AUTH.md

You: /lope-execute SPRINT-JWT-AUTH.md

Phase 1  scaffold ................ PASS 0.95  12s
Phase 2  core-middleware ......... NEEDS_FIX 0.71
         - Token expiry check missing in middleware/auth.go:142
         → applying fix...
Phase 2  re-validate ............. PASS 0.89  41s
Phase 3  refresh-rotation ........ PASS 0.88  28s
Phase 4  integration-tests ....... PASS 0.94  19s

4/4 PASS | avg confidence 0.91 | 134s
```
Once lope is installed, your AI agent knows lope exists. You can just talk to it and it will invoke lope on your behalf when the request fits. No command memorization, no flag lookup.
```
You: Let's plan the auth refactor with JWT refresh rotation. I want
     gemini and opencode to both sign off before I touch any code.

Agent: [recognizes multi-phase work → invokes /lope-negotiate]

  Running: lope negotiate "JWT auth refactor with refresh rotation" \
             --domain engineering --max-rounds 3

  Drafter: claude-code · Reviewers: gemini-cli, opencode

  Round 1  drafter proposes sprint doc (3 phases)
  Round 1  gemini-cli + opencode review... NEEDS_FIX (0.74)
           - Refresh endpoint missing rate limiting
           - Phase 2 criteria are vague about refresh-token reuse
  Round 2  drafter revises
  Round 2  gemini-cli + opencode review... PASS (0.91)

  Saved: SPRINT-JWT-AUTH-REFRESH-ROTATION.md
```
Natural language → lope invocation examples:
| You say | Agent runs |
|---|---|
| "Draft 3 blog posts about lope for the launch" | lope negotiate "Draft 3 launch blog posts" --domain business |
| "Plan the Q4 marketing campaign carefully" | lope negotiate "Q4 marketing campaign" --domain business |
| "Scope the data ingestion rewrite" | lope negotiate "Data ingestion pipeline rewrite" --domain engineering |
| "Systematic review of post-training RL papers" | lope negotiate "Systematic review of post-training RL" --domain research |
| "Work out a GDPR compliance audit plan" | lope negotiate "GDPR compliance audit" --domain business |
The trigger words your agent watches for: plan, negotiate, scope, draft, roll out, work through, "carefully", "needs to be right", "don't break things". The agent pairs those with a multi-phase task shape (3+ steps, multiple files, cross-model-verification value) and invokes lope negotiate without you having to remember the slash command.
Explicit slash commands (/lope-negotiate, /lope-execute, /lope-audit) still work — they're the precise path when you want to control the exact arguments. Natural language is the lazy path when you just want to plan something.
When you talk to your agent, the using-lope auto-trigger skill fires. It's a meta-skill installed alongside the explicit slash commands. Its job is to recognize the shape of your request and invoke the right lope mode for you. If the request is a single edit, a trivial fix, or pure conversation, using-lope stays out of the way — it's specifically scoped to multi-phase consequential work. You will not get a sprint negotiation for "rename this variable".
See skills/using-lope/SKILL.md for the full trigger logic and anti-patterns.
Open your AI agent (Claude Code, Codex, Cursor, Gemini CLI, OpenCode, GitHub Copilot CLI — whichever you already use) and paste this prompt:
Read https://raw.githubusercontent.com/traylinx/lope/main/INSTALL.md and follow the instructions to install lope on this machine natively.
That's it. Your agent fetches a single markdown file, follows six short steps, and reports back when lope is live. The install recipe is CLI-agnostic — it writes lope's slash commands into each host's native command directory using the format that host expects. Restart your CLI once to pick up the new slash commands.
Requirements: git, python3 ≥ 3.9, bash ≥ 3.2. That's all.
What gets installed:
| Host | Path | Format |
|---|---|---|
| Claude Code | `~/.claude/skills/lope*/` | skill dirs |
| Codex | `~/.codex/skills/lope*/` | skill dirs |
| Gemini CLI | `~/.gemini/commands/lope/*.toml` | TOML commands |
| OpenCode | `~/.config/opencode/command/lope*.md` | flat markdown |
| Cursor | `~/.cursor/agents/lope*.md` | flat markdown |
Hosts you don't have installed are skipped silently.
```sh
git clone --depth 1 https://github.com/traylinx/lope.git ~/.lope
~/.lope/install
```

Target a single host:

```sh
~/.lope/install --host codex
~/.lope/install --host gemini
```

Then add a shell alias so you can just type `lope`:

```sh
echo "alias lope='PYTHONPATH=~/.lope python3 -m lope'" >> ~/.zshrc
```

Check what validators lope found on your machine:

```sh
lope status
```

Pick which ones to use:

```sh
lope configure
```

```
NEGOTIATE          VALIDATE            EXECUTE           AUDIT
─────────          ────────            ───────           ─────
LLM drafts   ───>  Other CLIs    ───>  Phase by     ───> Scorecard
sprint doc         review & vote       phase with        + journal
                   (majority vote)     retry on
                                       NEEDS_FIX
           <─── NEEDS_FIX ────┘
```
Negotiate: An LLM drafts a structured sprint doc (phases, goals, criteria). Validators push back on scope creep, missing edge cases, unverified assumptions. The LLM revises until PASS or max rounds.
Execute: Phase-by-phase implementation with validation after each phase. PASS advances. NEEDS_FIX retries with specific fix instructions (up to 3 attempts). FAIL escalates to you.
Audit: Scorecard with per-phase verdicts, confidence scores, duration, and overall status.
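The execute retry loop described above is small enough to sketch. The names here (`run_phase`, the `implement`/`validate` callables) are illustrative stand-ins for lope's internals, not its real API:

```python
def run_phase(phase, implement, validate, max_attempts=3):
    """Validator-in-the-loop retry for one sprint phase.

    PASS advances, NEEDS_FIX retries with the validator's fix list
    (up to max_attempts), FAIL or exhausted retries escalates to you.
    implement/validate stand in for the real CLI subprocesses.
    """
    fixes = []
    for attempt in range(1, max_attempts + 1):
        implement(phase, fixes)            # drafter applies any pending fixes
        status, fixes = validate(phase)    # validators return verdict + fixes
        if status == "PASS":
            return "PASS"
        if status == "FAIL":
            break                          # hard failure: stop retrying
    return "ESCALATE"                      # hand the phase back to the human
```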
Three modes: lope negotiate, lope execute, lope audit.
Auto-detected built-in CLIs — run `lope status` and lope finds whatever is on your PATH:

| CLI | Binary | Command |
|---|---|---|
| Claude Code | `claude` | `claude --print` |
| OpenCode | `opencode` | `opencode run --format json` |
| Gemini CLI | `gemini` | `gemini --prompt` |
| Codex (OpenAI) | `codex` | `codex exec --quiet` |
| Mistral Vibe | `vibe` | `vibe run "{prompt}"` |
| Aider | `aider` | `aider --message --no-git --yes` |
| Ollama | `ollama` | local, zero auth |
| Goose (Block) | `goose` | `goose run --text` |
| Open Interpreter | `interpreter` | `interpreter --fast -y` |
| llama.cpp | `llama-cli` | fastest local inference |
| GitHub Copilot CLI | `gh copilot` | `gh copilot suggest` |
| Amazon Q | `q` | `q chat` |
You need at least one. Install whatever you already use.
A few lines of JSON in `~/.lope/config.json`:

```json
{
  "version": 1,
  "validators": ["claude", "ollama-qwen"],
  "providers": [
    {
      "name": "ollama-qwen",
      "type": "subprocess",
      "command": ["ollama", "run", "qwen3:8b", "{prompt}"]
    }
  ]
}
```

Two provider types cover everything:
| Type | Use for |
|---|---|
| `subprocess` | CLI tools — Ollama, Goose, llama.cpp, any binary |
| `http` | API endpoints — OpenAI, Anthropic, Groq, self-hosted |
HTTP example (Anthropic):

```json
{
  "name": "anthropic-api",
  "type": "http",
  "url": "https://api.anthropic.com/v1/messages",
  "headers": {
    "x-api-key": "${ANTHROPIC_API_KEY}",
    "anthropic-version": "2023-06-01",
    "Content-Type": "application/json"
  },
  "body": {
    "model": "claude-sonnet-4-5",
    "max_tokens": 4096,
    "messages": [{"role": "user", "content": "{prompt}"}]
  },
  "response_path": "content.0.text"
}
```

`response_path` is a dot-path into the JSON response. The only contract: the response must contain a `---VERDICT---`...`---END---` block. Add a `prompt_wrapper` if the model needs explicit instructions:
```json
{
  "name": "my-llm",
  "prompt_wrapper": "Respond with a VERDICT block at the end:\n{prompt}",
  "type": "http",
  ...
}
```

Security: subprocess providers run with `shell=False`. `${VAR}` expansion is forbidden in command args and URLs (prevents key leakage into `ps` output or server logs). HTTP body encoding prevents injection.
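A dot-path like `content.0.text` can be resolved with a few lines of stdlib Python. This is a sketch of the idea, assuming numeric segments index into lists, not lope's implementation:

```python
import json

def resolve_path(obj, path):
    """Walk a dot-path like 'content.0.text' through parsed JSON.

    Numeric segments index into lists; everything else is a dict key.
    """
    for segment in path.split("."):
        if isinstance(obj, list):
            obj = obj[int(segment)]
        else:
            obj = obj[segment]
    return obj

response = json.loads('{"content": [{"type": "text", "text": "PASS"}]}')
resolve_path(response, "content.0.text")  # → "PASS"
```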
```python
from lope import Negotiator, PhaseExecutor, Auditor, ValidatorPool
from lope.validators import ClaudeCodeValidator, OpencodeValidator

pool = ValidatorPool(
    validators=[ClaudeCodeValidator(), OpencodeValidator()],
    primary="claude",
)
```

After install, these work in any supported CLI host:
| Command | What it does |
|---|---|
| `/lope-negotiate` | Draft a sprint doc with multi-round validator review |
| `/lope-execute` | Run sprint phases with validator-in-the-loop retry |
| `/lope-audit` | Generate a scorecard from sprint results |
`lope status`: show detected CLIs and current config.

`lope configure`: interactive validator picker; auto-detects installed CLIs.
```sh
lope negotiate "Add rate limiting to the API gateway" \
  --out SPRINT-RATE-LIMIT.md \
  --max-rounds 3 \
  --context "Express.js, Redis"
```

Pass `--domain business` or `--domain research` to switch the validator role and review criteria.

```sh
lope execute SPRINT-RATE-LIMIT.md
```

`lope audit` generates the scorecard; `--no-journal` skips writing to the journal file.
| Variable | Default | Description |
|---|---|---|
| `LOPE_HOME` | `~/.lope` | Config and journal directory |
| `LOPE_WORKDIR` | Current directory | Working directory for validators |
| `LOPE_TIMEOUT` | `480` | Validator timeout (seconds) |
| `LOPE_CAVEMAN` | `full` | Token compression: `full`, `lite`, or `off` |
| `LOPE_LLM_URL` | (unset) | Optional fallback — hosted OpenAI-compatible endpoint, used only if the primary validator does not support drafting. Normally you do not need this. |
| `LOPE_LLM_MODEL` | `gpt-4o-mini` | Model name when the `LOPE_LLM_URL` fallback is used |
| `LOPE_LLM_API_KEY` | (unset) | Bearer token for the fallback endpoint; falls back to `OPENAI_API_KEY` |
| `LOPE_RUN_LOCK` | (on) | Set to `off` to disable the run lock (CI, deliberate parallelism) |
| `LOPE_RUN_LOCK_WAIT` | (unset) | Seconds to block when another lope run holds the lock; `0` = wait forever. Default: fail fast |
| `LOPE_RUN_LOCK_PATH` | `$LOPE_HOME/run.lock` | Override the lockfile path (used by tests) |
No separate LLM required. Lope's premise is any CLI implements, any CLI validates. Drafting is just the primary CLI implementing.
`lope negotiate` calls the primary validator (`claude`, `opencode`, `gemini-cli`, `codex`, or `aider`) as a subprocess to draft the sprint doc, then routes the draft to the other validators for review. You only need to set `LOPE_LLM_URL` if your primary validator cannot draft (e.g. a custom HTTP provider that only reviews).
Ensemble (`parallel: true`, default): all validators run concurrently. Majority vote; any FAIL vetoes; ties resolve to NEEDS_FIX.

Fallback (`parallel: false`): the primary runs first, with the next validator tried only on an infra error. The first PASS/NEEDS_FIX/FAIL verdict halts the chain.
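The ensemble decision rule is small enough to sketch; the function name is illustrative, not lope's internals:

```python
from collections import Counter

def ensemble_verdict(verdicts):
    """Combine per-validator verdicts: any FAIL vetoes, majority wins,
    and a PASS/NEEDS_FIX tie resolves conservatively to NEEDS_FIX."""
    if "FAIL" in verdicts:
        return "FAIL"
    counts = Counter(verdicts)
    if counts["PASS"] > counts["NEEDS_FIX"]:
        return "PASS"
    return "NEEDS_FIX"

ensemble_verdict(["PASS", "PASS", "NEEDS_FIX"])  # → "PASS"
ensemble_verdict(["PASS", "NEEDS_FIX"])          # → "NEEDS_FIX" (tie)
ensemble_verdict(["PASS", "PASS", "FAIL"])       # → "FAIL" (veto)
```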
Lope holds a file lock ($LOPE_HOME/run.lock) for the lifetime of every negotiate and execute command. Without it, two parallel runs each spawn 3–4 validator CLIs, fight over the same auth tokens, and stall out with fake INFRA_ERROR timeouts.
Default behavior: a second caller fails fast with exit 75 (EX_TEMPFAIL) and a clear message showing the holder's pid and command.
```sh
# Queue the second caller instead of failing — block up to 5 minutes
LOPE_RUN_LOCK_WAIT=300 lope negotiate "second goal"

# Disable the lock entirely (CI, tests, deliberate parallelism)
LOPE_RUN_LOCK=off lope execute SPRINT.md
```

Read-only commands (`status`, `configure`, `audit`, `docs`, `version`, `install`) do not touch the lock.
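The fail-fast behavior can be reproduced with a plain `flock`-based lockfile. This is a sketch under the assumption that lope does something similar; exit code 75 is `EX_TEMPFAIL` from `sysexits.h`:

```python
import fcntl
import os
import sys

def acquire_run_lock(path):
    """Take an exclusive non-blocking lock; exit 75 if already held."""
    fd = open(path, "w")
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print(f"lope: another run holds {path}", file=sys.stderr)
        sys.exit(75)  # EX_TEMPFAIL: the caller may retry later
    fd.write(str(os.getpid()))  # record the holder's pid for diagnostics
    fd.flush()
    return fd  # keep the handle open; the lock releases when it closes
```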
By default, lope tells validators to respond in terse fragments — drop articles, filler, hedging. Code, paths, line numbers, and error messages stay exact. This cuts validator response tokens by 50-65%, which matters when you're running N validators × M phases × up to 3 retries per sprint.
```sh
LOPE_CAVEMAN=off lope negotiate "..."    # verbose responses
LOPE_CAVEMAN=lite lope negotiate "..."   # drops filler only, keeps full sentences
```

Adapted from JuliusBrussee/caveman (MIT).
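As a toy illustration of what lite-style compression might do (the filler word list here is invented, not lope's actual rules):

```python
import re

# Hypothetical filler words; lite mode keeps sentences intact otherwise.
FILLER = r"\b(?:just|really|very|basically|actually|simply)\b"

def lite_compress(text):
    """Drop filler adverbs, then collapse the double spaces left behind."""
    return re.sub(r"  +", " ", re.sub(FILLER, "", text)).strip()

lite_compress("The test really just needs a very small fixture.")
# → "The test needs a small fixture."
```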
Every validator response must contain:
```
---VERDICT---
status: PASS | NEEDS_FIX | FAIL
confidence: 0.0-1.0
rationale: 1-3 sentences
required_fixes:
  - fix 1
  - fix 2
---END---
```
confidence < 0.7 on a PASS is automatically demoted to NEEDS_FIX. Missing or malformed block → INFRA_ERROR (never raises, falls through to next validator).
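Parsing that block and applying the demotion rule takes only a regex; a sketch, not lope's actual parser:

```python
import re

VERDICT_RE = re.compile(r"---VERDICT---\s*(.*?)\s*---END---", re.DOTALL)

def parse_verdict(response):
    """Extract the verdict block and demote low-confidence PASSes.

    A missing block raises ValueError, which upstream would treat as
    INFRA_ERROR rather than a validator vote.
    """
    m = VERDICT_RE.search(response)
    if not m:
        raise ValueError("missing verdict block")
    block = m.group(1)
    status = re.search(r"status:\s*(\w+)", block).group(1)
    confidence = float(re.search(r"confidence:\s*([\d.]+)", block).group(1))
    if status == "PASS" and confidence < 0.7:
        status = "NEEDS_FIX"  # low-confidence PASS is demoted
    return status, confidence

parse_verdict("---VERDICT---\nstatus: PASS\nconfidence: 0.65\n---END---")
# → ("NEEDS_FIX", 0.65)
```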
Lope is not just for code. The --domain flag switches the validator role, artifact labels, and review task for the context you're working in:
- `engineering` (default) — code, software, infra, devops
- `business` — marketing, finance, ops, consulting, management
- `research` — studies, systematic reviews, academic work
Nine worked examples below. Each is a real sprint goal you can paste into lope negotiate.
```sh
lope negotiate "Add JWT auth with refresh token rotation" --domain engineering
lope negotiate "Migrate from REST to gRPC for internal services" --domain engineering
lope negotiate "Rate-limit the public API gateway — per-user + per-IP" --domain engineering
```

Validators check: file paths, test coverage, edge cases, error handling, backward compatibility, rollback plan.
```sh
lope negotiate "Q4 product launch campaign for SaaS enterprise tier" --domain business
lope negotiate "LinkedIn thought leadership sequence — 8 posts over 4 weeks" --domain business
lope negotiate "Rebranding sprint: logo, site, positioning, migration" --domain business
```

Validators check: target audience, message-market fit, channel mix, success metrics, timeline realism, budget allocation.
```sh
lope negotiate "Q2 budget rebuild across 4 cost centers with runway analysis" --domain business
lope negotiate "Month-end close process redesign — target 3-day close" --domain business
lope negotiate "R&D tax credit claim for FY2026 — scoping + documentation" --domain business
```

Validators check: reconciliation gaps, audit trail, control points, compliance, variance analysis, stakeholder sign-offs.
```sh
lope negotiate "Reorg engineering into 3 squads with clear ownership boundaries" --domain business
lope negotiate "Onboarding overhaul — first 90 days for new hires" --domain business
lope negotiate "Q1 OKR planning across 5 teams with cross-team dependencies" --domain business
```

Validators check: dependencies, stakeholder coverage, rollout risk, rollback plan, success metrics, communication plan.
```sh
lope negotiate "Systematic review of post-training RL techniques for small LMs" --domain research
lope negotiate "Ethnographic study of remote team collaboration — 12 week protocol" --domain research
lope negotiate "Replication study: attention-head pruning claims in recent paper" --domain research
```

Validators check: methodology rigor, sampling bias, reproducibility, ethical considerations, pre-registration, data management plan.
```sh
lope negotiate "Digital transformation scoping for retail client — 6 week discovery" --domain business
lope negotiate "Technology due diligence for $50M acquisition — 10 day turnaround" --domain business
lope negotiate "Strategic roadmap for CTO — 18 month technical strategy" --domain business
```

Validators check: client success criteria, stakeholder mapping, deliverable quality, timeline realism, scope boundaries.
```sh
lope negotiate "GDPR compliance audit for data pipeline — retention, SAR, deletion" --domain business
lope negotiate "SOC 2 Type II readiness sprint — 12 week preparation" --domain business
lope negotiate "Employee handbook redesign — remote-first policies" --domain business
```

Validators check: regulatory coverage, risk assessment, control mapping, documentation completeness, evidence of enforcement.
```sh
lope negotiate "Bootcamp curriculum redesign — full-stack 16 weeks" --domain business
lope negotiate "Internal engineering onboarding — first 30 days" --domain business
lope negotiate "Workshop: shipping your first LLM-powered feature — 3 hours" --domain business
```

Validators check: learning objectives, progression, assessment, practical exercises, prerequisites, time budgets.
```sh
lope negotiate "Migrate from Jenkins to GitHub Actions across 12 repos" --domain engineering
lope negotiate "Zero-downtime database migration from Postgres 13 to 16" --domain engineering
lope negotiate "Kubernetes cluster hardening — secrets, RBAC, network policies" --domain engineering
```

Validators check: blast radius, rollback plan, monitoring coverage, alerting, runbook completeness.
The validator ensemble doesn't care whether it's reviewing a Python diff or a Q4 marketing brief. What it cares about is: is the plan specific, does it have measurable criteria, is it complete, can the reviewer poke a hole in it? That's domain-agnostic. Lope's --domain switch tunes the validator's role prompt ("you are a senior marketing lead" vs "you are a senior systems engineer") and swaps the artifact labels (**Deliverables:** / **Success Metrics:** for business, **Files:** / **Tests:** for engineering, **Artifacts:** / **Validation Criteria:** for research). The verdict schema, the retry loop, the evidence gate, the caveman mode — all identical across domains.
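That switch amounts to a small lookup table; a sketch using the labels named in this section (the `research` role string is our assumption, and the structure is illustrative, not lope's internals):

```python
# Per-domain validator role and artifact labels, as described above.
DOMAIN_PROFILES = {
    "engineering": {
        "role": "senior systems engineer",
        "labels": ("Files:", "Tests:"),
    },
    "business": {
        "role": "senior marketing lead",
        "labels": ("Deliverables:", "Success Metrics:"),
    },
    "research": {
        "role": "senior researcher",  # assumed wording
        "labels": ("Artifacts:", "Validation Criteria:"),
    },
}

def validator_role(domain):
    """Look up the role prompt fragment for a --domain value."""
    return DOMAIN_PROFILES[domain]["role"]
```

Everything else (verdict schema, retry loop, evidence gate) stays identical across domains, so only this table varies.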
See docs/samples.md for 8 end-to-end conversation walkthroughs that show the natural-language use pattern across all three domains.
Does Lope need API keys? No. Lope calls AI CLIs as subprocesses — each manages its own auth.
What if I only have one AI CLI installed? Works fine. You lose cross-model diversity but keep the structured sprint discipline.
What if validators disagree? Ensemble: majority wins. PASS vs NEEDS_FIX tie → NEEDS_FIX (conservative). Any FAIL vetoes.
Do I have to type /lope-negotiate every time?
No. Just describe what you want in natural language — "plan the auth refactor", "negotiate a Q4 campaign", "scope the data migration". Your AI agent recognizes the shape and runs lope negotiate for you. The using-lope auto-trigger skill installed alongside the slash commands handles the mapping. See docs/samples.md for 8 end-to-end walkthroughs.
Can I get lope to do this automatically for some tasks and not others?
Yes — that's the whole design. The using-lope skill's "When NOT to trigger" list is deliberately load-bearing: single-edit tasks, trivial ops, and pure conversation are skipped. Only consequential multi-phase work triggers lope. If you ever find lope firing when you didn't want it, the skill's trigger rules need tuning, not the agent.
Should I write a wrapper script around lope?
No. Lope is already a CLI. Just invoke lope <mode> <args> directly. No Python wrappers, no bash harnesses, no "lope_runner.sh". The whole point of the multi-CLI ensemble is that lope IS the harness — anything that wraps it is reinventing the thing you already have.
Can I use it in CI?
```sh
PYTHONPATH=~/.lope python3 -m lope execute SPRINT-FEATURE-X.md || exit 1
```

Non-interactive environments auto-select defaults and never block on stdin.
```sh
git clone https://github.com/traylinx/lope.git
cd lope
./install
PYTHONPATH=. python3 -m lope version
PYTHONPATH=. python3 -m lope status
```

Main areas: new validators, better prompts, sprint doc format, CI/CD integrations.
Cutting a release? Follow docs/RELEASING.md. It has the full checklist, the SemVer rules lope uses, and the version-bumper script that keeps all 6 version strings in sync.
MIT. See LICENSE.
Built by Sebastian Schkudlara. Caveman mode adapted from JuliusBrussee/caveman.