Get a second (and third, and fourth) opinion on your implementation plan before you write a line of code. debate sends your plan to multiple AI reviewers in parallel, synthesizes their feedback, has them argue out contradictions, and produces a consensus verdict.
v2 is a ground-up rewrite. Everything now runs through acpx — a single unified CLI that talks to any coding agent. No more managing individual CLIs, session files, or API keys per provider. One config, any combination of models.
```shell
# 1. Install the plugin
/plugin marketplace add STRML/cc-debate
/plugin install debate@cc-debate

# 2. Install acpx
npm install -g acpx@latest

# 3. Configure your review panel (interactive)
/debate:acpx-setup

# 4. Run a review
/debate:all
```

Restart Claude Code after installing the plugin.
```text
You: /debate:all

Claude: Running parallel review via acpx...

codex   → agent: codex (120s)
gemini  → agent: gemini (240s)
mercury → agent: mercury (120s)

## Codex Review — Round 1
The retry logic in Step 4 doesn't handle the case where...
VERDICT: REVISE

## Gemini Review — Round 1
Missing error handling when the API is unavailable...
VERDICT: REVISE

## Mercury Review — Round 1
Unstated assumption: this plan assumes the temp directory is writable...
VERDICT: REVISE

## Synthesis
Unanimous: all reviewers flagged missing error handling
Unique to Codex: retry logic gap in Step 4
Unique to Mercury: temp directory writability assumption

## Final Report
VERDICT: REVISE — 3 issues to address

Claude: Revising plan...
Claude: Re-submitting to all reviewers...

## Codex Review — Round 2 → VERDICT: APPROVED ✅
## Gemini Review — Round 2 → VERDICT: APPROVED ✅
## Mercury Review — Round 2 → VERDICT: APPROVED ✅

VERDICT: APPROVED — unanimous after 2 rounds
```
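Under the hood, the loop the transcript demonstrates is simple: fan the plan out, collect verdicts, revise, repeat until unanimous or the round cap is hit. A toy sketch (illustrative only, not the plugin's implementation; the reviewers here are stand-in shell functions):

```shell
# Toy sketch of the debate loop. "optimist" always approves; "skeptic"
# approves only once the plan has been revised.
review() {  # $1 = reviewer name, $2 = plan text
  case "$1" in
    optimist) echo "APPROVED" ;;
    skeptic)  case "$2" in *revised*) echo "APPROVED" ;; *) echo "REVISE" ;; esac ;;
  esac
}

plan="my plan"
result=""
for round in 1 2 3; do              # /debate:all iterates up to 3 rounds
  all_ok=yes
  for reviewer in optimist skeptic; do
    [ "$(review "$reviewer" "$plan")" = "APPROVED" ] || all_ok=no
  done
  if [ "$all_ok" = yes ]; then
    result="APPROVED — unanimous after $round rounds"
    break
  fi
  plan="$plan (revised)"            # stand-in for Claude's revision step
done
echo "${result:-REVISE — consensus not reached}"
# → APPROVED — unanimous after 2 rounds
```

The real runner executes reviewers in parallel and adds a synthesis/debate phase between rounds; the sequential loop above only shows the verdict-driven iteration.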
Required:

- acpx — `npm install -g acpx@latest`
- jq — `brew install jq` / `apt install jq`
- The agent CLIs for whatever reviewers you want (see Supported Agents below)
Optional:
- opencode — only needed for OpenRouter model access (DeepSeek, Mercury, Kimi, etc.) or LiteLLM proxy access (local models, self-hosted, etc.)
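A quick way to confirm the required tools are on your PATH (a hypothetical helper, not part of the plugin):

```shell
# Hypothetical prerequisite checker (not shipped with cc-debate).
check_prereqs() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
  else
    echo "all prerequisites found"
  fi
}

check_prereqs sh    # once installed, run: check_prereqs acpx jq
```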
These are the reviewer backends you can use. Mix and match — pick 2-4 for a useful review panel.
These have native Agent Client Protocol support. Install the CLI, and acpx handles the rest.
| Agent name | Model | Install |
|---|---|---|
| `codex` | OpenAI Codex | `npm install -g @openai/codex` + `OPENAI_API_KEY` |
| `gemini` | Google Gemini 2.x/3.x | `npm install -g @google/gemini-cli` + `gemini auth` + `GEMINI_API_KEY` ¹ |
| `claude` | Claude (Opus/Sonnet) | Already installed — you're running it now |
| `kimi` | Kimi (Moonshot AI) | See Kimi CLI docs |
| `kiro` | Kiro (AWS) | See Kiro docs |
| `qwen` | Qwen Code | See Qwen Code docs |
| `cursor` | Cursor | Install Cursor IDE |
| `copilot` | GitHub Copilot | `gh extension install github/gh-copilot` |
| `opencode` | OpenCode (default model) | `npm install -g opencode-ai` |
| `kilocode` | Kilocode | `npx @kilocode/cli` |
| `droid` | Factory Droid | See acpx docs |
| `iflow` | iFlow | See acpx docs |
| `pi` | Pi Coding Agent | See acpx docs |
| `openclaw` | OpenClaw | See acpx docs |
¹ Gemini note: The Gemini CLI's stored OAuth works for direct CLI use but not for acpx's non-interactive subprocess mode. You need a separate API key. Get one free at aistudio.google.com/apikey, then add it to `~/.claude/settings.json`:

```json
"env": { "GEMINI_API_KEY": "AIza..." }
```
Claude note: Using `claude` as a reviewer means Claude reviewing its own plan — useful for a fresh-context skeptical read, but not truly independent. For independent perspectives, use non-Claude agents. `invoke-acpx.sh` automatically handles the nested-session guard (`CLAUDECODE`) required to run Claude as a subprocess.
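The guard itself is environment hygiene: stripping the marker variable before spawning the subprocess. That mechanism can be demonstrated in isolation (illustrative only; the real logic lives in `invoke-acpx.sh`):

```shell
# Claude Code sets CLAUDECODE in its environment. One way a nested-session
# guard can work (an assumption, not the script's verified internals) is to
# clear that marker before spawning the subprocess:
export CLAUDECODE=1
guarded=$(env -u CLAUDECODE sh -c 'echo "CLAUDECODE=${CLAUDECODE:-unset}"')
echo "$guarded"   # → CLAUDECODE=unset
```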
For models that don't have a native acpx agent — DeepSeek, Mercury, Mixtral, Kimi K2, GPT variants, or anything else on openrouter.ai — you can route through OpenRouter using opencode as the bridge:
```text
acpx → opencode (custom agent) → OpenRouter API → any model
```
Prerequisites:

- opencode — `npm install -g opencode-ai`
- OpenRouter API key from openrouter.ai/settings/keys
Setup (one time per model):
Tip: Just run `/debate:acpx-setup` — it does all of this for you interactively.
Or manually:
```shell
# 1. Create a wrapper directory
mkdir -p ~/.acpx/agents/mercury

# 2. Write the opencode config
cat > ~/.acpx/agents/mercury/.opencode.json << 'EOF'
{
  "provider": {
    "openrouter": { "apiKey": "sk-or-v1-..." }
  },
  "agents": {
    "coder": { "model": "openrouter/inception/mercury-2" }
  }
}
EOF
chmod 600 ~/.acpx/agents/mercury/.opencode.json

# 3. Write the launch script
cat > ~/.acpx/agents/mercury/start.sh << 'EOF'
#!/bin/bash
export OPENCODE_CONFIG_CONTENT='{"model":"openrouter/inception/mercury-2"}'
exec opencode acp "$@"
EOF
chmod +x ~/.acpx/agents/mercury/start.sh

# 4. Register with acpx (create/merge into ~/.acpx/config.json)
# Add this entry:
# { "agents": { "mercury": { "command": "/Users/you/.acpx/agents/mercury/start.sh" } } }
```

Then add to `~/.claude/debate-acpx.json`:
```json
"mercury": {
  "agent": "mercury",
  "timeout": 120,
  "model_id": "inception/mercury-2",
  "system_prompt": "You are The Contrarian..."
}
```

Popular OpenRouter models to consider:
| Model | OpenRouter ID | Notes |
|---|---|---|
| DeepSeek R1 | `deepseek/deepseek-r1` | Strong reasoning |
| Inception Mercury | `inception/mercury-2` | Fast, strong coder |
| Kimi K2.5 | `moonshotai/kimi-k2.5` | 1M context |
| Mistral Large | `mistralai/mistral-large` | Good architecture instincts |
| GPT-4.1 | `openai/gpt-4.1` | Broad coverage |
| Gemini 2.5 Pro | `google/gemini-2.5-pro` | Strong if you don't have Gemini CLI |
For local models (Ollama, LM Studio), self-hosted endpoints, or any provider that LiteLLM supports, you can route through a LiteLLM proxy using opencode as the bridge:
```text
acpx → opencode (custom agent) → LiteLLM proxy → any model
```
Prerequisites:

- opencode — `npm install -g opencode-ai`
- A running LiteLLM proxy (`pip install litellm[proxy]` + `litellm --config config.yaml`)
Model alias requirement:
opencode resolves model IDs against its built-in OpenAI model list, so the model name you give it must be a known OpenAI model name (e.g. `gpt-4o-mini`). Configure LiteLLM to route that alias to your actual model:
```yaml
# LiteLLM config.yaml
model_list:
  - model_name: gpt-4o-mini        # alias opencode uses
    litellm_params:
      model: ollama/deepseek-r1    # your actual model
      api_base: http://localhost:11434
```

Setup (one time per agent):
Tip: Just run `/debate:acpx-setup` — it does all of this for you interactively.
Or manually:
```shell
# Run the helper script
bash ~/.claude/debate-scripts/create-litellm-agent.sh \
  deepseek \
  http://localhost:8200/v1 \
  gpt-4o-mini \
  sk-litellm-optional-key
```

Then add to `~/.claude/debate-acpx.json`:
```json
"deepseek": {
  "agent": "deepseek",
  "timeout": 120,
  "model_id": "deepseek-r1 via LiteLLM",
  "system_prompt": "You are The Pragmatist..."
}
```

Arguments to `create-litellm-agent.sh`:
| Arg | Required | Example |
|---|---|---|
| `name` | Yes | `deepseek` |
| `base_url` | Yes | `http://localhost:8200/v1` |
| `model_alias` | Yes | `gpt-4o-mini` (must be a known OpenAI model name) |
| `api_key` | No | `sk-litellm-abc123` (omit if proxy has no auth) |
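The alias indirection described above is just a name substitution at the proxy boundary; a toy sketch (not LiteLLM's actual routing code, using the model names from the example config):

```shell
# Toy substitution table mirroring model_list in the LiteLLM config above.
resolve_alias() {
  case "$1" in
    gpt-4o-mini) echo "ollama/deepseek-r1" ;;   # alias → actual backend model
    *)           echo "no route for $1"; return 1 ;;
  esac
}

resolve_alias gpt-4o-mini   # → ollama/deepseek-r1
```

opencode only ever sees `gpt-4o-mini`; the proxy dispatches to whatever backend the alias maps to.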
Reviewers live in `~/.claude/debate-acpx.json`. This is the only file you need to edit to change your panel.
```json
{
  "reviewers": {
    "codex": {
      "agent": "codex",
      "timeout": 120,
      "system_prompt": "You are The Executor — find what breaks at runtime. Focus on shell correctness, exit codes, race conditions, file I/O."
    },
    "gemini": {
      "agent": "gemini",
      "timeout": 240,
      "system_prompt": "You are The Architect — review for structural integrity. Focus on approach validity, over-engineering, missing phases, graceful degradation."
    },
    "mercury": {
      "agent": "mercury",
      "timeout": 120,
      "model_id": "inception/mercury-2",
      "system_prompt": "You are The Contrarian — question everything. Focus on hidden assumptions, overlooked alternatives, failure modes under load."
    }
  }
}
```

| Field | Required | Description |
|---|---|---|
| `agent` | Yes | acpx agent name (see Supported Agents) |
| `timeout` | No | Seconds before the review is killed. Default: 120. Use 240-300 for large/slow agents. |
| `system_prompt` | No | Persona sent as the prompt prefix. Omit for generic reviewer behavior. |
| `model_id` | No | For OpenRouter agents — the underlying model ID (e.g. `inception/mercury-2`). Shown in the summary. |
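Since jq is already a plugin prerequisite, a quick sanity check over this file is easy to script (a hypothetical check, not shipped with the plugin; shown against a sample file rather than your real config):

```shell
# Hypothetical config check: every reviewer must carry the required "agent" key.
sample=$(mktemp)
cat > "$sample" << 'EOF'
{ "reviewers": { "codex": { "agent": "codex", "timeout": 120 } } }
EOF
valid=$(jq -e '.reviewers | to_entries | all(.value.agent != null)' \
          "$sample" >/dev/null && echo yes || echo no)
echo "config valid: $valid"   # → config valid: yes
rm -f "$sample"
```

Swap `$sample` for `~/.claude/debate-acpx.json` to check your real panel.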
The value of multiple reviewers is getting genuinely different lenses. Some ideas:
- The Executor — shell correctness, exit codes, race conditions, file I/O, command availability
- The Architect — structural integrity, approach validity, over-engineering, missing phases, graceful degradation
- The Skeptic — unstated assumptions, unhappy paths, second-order failures, security
- The Contrarian — questions conventional wisdom, hidden assumptions, alternatives everyone overlooks, failure modes under load
- The Pragmatist — what will actually ship, unnecessary complexity, missing happy path steps, places that assume competence that may not exist
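Any of these drops into `~/.claude/debate-acpx.json` as a reviewer entry. For example, a Skeptic persona might look like this (illustrative wording; bind it to whichever agent you prefer):

```json
"skeptic": {
  "agent": "codex",
  "timeout": 120,
  "system_prompt": "You are The Skeptic — hunt for unstated assumptions, unhappy paths, second-order failures, and security gaps."
}
```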
| Command | What it does |
|---|---|
| `/debate:setup` | Check prerequisites, create `~/.claude/debate-scripts` symlink, detect v1.x configs and migrate, print permission allowlist |
| `/debate:acpx-setup` | Interactive reviewer configuration: pick agents, set up OpenRouter models, probe connectivity |
| `/debate:all [reviewers] [skip-debate]` | Run all (or specific) reviewers in parallel, synthesize, debate, iterate up to 3 rounds |
| `/debate:claude-review` | Claude review — single Skeptic by default, Opus model. Up to 5 rounds. |
| `/debate:claude-double-review` | Two Claude reviewers (Skeptic + Architect) in parallel. |
| `/debate:claude-custom-review` | Interactive picker — choose personalities and model (opus/sonnet). |
```shell
/debate:all                 # all configured reviewers
/debate:all codex,mercury   # specific subset only
/debate:all skip-debate     # skip debate phase, straight to final report
```

Add to `~/.claude/settings.json` to permanently approve all debate tool calls:
```json
{
  "permissions": {
    "allow": [
      "Bash(bash ~/.claude/debate-scripts/debate-setup.sh:*)",
      "Bash(bash ~/.claude/debate-scripts/run-parallel-acpx.sh:*)",
      "Bash(bash ~/.claude/debate-scripts/invoke-acpx.sh:*)",
      "Bash(rm -rf .tmp/ai-review-:*)",
      "Read(.tmp/ai-review*)",
      "Edit(.tmp/ai-review*)",
      "Write(.tmp/ai-review*)"
    ]
  }
}
```

Run `/debate:setup` to print this snippet with verified paths.
**Reviewer fails immediately with "No acpx session found"**

For custom agents (OpenRouter via opencode), create a session first: `acpx <agent> sessions new`. `/debate:acpx-setup` does this automatically during the probe step.
**Gemini fails with auth error**

`GEMINI_API_KEY` is not set. Get a free key at aistudio.google.com/apikey and add it to `~/.claude/settings.json` under `"env"`. Restart Claude Code.
**OpenRouter model returns wrong answers / ignores persona**

The `OPENCODE_CONFIG_CONTENT` env var may not be taking effect. Verify your `start.sh` exports it correctly and that the model ID matches what's on openrouter.ai/models exactly.
**Reviews time out**

Increase the `timeout` value for that reviewer in `~/.claude/debate-acpx.json`. Large models (DeepSeek R1, Gemini 2.5 Pro) often need 240-300s. The parallel runner automatically sets `MAX_WAIT = max(timeout) + 60s`.
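The `MAX_WAIT` rule is simple arithmetic over the configured timeouts (illustrative only, not the runner's actual code):

```shell
# Overall wait ceiling = largest per-reviewer timeout + 60 seconds of grace.
timeouts="120 240 120"      # per-reviewer timeouts from debate-acpx.json
max=0
for t in $timeouts; do
  if [ "$t" -gt "$max" ]; then max=$t; fi
done
max_wait=$((max + 60))
echo "MAX_WAIT=${max_wait}s"   # → MAX_WAIT=300s
```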
**`timeout: command not found` warning**

Install GNU coreutils: `brew install coreutils` (macOS). Reviews still run without it — the per-reviewer hard kill just won't be enforced.
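For reference, GNU `timeout` reports a hard kill with exit status 124, which is how a timed-out command can be told apart from one that simply failed:

```shell
# GNU timeout kills the command after the limit and exits with status 124.
status=0
timeout 1 sleep 5 || status=$?
echo "timeout exit status: $status"   # → timeout exit status: 124
```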
If you're upgrading from v1.x (CLI mode, LiteLLM, or OpenRouter), see MIGRATING.md for:
- Which commands were removed and what replaces them
- How to convert `debate-litellm.json` and `debate-openrouter.json` to the new format
- Which `settings.json` permission patterns to remove and add
The /debate:setup command also detects v1.x configs automatically and offers to migrate them.
- Plan content is passed via file path — never inlined in shell strings
- AI output (reviews, summaries) is written to temp files — never interpolated into shell commands
- acpx is invoked with `--approve-reads` — agents can read your codebase for context but cannot write files
- Work directories in `.tmp/ai-review-*` are deleted at the end of each review session
- OpenRouter API keys live in `~/.acpx/agents/<name>/.opencode.json` with `chmod 600`
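The file-path rule in practice (a minimal sketch, not the plugin's actual script):

```shell
# The plan travels as a file path, so its quotes, pipes, and $() substitutions
# never reach the shell parser. The plan text below is a made-up example.
plan_file=$(mktemp)
printf '%s\n' 'Step 4: retry $(curl) until "done" | tee log' > "$plan_file"
plan_text=$(cat "$plan_file")    # content survives verbatim, nothing expanded
echo "$plan_text"
rm -f "$plan_file"
```

The reviewer invocation receives only `"$plan_file"`, never the plan text itself.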
See CHANGELOG.md for release history.
MIT