Let your AI assistant consult other AI models for code review, debugging, planning, and consensus — without leaving its session.
Mesh is an MCP server for Claude Code, Codex CLI, and Gemini CLI. Plug it in and your assistant gains tools to call Gemini, GPT-5, Claude Opus, and 30+ other models for second opinions, structured workflows, and multi-model debate.
A typical session looks like:
```
You:    "Codereview with gemini pro on the auth/ module, then continue
         with o3 for a second pass, then planner to outline fixes."

Claude: → calls mesh codereview (gemini pro), gathers findings
        → continues with o3 for cross-check
        → calls mesh planner to produce a fix strategy
        ← surfaces a unified report with both perspectives
```
Mesh routes every model call through the local `gemini` or `codex` CLI when it can, and falls back to OpenRouter for anything else.
Forked from BeehiveInnovations/pal-mcp-server. Mesh strips the original's six direct API providers (Gemini/OpenAI/Azure/X.AI/DIAL/Custom) and routes everything through local CLIs + OpenRouter instead. See NOTICE for full attribution.
Why use Mesh:
- 🧠 Multi-model workflows — codereview, debug, planner, consensus, secaudit, and 13 more tools
- 🔒 Local-first — no external API call when your `gemini` or `codex` CLI can serve the request
- 🔄 Resilient — automatic fallback Gemini CLI → Codex CLI → OpenRouter
- 📦 One install — `./setup.sh` configures everything; no per-provider API key juggling
- 🪶 Lean — three backends total, no accumulated provider sprawl
Prerequisites: Python 3.11+, Git. At least one of:
- Gemini CLI — `npm i -g @google/gemini-cli && gemini login`
- Codex CLI — `npm i -g @openai/codex && codex login`
- OpenRouter API key (set as `OPENROUTER_API_KEY` in `.env`)
Install:
```bash
git clone https://github.com/dgdev25/mesh.git
cd mesh
./setup.sh   # idempotent: venv, deps, .env, Claude Code registration
```

Use it:
"Use mesh chat with gemini-2.5-pro to review this auth flow"
"Get consensus from gpt-5 and opus on whether to use Redis or Postgres"
"Use mesh debug with o3 to find the race condition"
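`./setup.sh` registers the server with Claude Code for you. If you ever need to wire it up by hand, an `mcpServers` entry in your MCP client config might look like the sketch below. The interpreter path and `server` module name are assumptions here; check your checkout for the actual entry point.

```json
{
  "mcpServers": {
    "mesh": {
      "command": "/absolute/path/to/mesh/.venv/bin/python",
      "args": ["-m", "server"],
      "env": { "OPENROUTER_API_KEY": "sk-or-..." }
    }
  }
}
```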
```
MCP Client (Claude Code, Codex CLI, Gemini CLI)
        ↓
   Mesh MCP Server
        ↓
   Gemini CLI        (gemini-* models)
   Codex CLI         (gpt-*, o3, o4 models)
   OpenRouter HTTPS  (everything else: opus, sonnet, deepseek, grok, …)
        ↓
   Automatic fallback if a backend can't handle the request
```
All three backends produce identical ModelResponse objects, so tool code never has to care which one served the request.
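The dispatch logic can be pictured roughly like this. This is a minimal sketch of the routing rules and fallback chain described above, not Mesh's actual source; the class, function names, and the `_invoke` stub are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class ModelResponse:
    """Normalized result: identical shape no matter which backend served it (sketch)."""
    model: str
    backend: str  # "gemini-cli" | "codex-cli" | "openrouter"
    content: str


def pick_backends(model: str) -> list[str]:
    """Preferred backend first, then the fallback chain from the diagram above."""
    if model.startswith("gemini-"):
        return ["gemini-cli", "codex-cli", "openrouter"]
    if model.startswith(("gpt-", "o3", "o4")):
        return ["codex-cli", "openrouter"]
    return ["openrouter"]  # opus, sonnet, deepseek, grok, ...


def _invoke(backend: str, model: str, prompt: str) -> str:
    """Stand-in for the real subprocess (CLI) or HTTPS (OpenRouter) call."""
    if backend == "openrouter":
        return f"[{model} via openrouter] ok"
    raise RuntimeError(f"{backend} not available")  # simulate a missing CLI


def call_model(model: str, prompt: str) -> ModelResponse:
    """Try each backend in order; fall through on failure, raise if all fail."""
    last_err: RuntimeError | None = None
    for backend in pick_backends(model):
        try:
            return ModelResponse(model, backend, _invoke(backend, model, prompt))
        except RuntimeError as err:
            last_err = err
    raise RuntimeError(f"all backends failed for {model}") from last_err
```

Because every path returns the same `ModelResponse` shape, tool code can stay backend-agnostic.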
```bash
# Optional CLI path overrides (defaults: search PATH for `gemini` and `codex`)
GEMINI_CLI_PATH=/usr/local/bin/gemini
CODEX_CLI_PATH=/usr/local/bin/codex
CLI_TIMEOUT_SECONDS=120

# OpenRouter fallback (required if neither CLI is installed)
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_ALLOWED_MODELS=   # empty = no restrictions

# Defaults
DEFAULT_MODEL=auto           # 'auto' picks per task
DEFAULT_THINKING_MODE_THINKDEEP=high
DISABLED_TOOLS=analyze,refactor,testgen,secaudit,docgen,tracer
```

Full reference: docs/configuration.md.
- 📖 Getting Started — installation, MCP client setup, verification
- ⚙️ Configuration — every environment variable, with defaults
- 🛠️ Troubleshooting — common issues and fixes
- 🧠 Advanced Usage — power-user workflows
- 📊 Model Ranking — how auto-mode picks models
- 🪟 WSL Setup — Windows users
- 🤝 Contributing — code standards, PR process
Each tool ships with its own multi-step workflow and parameters that consume context-window space even when idle. Non-essential tools are disabled by default — toggle them via `DISABLED_TOOLS` in `.env`.
Collaboration & Planning (enabled by default)
- `clink` — bridge to external CLIs (Gemini planner, Codex codereviewer, etc.)
- `chat` — brainstorm, get second opinions, validate approaches
- `thinkdeep` — extended reasoning, edge-case analysis
- `planner` — break down complex projects into actionable plans
- `consensus` — multi-model debate with stance steering
Code Analysis & Quality (enabled by default)
- `debug` — systematic root-cause analysis
- `precommit` — validate changes before committing
- `codereview` — professional reviews with severity levels
Development Tools (disabled by default)
- `analyze` — architecture and dependency analysis
- `refactor` — refactoring with decomposition focus
- `testgen` — test generation with edge cases
- `secaudit` — OWASP Top 10 (2025) security audits
- `docgen` — documentation generation with complexity analysis
- `tracer` — call-flow mapping
Utilities
- `apilookup` — current API/SDK lookups in a sub-process
- `challenge` — prevent reflexive AI agreement
- `listmodels` — show configured backends and available models
- `version` — server version and capabilities
Multi-model code review (codereview takes one model per pass — chain via continuation):
"Run codereview with gemini pro on the auth/ directory, then continue with o3
for a second opinion, then use planner to outline a fix strategy"
Collaborative debugging (thinking_mode only applies to thinking-capable models — name one):
"Use debug with gemini pro and thinking_mode=max on this race condition,
then validate the fix with precommit"
Architecture planning (consensus takes multiple models with stances):
"Use planner to break down our microservices migration, then run consensus
with sonnet supporting the proposal and o3 opposing it"
See docs/advanced-usage.md for more.
```bash
pytest tests/                                               # unit + integration (mocked)
MESH_RUN_CLI_TESTS=1 pytest tests/test_cli_integration.py   # real subprocess against gemini/codex
python -m simulator_tests --quick                           # end-to-end MCP scenarios
```

Current status: 595 unit tests passing.
Apache License 2.0 — see LICENSE for the full text and NOTICE for upstream attribution.
- BeehiveInnovations/pal-mcp-server — the upstream project this is forked from
- Model Context Protocol
- Claude Code
- Codex CLI
- Gemini CLI
- OpenRouter