
Mesh MCP

Let your AI assistant consult other AI models for code review, debugging, planning, and consensus — without leaving its session.

Mesh is an MCP server for Claude Code, Codex CLI, and Gemini CLI. Plug it in and your assistant gains tools to call Gemini, GPT-5, Claude Opus, and 30+ other models for second opinions, structured workflows, and multi-model debate.

A typical session looks like:

You:    "Codereview with gemini pro on the auth/ module, then continue
         with o3 for a second pass, then planner to outline fixes."
Claude: → calls mesh codereview (gemini pro), gathers findings
        → continues with o3 for cross-check
        → calls mesh planner to produce a fix strategy
        ← surfaces a unified report with both perspectives

Mesh routes every model call through the local gemini or codex CLI when it can, and falls back to OpenRouter for anything else.
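The routing order above can be sketched as a simple priority loop. This is an illustrative sketch only — the `route` function, `BackendUnavailable` exception, and callable-backend shape are assumptions for illustration, not Mesh's actual internals:

```python
class BackendUnavailable(Exception):
    """Raised when a backend cannot serve the requested model."""

def route(prompt: str, model: str, backends: list) -> str:
    # Try each backend in priority order:
    # Gemini CLI -> Codex CLI -> OpenRouter.
    errors = []
    for backend in backends:
        try:
            return backend(prompt, model)
        except BackendUnavailable as exc:
            errors.append(exc)
    raise RuntimeError(f"no backend could serve {model!r}: {errors}")
```

The point of the loop is that a missing CLI is not an error from the tool's perspective — the request simply falls through to the next backend, and only an exhausted chain surfaces a failure.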

Forked from BeehiveInnovations/pal-mcp-server. Mesh strips the original's six direct API providers (Gemini/OpenAI/Azure/X.AI/DIAL/Custom) and routes everything through local CLIs + OpenRouter instead. See NOTICE for full attribution.

Why use Mesh:

  • 🧠 Multi-model workflows — codereview, debug, planner, consensus, secaudit, and 13 more tools
  • 🔒 Local-first — no external API call when your gemini or codex CLI can serve the request
  • 🔄 Resilient — automatic fallback Gemini CLI → Codex CLI → OpenRouter
  • 📦 One install — ./setup.sh configures everything; no per-provider API key juggling
  • 🪶 Lean — three backends total, no accumulated provider sprawl

Quick Start

Prerequisites: Python 3.11+, Git. At least one of:

  • Gemini CLI — npm i -g @google/gemini-cli && gemini login
  • Codex CLI — npm i -g @openai/codex && codex login
  • OpenRouter API key (set as OPENROUTER_API_KEY in .env)

Install:

git clone https://github.com/dgdev25/mesh.git
cd mesh
./setup.sh             # idempotent: venv, deps, .env, Claude Code registration

Use it:

"Use mesh chat with gemini-2.5-pro to review this auth flow"
"Get consensus from gpt-5 and opus on whether to use Redis or Postgres"
"Use mesh debug with o3 to find the race condition"

How It Works

MCP Client (Claude Code, Codex CLI, Gemini CLI)
    ↓
Mesh MCP Server
    ↓
Gemini CLI       (gemini-* models)
Codex CLI        (gpt-*, o3, o4 models)
OpenRouter HTTPS (everything else: opus, sonnet, deepseek, grok, …)
    ↓
Automatic fallback if a backend can't handle the request

All three backends produce identical ModelResponse objects, so tool code never has to care which one served the request.
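A minimal sketch of that normalization, assuming hypothetical field names — the real ModelResponse is defined in Mesh's source and may differ:

```python
from dataclasses import dataclass

@dataclass
class ModelResponse:
    content: str   # model output text
    model: str     # model that actually served the call
    backend: str   # e.g. "gemini-cli", "codex-cli", "openrouter"

def from_backend_output(raw: str, model: str, backend: str) -> ModelResponse:
    # CLI backends return plain text on stdout, OpenRouter returns JSON;
    # after normalization, tool code sees the same object either way.
    return ModelResponse(content=raw.strip(), model=model, backend=backend)
```

Because every backend funnels through one constructor, a tool like codereview never branches on where the response came from.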


Configuration

# Optional CLI path overrides (defaults: search PATH for `gemini` and `codex`)
GEMINI_CLI_PATH=/usr/local/bin/gemini
CODEX_CLI_PATH=/usr/local/bin/codex
CLI_TIMEOUT_SECONDS=120

# OpenRouter fallback (required if neither CLI is installed)
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_ALLOWED_MODELS=                    # empty = no restrictions

# Defaults
DEFAULT_MODEL=auto                            # 'auto' picks per task
DEFAULT_THINKING_MODE_THINKDEEP=high
DISABLED_TOOLS=analyze,refactor,testgen,secaudit,docgen,tracer

Full reference: docs/configuration.md.


Core Tools

Each tool ships its own multi-step workflow and parameter schema, which consume context-window space even when the tool sits idle. Non-essential tools are therefore disabled by default — toggle them via DISABLED_TOOLS in .env.
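Parsing a comma-separated variable like DISABLED_TOOLS can be sketched as follows — the variable name matches the .env example above, but the parsing details here are assumptions, not Mesh's actual implementation:

```python
import os

def parse_disabled_tools(env: dict = os.environ) -> set[str]:
    # Comma-separated, whitespace-tolerant; empty string disables nothing.
    raw = env.get("DISABLED_TOOLS", "")
    return {name.strip() for name in raw.split(",") if name.strip()}
```

A server would then skip registering any tool whose name lands in this set, keeping its schema out of the client's context window.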

Collaboration & Planning (enabled by default)

  • clink — bridge to external CLIs (Gemini planner, Codex codereviewer, etc.)
  • chat — brainstorm, get second opinions, validate approaches
  • thinkdeep — extended reasoning, edge case analysis
  • planner — break down complex projects into actionable plans
  • consensus — multi-model debate with stance steering

Code Analysis & Quality (enabled by default)

  • debug — systematic root-cause analysis
  • precommit — validate changes before committing
  • codereview — professional reviews with severity levels

Development Tools (disabled by default)

  • analyze — architecture and dependency analysis
  • refactor — refactoring with decomposition focus
  • testgen — test generation with edge cases
  • secaudit — OWASP Top 10 (2025) security audits
  • docgen — documentation generation with complexity analysis
  • tracer — call-flow mapping

Utilities

  • apilookup — current API/SDK lookups in a sub-process
  • challenge — prevent reflexive AI agreement
  • listmodels — show configured backends and available models
  • version — server version and capabilities

Example Workflows

Multi-model code review (codereview takes one model per pass — chain via continuation):

"Run codereview with gemini pro on the auth/ directory, then continue with o3
 for a second opinion, then use planner to outline a fix strategy"

Collaborative debugging (thinking_mode only applies to thinking-capable models — name one):

"Use debug with gemini pro and thinking_mode=max on this race condition,
 then validate the fix with precommit"

Architecture planning (consensus takes multiple models with stances):

"Use planner to break down our microservices migration, then run consensus
 with sonnet supporting the proposal and o3 opposing it"

See docs/advanced-usage.md for more.


Testing

pytest tests/                                              # unit + integration (mocked)
MESH_RUN_CLI_TESTS=1 pytest tests/test_cli_integration.py  # real subprocess against gemini/codex
python -m simulator_tests --quick                          # end-to-end MCP scenarios

Current status: 595 unit tests passing.


License

Apache License 2.0 — see LICENSE for the full text and NOTICE for upstream attribution.
