Any LLM. Any IDE. Zero infrastructure.
Claude Code in one terminal. Codex in another. Cursor in a third. They're all working on your project — but they can't coordinate. Until now.
$ am route "Build a todo app. @codex write the API. @cursor build the UI. @claude review both."
✓ task-f465 → @codex: "Write the API"
✓ task-2273 → @cursor: "Build the UI"
✓ task-19e6 → @claude: "Review both"
3 tasks created. Instructions delivered. Agents coordinating.
You're running 3 AI agents right now. Claude Code is refactoring the backend. Codex is writing tests. Cursor is fixing the frontend.
They all have access to the same codebase, but:
- They can't talk to each other. Claude doesn't know Codex just changed the API schema.
- They overwrite each other's work. Cursor edits a file that Claude is also editing.
- They make contradictory decisions. One picks PostgreSQL, another writes SQLite queries.
- They duplicate effort. Two agents independently write the same utility function.
Every multi-agent tool today either locks you into one LLM (Claude Teams, Codex fan-out) or requires a central server (A2A, ACP, BeadHub). Nothing coordinates across Claude + Codex + Cursor + Windsurf + local LLMs with zero setup.
AgentMesh is a filesystem-based coordination protocol. Drop .agent-mesh/ in any project and every AI agent — regardless of provider or IDE — can read tasks, send messages, claim work, and coordinate.
No API server. No SDK per IDE. No vendor lock-in. Just JSON files.
Every AI coding agent can read and write files. That's the universal interface. AgentMesh turns the filesystem into a coordination layer.
npm install -g agent-mesh
cd my-project
am init
# Register agents (run in each IDE/terminal)
am register claude-code architect
am register codex builder
am register cursor specialist
# Route work with @mentions
am route "Build auth. @codex write the API. @cursor build the login UI. @claude review security."
Each agent gets a task file with full context about what others are doing. They coordinate through the shared filesystem.
When you run am register, AgentMesh automatically injects instructions into the agent's native config file. No need to manually tell each agent about the mesh:
| Agent | Auto-created file | How agent discovers it |
|---|---|---|
| Claude Code | `CLAUDE.md` | Read on every conversation start |
| Codex CLI | `.codex/instructions.md` | Read on every task |
| Cursor | `.cursor/rules/agent-mesh.mdc` | Loaded as always-on rule |
| Windsurf | `.windsurfrules` | Read on startup |
| Copilot | `.github/copilot-instructions.md` | Read on startup |
| Cline | `.clinerules` | Read on startup |
| Aider | `.aider.conf.yml` | `read:` directive for PROTOCOL.md |
| Other | `.agent-mesh/AGENT_INSTRUCTIONS.md` | Point your agent here |
For Claude Code and Cursor, am register also auto-installs the MCP server — giving them native mesh tools in their chat interface.
┌──────────────────────────────────────────────────────────────┐
│ Your Project │
│ │
│ ┌──────────┐ ┌────────────────────┐ ┌──────────┐ │
│ │ Claude │ │ .agent-mesh/ │ │ Codex │ │
│ │ Code │◄───►│ │◄───►│ CLI │ │
│ │ │ │ agents/ │ │ │ │
│ └──────────┘ │ tasks/ │ └──────────┘ │
│ │ messages/ │ │
│ ┌──────────┐ │ context/ │ ┌──────────┐ │
│ │ Cursor │◄───►│ artifacts/ │◄───►│ Local │ │
│ │ IDE │ │ PROTOCOL.md │ │ LLM │ │
│ └──────────┘ └────────────────────┘ └──────────┘ │
│ ▲ │
│ ┌──────────────┐ │
│ │ Daemon │ (optional) │
│ │ WebSocket │ real-time events │
│ └──────────────┘ │
└──────────────────────────────────────────────────────────────┘
| Directory | Purpose |
|---|---|
| `agents/` | Agent registry — who's in the mesh, their role, capabilities |
| `tasks/` | Shared task board — create, claim, complete tasks |
| `messages/` | Inter-agent messages — questions, answers, handoffs, alerts |
| `context/` | Shared knowledge — decisions log, shared variables, file locks |
| `artifacts/` | Shared outputs — routing plans, code snippets, research |
| `PROTOCOL.md` | Self-describing spec — any new agent reads this to participate |
Every AI coding agent can read and write files. That's the one thing Claude Code, Codex, Cursor, Copilot, Windsurf, Devin, Aider, Cline, and every local LLM have in common.
- Zero integration cost — no SDK, no API, no plugin per IDE
- Works offline — no server, no network dependency
- Git-friendly — commit and share coordination state
- Inspectable — `cat` any JSON file to see what's happening
- Crash-resilient — agent crashes, state persists on disk
Route work to agents using natural @mentions:
am route "Build e-commerce. @codex backend API. @cursor React storefront. @claude database design."
am route "@cursor fix the broken CSS on mobile --priority critical"
am route plan "Refactor payments. @codex Stripe. @cursor checkout UI." # dry run
Each agent receives a task + message with full context about what others are doing.
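A minimal sketch of how @mention routing might be parsed — illustrative only; the real `am route` parser likely handles more syntax (flags like `--priority`, nested quotes, etc.):

```javascript
// Split a route prompt into a shared preamble plus per-agent
// assignments. Simplified sketch: splits on any "@word " boundary.
function parseRoute(prompt) {
  const parts = prompt.split(/(?=@[\w-]+\s)/);
  const preamble = parts[0].startsWith("@") ? "" : parts.shift().trim();
  const assignments = parts.map((p) => {
    const match = p.match(/^@([\w-]+)\s+([\s\S]*)$/);
    return {
      agent: match[1],
      instruction: match[2].trim().replace(/\.$/, ""),
    };
  });
  return { preamble, assignments };
}
```

For `"Build auth. @codex write the API. @cursor build the login UI."` this yields the preamble `"Build auth."` plus one assignment per mentioned agent — enough to create one task file each.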
am task create "Build user authentication"
am task list
am task list --status open
am task claim task-abc123 agent-xyz
am task update task-abc123 "Login endpoint done"
am task complete task-abc123 "All endpoints tested"
am msg send codex "API schema is ready"
am msg send cursor "Don't touch header.tsx, I'm refactoring it" --type alert
am msg broadcast "Switching to TypeScript strict mode"
am msg read --unread
am msg decide "Use PostgreSQL" --context "Need JSONB support"
Agents have limited context windows. AgentMesh compresses coordination state to fit any budget:
am context summary # full context (~400 tokens)
am context brief --tokens 500 # compressed to fit budget
am context tokens # cost breakdown per model
Measured token savings (5 agents, 8 tasks, 6 messages, 3 decisions):
| Mode | Tokens | Savings |
|---|---|---|
| Raw JSON (no AgentMesh) | ~1,700 | — |
| Full summary | ~400 | 76% reduction |
| Budgeted brief | ~220 | 87% reduction |
Context window usage with 8 agents and 15 active tasks:
| Model | Usage |
|---|---|
| Claude 4 (200k) | 0.23% |
| GPT-4o (128k) | 0.36% |
| Codex (192k) | 0.24% |
| Gemini 2.5 (1M) | 0.05% |
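The percentages above are straightforward window arithmetic. Here is a sketch of the estimate, assuming a crude ~4 characters-per-token heuristic rather than AgentMesh's actual tokenizer (model names and windows below mirror the table):

```javascript
// Token-budget sketch: estimate how much of each model's context
// window a mesh summary consumes. The 4 chars/token divisor is an
// assumption, not a real tokenizer.
const MODEL_WINDOWS = {
  "claude-4": 200_000,
  "gpt-4o": 128_000,
  "gemini-2.5": 1_000_000,
};

function estimateTokens(text) {
  return Math.ceil(text.length / 4); // crude chars-per-token heuristic
}

function usagePerModel(summaryText) {
  const tokens = estimateTokens(summaryText);
  return Object.fromEntries(
    Object.entries(MODEL_WINDOWS).map(([model, window]) => [
      model,
      `${((tokens / window) * 100).toFixed(2)}%`,
    ])
  );
}
```

A ~460-token summary against a 200k window is 0.23% — coordination state stays effectively free for the agent's real work.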
am watch start # monitor file changes
am watch status # see active locks
am watch report # hotspot detection
Native tool support in Claude Code and Cursor via Model Context Protocol:
am mcp install # auto-configures
8 tools available: mesh_status, mesh_route, mesh_send, mesh_read, mesh_tasks, mesh_init, mesh_register, mesh_decide
am daemon # WebSocket on ws://localhost:4200
File watcher + WebSocket server for instant notifications when tasks change.
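A client-side sketch of consuming daemon events — the event payload shape here is an assumption, not the daemon's documented output:

```javascript
// Turn a raw daemon event into a human-readable line. The
// "task.created"/"task.completed" event names and fields are
// illustrative assumptions about the payload.
function handleMeshEvent(raw) {
  const event = JSON.parse(raw);
  switch (event.type) {
    case "task.created":
      return `new task ${event.taskId}: ${event.title}`;
    case "task.completed":
      return `task ${event.taskId} done`;
    default:
      return `event: ${event.type}`;
  }
}

// Usage (requires a running `am daemon`; Node 22+ ships a global
// WebSocket client, so no extra dependency is needed):
// const ws = new WebSocket("ws://localhost:4200");
// ws.onmessage = (msg) => console.log(handleMeshEvent(msg.data));
```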
AgentMesh auto-detects capabilities and environment for every major AI coding agent:
| Agent | Environment | Auto-detected Capabilities |
|---|---|---|
| Claude Code | Terminal/CLI | full-stack, architecture, code-review, debugging |
| OpenAI Codex | Terminal/CLI | code-generation, refactoring, tests, fan-out |
| Cursor | Cursor IDE | frontend, ui, rapid-iteration, multi-agent |
| Windsurf | Windsurf IDE | full-stack, multi-file, autonomous |
| GitHub Copilot | VS Code/JetBrains | autocomplete, snippets, pr-review |
| Devin | CLI/Cloud | autonomous, full-stack, deployment |
| Aider | Terminal/CLI | git, code-editing, pair-programming |
| Cline | VS Code | full-stack, autonomous, browser, terminal |
| Continue | VS Code | autocomplete, chat, multi-model |
| Bolt | Browser | frontend, react, rapid-prototype |
| Lovable | Browser | frontend, design-to-code |
| Replit | Browser/IDE | full-stack, deployment, collaboration |
| v0 | Browser | frontend, ui, react, design-to-code |
| Gemini | CLI/Cloud | full-stack, code-generation, multi-modal |
| Qwen | CLI | code-generation, multilingual |
| Ollama | Local | local, privacy, code-generation |
| LM Studio | Local | local, privacy, fine-tuning |
| Tabby | Self-hosted | autocomplete, self-hosted, code-completion |
| Sourcegraph Cody | VS Code/Web | code-search, refactoring, codebase-wide |
Register with auto-detection:
am register claude-code architect # auto-detects capabilities
am register my-custom-bot builder # defaults to "general"
No CLI needed. Any agent that can read/write files just follows .agent-mesh/PROTOCOL.md:
1. Register — Write JSON to .agent-mesh/agents/:
{ "id": "agent-xxx", "name": "my-agent", "role": "builder", "status": "active" }
2. Check tasks — Read .agent-mesh/tasks/, look for "status": "open".
3. Communicate — Write JSON to .agent-mesh/messages/.
The protocol is self-describing. Point any agent at PROTOCOL.md and it knows what to do.
import { AgentMesh } from 'agent-mesh';
const mesh = new AgentMesh();
const agent = await mesh.register('my-bot', 'builder');
const task = mesh.createTask('Build login page', { priority: 'high' });
mesh.sendMessage('codex-id', 'Schema updated, regenerate types');
mesh.broadcast('Deploying to staging');
mesh.logDecision('Use PostgreSQL', 'Need JSONB support');
Simulated 5 agents working on 12 tasks across 18 project files, averaged over 5 runs:
| Metric | Without AgentMesh | With AgentMesh |
|---|---|---|
| File conflicts | 9 | 0 |
| Silent overwrites | 9 | 0 |
| Duplicate work | 8 | 0 |
| Dependency violations | 4 | 0 |
| Decision conflicts | 2 | 0 |
| Work efficiency | 75% | 100% |
Tested 6 agents from different IDEs writing to the same .agent-mesh/:
| Metric | Result |
|---|---|
| Agents registered (6 IDEs) | 6/6 |
| Tasks visible across all | 6/6 |
| Messages delivered cross-IDE | 6/6 |
| Protocol overhead | 4.2 KB |
105 automated tests across 5 suites:
| Suite | Tests | What It Proves |
|---|---|---|
| Full flow | 26 | CLI commands, registration, routing, messaging |
| Security | 21 | Path traversal, injection, atomic writes, validation |
| Real-world simulation | 39 | 8 agents, 6 IDEs, e-commerce project coordination |
| Token savings + proof | 8 | Token reduction, conflict elimination, dependency order |
| Auto-config injection | 11 | Config files created for 8 agent types, append/update/no-overwrite |
| Feature | AgentMesh | A2A (Google) | MCP | ACP (IBM) | Claude Teams | Codex Fan-out |
|---|---|---|---|---|---|---|
| Cross-LLM | Yes | Yes | No | Yes | No | No |
| Cross-IDE | Yes | N/A | Partial | N/A | No | No |
| Zero infrastructure | Yes | No (HTTP) | No (server) | No (server) | No | No |
| @mention routing | Yes | No | No | No | No | No |
| Token awareness | Yes | No | No | No | No | No |
| File conflict detection | Yes | No | No | No | No | No |
| Works offline | Yes | No | No | No | No | No |
| Self-describing protocol | Yes | Yes | Partial | Yes | No | No |
Key differentiator: AgentMesh is the only tool that bridges agents across different IDEs and LLMs. Claude Teams only coordinates Claude agents. Codex fan-out only orchestrates Codex workers. Cursor multi-agent only works within Cursor. AgentMesh connects all of them through the one thing they all share: the filesystem.
// .agent-mesh/mesh.json
{
"version": "0.1.0",
"project": "my-project",
"settings": {
"daemon_port": 4200,
"heartbeat_interval": 30000,
"task_timeout": 300000,
"max_messages_per_agent": 100
}
}
- Input validation — all IDs and names validated against strict regex
- Path traversal protection — `safePath()` prevents directory escape
- Atomic writes — temp file + rename prevents corruption
- Localhost-only daemon — WebSocket binds to `127.0.0.1`
- No command injection — `execFileSync` instead of `execSync`
git clone https://github.com/CodeAheadDev/agent-mesh.git
cd agent-mesh
npm install
node --test tests/
- Agent hooks — custom scripts on mesh events
- Remote mesh — sync across machines via git
- Web dashboard — browser monitoring UI
- A2A bridge — Google Agent2Agent protocol interop
- Voting system — consensus-based decisions
- Auto-summarization — compress old context
- Files are the universal interface. Every agent can read/write files.
- Protocol over platform. The spec matters more than the tool.
- Zero infrastructure. No server = more adoption.
- Inspectable by default. `cat` any JSON file.
- Agents are peers. No hierarchy.
Built by CodeAhead