Agent-agnostic communication layer for AI agents. If it has a terminal prompt, it's on the mesh. Self-hosted, lightweight HTTP message broker that lets any AI agent send tasks to any other agent — across machines, across tools, zero code changes.
Example: You have Claude Code on your laptop and Kiro on your VPS. You want Claude to ask Kiro to deploy code. meshterm makes that possible — no code changes to either agent.
⚠️ Requires the Bun runtime. Install it first:

```bash
curl -fsSL https://bun.sh/install | bash
```
```
┌──────────────┐                         ┌──────────────┐
│ Claude Code  │                         │              │
│  ↕ stdio     │                         │ Mesh Server  │
│ MCP server   │ ─── HTTPS + API key ──→ │    (HTTP)    │
│  (local)     │                         │              │
└──────────────┘                         │  Messages    │
                                         │  Rooms       │
┌──────────────┐                         │  Roles       │
│ Kiro CLI     │                         │  Agents      │
│  ↕ stdio     │                         │              │
│ MCP server   │ ─── HTTPS + API key ──→ └──────┬───────┘
│  (local)     │                                │
└──────────────┘                         ┌──────┴───────┐
                                         │  Any agent   │
┌──────────────┐                         │ (direct HTTP)│
│ Any TUI agent│                         └──────────────┘
│  ↕ tmux      │
│ daemon       │ ─── HTTPS + API key ──→
│ (background) │
└──────────────┘
```
Every agent can send messages — just call the API (via MCP tool, CLI, or HTTP).
Receiving depends on the agent type:
| Agent Type | Receive Method | How | Real-time? |
|---|---|---|---|
| MCP agent (Kiro, Claude, Cursor) | Agent polls | Agent calls the `mesh_poll` MCP tool | ❌ No |
| CLI in tmux | Daemon push | `meshterm daemon start` injects via `tmux send-keys` | ✅ Yes |
| OpenClaw | Webhook push | Server POSTs to OpenClaw webhook → triggers heartbeat | ✅ Yes |
| Any HTTP client | Poll API | `GET /messages/:agent?unread=true` | ❌ No |
In short:
- MCP agents get tools to send freely, but only receive when they actively poll
- tmux agents get messages injected automatically by the daemon
- OpenClaw agents get messages pushed via webhook
- Any agent can always poll the REST API directly
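Any HTTP client can poll without an SDK. Here is a minimal sketch (TypeScript, since meshterm runs on Bun) of the request such a client would build; the helper name is illustrative, while the endpoint and `x-mesh-secret` header come from the API reference later in this README:

```typescript
// Build the poll request: GET /messages/:agent?unread=true with the
// x-mesh-secret auth header. buildPollRequest is our name, not meshterm's.
function buildPollRequest(server: string, agent: string, secret: string) {
  return {
    method: "GET" as const,
    url: `${server}/messages/${encodeURIComponent(agent)}?unread=true`,
    headers: { "x-mesh-secret": secret },
  };
}

const req = buildPollRequest("http://localhost:4200", "my-agent", "demo-secret");
console.log(req.url); // → http://localhost:4200/messages/my-agent?unread=true
// fetch(req.url, { headers: req.headers }) would perform the poll.
```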
Known limitation: MCP agents (Kiro, Claude, Cursor) cannot receive messages in real time. They must call `mesh_poll` to check for new messages. If no one polls, messages sit unread. The roadmap includes WebSocket push to fix this.
```bash
npm install -g meshterm
```

Run this on the machine that will be your central hub — a VPS for cross-network setups, or your laptop if all agents are local.
```bash
meshterm server start --port 4200 --secret your-secret
```

Or with environment variables:

```bash
MESH_PORT=4200 MESH_SECRET=your-secret meshterm server start
```

```bash
meshterm init --server http://localhost:4200 --key your-secret --agent my-agent
```

This creates `~/.meshterm/config.json`. Run this on every machine that connects to the mesh.
```bash
meshterm send my-agent "hello from the mesh"
meshterm poll
```

That's it. Your agents can now talk to each other.
```bash
# Terminal 1: Start the server
meshterm server start --secret demo-secret

# Terminal 2: Configure and test
meshterm init --server http://localhost:4200 --key demo-secret --agent my-agent
meshterm send my-agent "hello from the mesh"
meshterm poll
# → 📨 my-agent: hello from the mesh
```

That's it — message sent, stored, and retrieved. In a real setup, you'd have agents on different machines (or different tmux sessions) talking to each other.
Multiple agents on one machine:
```bash
meshterm agent start --name alice --cli "kiro-cli chat" --session alice
meshterm agent start --name bob --cli "kiro-cli chat" --session bob

# Now alice and bob are in separate tmux sessions with their own mesh-client daemons.
# Send from anywhere:
meshterm send alice "review my PR"
# → Message injected into alice's tmux session automatically
```

There are two types of agents. Pick the one that matches your setup:
Your agent runs inside an IDE. It sends via MCP tools and receives by polling.
```bash
meshterm setup kiro
# Also supports: claude, cursor, copilot, gemini
```

This creates:

- MCP config — adds meshterm tools to your IDE (`~/.kiro/settings/mcp.json`, etc.)
- Steering file — teaches the agent how to handle `[mesh:...]` messages
After setup, restart your IDE. The agent gets tools like `mesh_send`, `mesh_poll`, `mesh_read`.
Note: IDE agents can only receive messages when they actively poll (`mesh_poll`). They don't get messages pushed to them automatically.
Your agent runs in a terminal. It sends via MCP or CLI and receives messages pushed into its tmux session automatically.
```bash
# One command: creates tmux session + starts your CLI + starts message daemon
meshterm agent start --name my-agent --cli "kiro-cli chat" --session my-agent

# Attach to see it
meshterm agent attach --name my-agent
# Detach: Ctrl+B then D
```

The daemon polls the mesh every 5 seconds and injects new messages into the tmux pane via `tmux send-keys`.
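The injected line follows the `[mesh:<sender>] <body>` convention that agents are taught to recognize. A rough TypeScript sketch of the formatting step (illustrative; the daemon's real implementation may differ):

```typescript
// Format a mesh message the way the daemon injects it into the pane,
// e.g. via: tmux send-keys -t <session> "<line>" Enter
// (illustrative sketch, not meshterm's actual source)
function meshLine(from: string, body: string): string {
  return `[mesh:${from}] ${body}`;
}

console.log(meshLine("sender", "msg")); // → [mesh:sender] msg
```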
Managing terminal agents:
```bash
meshterm agent list                                  # see running agents
meshterm agent attach --name my-agent                # attach to tmux session
meshterm agent stop --name my-agent                  # stop agent + daemon
meshterm agent stop --name my-agent --kill-session   # also kill tmux session
```

Already have a tmux session running? Don't use `agent start` — it would type the CLI command into your existing session. Use the daemon directly:

```bash
meshterm daemon start --agent my-agent --session my-session
```
Your agent receives messages via HTTP webhook push — instant delivery, no polling.
Configure in `mesh-config.json` (placed next to the server):

```json
{
  "webhooks": {
    "my-agent": {
      "url": "https://your-webhook-url",
      "token": "your-token",
      "format": "raw"
    }
  }
}
```

Built-in formats: `raw`, `openclaw`, `slack`, `discord`, `custom` (with `{{from}}`, `{{to}}`, `{{body}}` templates).

Or via environment variable:

```bash
MESH_WEBHOOKS="agent|url|token" meshterm server start
```
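For the `custom` format, the `{{from}}`, `{{to}}`, and `{{body}}` placeholders are substituted with the message fields. A sketch of that substitution (the function is hypothetical; meshterm's actual templating may differ):

```typescript
// Expand a custom webhook template. The placeholder names match the ones
// listed above; renderTemplate itself is our illustration, not meshterm's API.
function renderTemplate(
  template: string,
  msg: { from: string; to: string; body: string },
): string {
  return template
    .replace(/\{\{from\}\}/g, msg.from)
    .replace(/\{\{to\}\}/g, msg.to)
    .replace(/\{\{body\}\}/g, msg.body);
}

console.log(
  renderTemplate("{{from}} → {{to}}: {{body}}", {
    from: "agent-1",
    to: "my-agent",
    body: "build finished",
  }),
); // → agent-1 → my-agent: build finished
```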
```bash
meshterm send agent-1 "refactor the auth module"
meshterm poll   # check for replies
```

```bash
meshterm room create planning --members agent-1,agent-2,agent-3 --mode free-form
meshterm room send planning "Let's discuss the architecture"
meshterm room history planning
```

Room modes: `free-form` (anyone speaks), `round-robin` (take turns), `reactive` (respond when relevant), `moderated` (moderator controls flow).
Route messages to the best available agent by role instead of by name:
```bash
meshterm role create coder \
  --agents agent-1,agent-2 \
  --priority agent-1,agent-2 \
  --fallback queue

meshterm send role:coder "fix the login bug"
meshterm send role:coder --broadcast "pull latest and rebuild"
```

Routing logic:

- Check which agents in the role are online (heartbeat < 30s)
- Pick the highest-priority online agent
- If none are online: `queue` (deliver when one comes online) or `reject` (return an error)
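The routing rules above can be sketched as follows (our own illustration of the described behavior, not meshterm's source):

```typescript
// Pick a recipient for a role-addressed message: the highest-priority
// agent with a heartbeat under 30 seconds wins; if nobody is online,
// apply the role's fallback policy.
type Fallback = "queue" | "reject";

function routeToRole(
  priority: string[],                  // e.g. ["agent-1", "agent-2"]
  heartbeats: Record<string, number>,  // agent name → last heartbeat (epoch ms)
  fallback: Fallback,
  now: number = Date.now(),
): { deliverTo?: string; action: "deliver" | Fallback } {
  for (const name of priority) {
    const beat = heartbeats[name];
    if (beat !== undefined && now - beat < 30_000) {
      return { deliverTo: name, action: "deliver" }; // online: deliver now
    }
  }
  return { action: fallback }; // nobody online: queue or reject
}

const t0 = 1_000_000;
const choice = routeToRole(
  ["agent-1", "agent-2"],
  { "agent-1": t0 - 45_000, "agent-2": t0 - 5_000 },
  "queue",
  t0,
);
// choice.deliverTo === "agent-2": agent-1's heartbeat is stale (45s ago),
// so the next agent in priority order that is online gets the task.
```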
When connected via MCP, agents get these tools automatically:
| Tool | Description |
|---|---|
| `mesh_send` | Send a message to an agent or `role:xxx` |
| `mesh_reply` | Reply to a message |
| `mesh_poll` | Check for unread messages |
| `mesh_agents` | List online agents |
| `mesh_status` | Mesh health overview |
| `mesh_roles` | List available roles |
| `mesh_room_create` | Create a discussion room |
| `mesh_room_send` | Send a message to a room |
| `mesh_room_history` | View room message history |
| `mesh_room_list` | List all rooms |
| `mesh_room_join` | Join a room |
| `mesh_room_leave` | Leave a room |
| Command | Description |
|---|---|
| `meshterm init` | Configure server URL, API key, agent name |
| `meshterm setup <agent>` | One-command setup for IDE agents (kiro/claude/cursor/copilot/gemini). Writes MCP config, steering file, starts daemon. |
| `meshterm agent start` | One-command setup for terminal agents. Creates tmux session, starts CLI, starts mesh-client. (`--name`, `--cli`, `--session`) |
| `meshterm agent stop` | Stop a terminal agent cleanly (`--name`, `--kill-session`) |
| `meshterm agent list` | Show running agents with status |

Which do I use?

- IDE agent (Kiro, Claude, Cursor)? → `meshterm init`, then `meshterm setup kiro`
- Terminal agent in tmux? → `meshterm init`, then `meshterm agent start --name my-agent --cli "kiro-cli chat" --session my-agent`
| Command | Description |
|---|---|
| `meshterm send <to> <message>` | Send message (direct or `role:xxx`; `--broadcast` for roles) |
| `meshterm poll` | Check for unread messages |
| `meshterm agents` | List registered agents |
| `meshterm status` | Show mesh health and overview |
| Command | Description |
|---|---|
| `meshterm room create <name>` | Create a room (`--members`, `--mode`) |
| `meshterm room list` | List rooms |
| `meshterm room send <name> <msg>` | Send to room |
| `meshterm room history <name>` | View room messages (`--limit`) |
| `meshterm room join/leave/close <name>` | Manage room membership |
| Command | Description |
|---|---|
| `meshterm roles` | List roles |
| `meshterm role create <name>` | Create a role (`--agents`, `--priority`, `--fallback`) |
| Command | Description |
|---|---|
| `meshterm server start` | Start the mesh server (`--port`, `--secret`, `--store`) |
These are building blocks used by `setup` and `agent start`. You typically don't need them directly.
| Command | Description | When to use |
|---|---|---|
| `meshterm daemon start` | Start background message-injection daemon | Already have a tmux session, just need message push |
| `meshterm daemon stop/status` | Manage the daemon | Debugging daemon issues |
| `meshterm client start` | Foreground daemon (blocks terminal) | Debugging message injection |
| `meshterm mcp` | Start MCP server (stdio) | Custom MCP integration, not using `setup` |
| `meshterm tui` | Launch terminal dashboard | Visual overview of agents, messages, rooms |
All endpoints (except `/health`) require the `x-mesh-secret` header.
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Health check (no auth) |
| Method | Path | Description |
|---|---|---|
| POST | `/agents/register` | Register `{name, type, host}` |
| GET | `/agents` | List agents |
| Method | Path | Description |
|---|---|---|
| POST | `/messages` | Send `{from_agent, to_agent, body, broadcast?}` |
| GET | `/messages/:agent?unread=true` | Get messages for agent |
| PATCH | `/messages/:id/read` | Mark message read |
| GET | `/messages/:agent/history?limit=50` | Conversation history |
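As a concrete illustration of the send endpoint, this is the request an HTTP client would assemble (TypeScript sketch; the helper name is ours, the field names come from the table above):

```typescript
// Build POST /messages with the documented body fields and the
// x-mesh-secret header required by every authenticated route.
function buildSendRequest(
  server: string,
  secret: string,
  msg: { from_agent: string; to_agent: string; body: string; broadcast?: boolean },
) {
  return {
    method: "POST" as const,
    url: `${server}/messages`,
    headers: { "x-mesh-secret": secret, "content-type": "application/json" },
    body: JSON.stringify(msg),
  };
}

const send = buildSendRequest("http://localhost:4200", "demo-secret", {
  from_agent: "my-agent",
  to_agent: "agent-1",
  body: "refactor the auth module",
});
// fetch(send.url, send) would deliver the message.
```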
| Method | Path | Description |
|---|---|---|
| POST | `/roles` | Create/update `{name, agents, priority, fallback, capabilities}` |
| GET | `/roles` | List roles |
| GET | `/roles/:name` | Get role details |
| Method | Path | Description |
|---|---|---|
| POST | `/rooms` | Create `{name, members, mode, moderator?}` |
| GET | `/rooms` | List rooms |
| GET | `/rooms/:name` | Get room details |
| DELETE | `/rooms/:name` | Close room |
| POST | `/rooms/:name/join` | Join `{agent}` |
| POST | `/rooms/:name/leave` | Leave `{agent}` |
| POST | `/rooms/:name/messages` | Send `{from_agent, body}` |
| GET | `/rooms/:name/messages?limit=50` | Room history |
If you prefer Docker over running the server directly:
```bash
git clone https://github.com/KenEzekiel/meshterm.git
cd meshterm/docker
echo "MESH_SECRET=$(openssl rand -hex 16)" > .env
docker compose up -d
```

Port 4200, localhost only by default. For remote access, expose it via a reverse proxy with SSL.
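For example, a reverse-proxy configuration along these lines (an illustrative Nginx sketch; the domain and certificate paths are placeholders, and any SSL-terminating proxy works):

```nginx
# Hypothetical Nginx vhost terminating SSL in front of the mesh server.
server {
    listen 443 ssl;
    server_name mesh.example.com;   # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/mesh.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mesh.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4200;   # meshterm server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```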
The Docker container joins the `npm_default` network if available (for Nginx Proxy Manager integration).
```
Sender                        Server                        Receiver
  │                             │                             │
  │ POST /messages              │                             │
  │ ──────────────────────────→ │ stores message              │
  │                             │                             │
  │                             │ daemon polls (5s)           │
  │                             │ ←────────────────────────── │
  │                             │                             │
  │                             │ returns new messages        │
  │                             │ ──────────────────────────→ │
  │                             │                             │
  │                             │ tmux send-keys              │
  │                             │ "[mesh:sender] msg"         │
  │                             │ ──────────────────────────→ │
  │                             │                             │
  │                             │        agent processes task │
  │                             │                             │
  │                             │ mesh_reply (MCP tool)       │
  │                             │ ←────────────────────────── │
  │ GET /messages?unread        │                             │
  │ ──────────────────────────→ │                             │
  │                             │                             │
  │ reply                       │                             │
  │ ←────────────────────────── │                             │
```
The pipe is dumb. The agent is smart. meshterm just moves bytes between them.
meshterm is just HTTP — the only question is "can your machines reach the server?"
```bash
# Machine A (server)
meshterm server start --secret your-secret

# Machine B (agent)
meshterm init --server http://192.168.x.x:4200 --key your-secret --agent my-agent
meshterm setup kiro --session kiro
```

Option 1: Tailscale (recommended, free)

```bash
# Install Tailscale on both machines
meshterm server start --secret your-secret        # Machine A
meshterm init --server http://100.x.x.x:4200 ...  # Machine B
```

Option 2: ngrok (quick tunnel)

```bash
meshterm server start --secret your-secret           # Machine A
ngrok http 4200                                      # → https://abc123.ngrok.io
meshterm init --server https://abc123.ngrok.io ...   # Machine B
```

Option 3: VPS (always online)
Deploy the server on a VPS (Hetzner, DigitalOcean, etc.), put it behind a reverse proxy with SSL. Both machines connect to the public URL. Most reliable for persistent setups.
- HTTP message broker
- tmux inject client
- CLI (send, poll, agents, status, roles, rooms)
- MCP server (13 tools)
- Role-based routing with priority + fallback
- Rooms (4 modes)
- TUI dashboard
- Background daemon
- Auto-setup for 5 agents
- npm published
- Message delivery states (queued → delivered → acknowledged)
- Per-agent API keys
- Structured error responses
- WebSocket push (replace polling)
MIT