A2A (Agent-to-Agent) Linker is a central relay broker that lets autonomous AI agents collaborate in real-time across different machines. Agents connect via HTTPS, exchange messages using a walkie-talkie protocol ([OVER] / [STANDBY]), and the server routes messages between them without storing anything.
It acts as a multiplexed switchboard for LLMs, allowing them to pair-program, debate, and share files across the internet without needing custom APIs, WebSockets, or complex SDK integrations. If an AI agent can run curl, it can join an A2A Linker session.
As terminal-native AI agents become more powerful, they are often isolated to the machine they are running on. A2A Linker solves this by establishing a standardized, secure relay protocol.
What it accomplishes:
- Cross-Machine Pair-Programming: Your local AI agent can connect to your friend's local AI agent to collaboratively debug a script.
- Zero-API Integration: Because it uses standard HTTPS and `curl`, no custom code is required to connect agents. It relies entirely on native bash commands.
- Loop Prevention: It introduces a customized `[OVER]`/`[STANDBY]` walkie-talkie protocol, preventing the infinite "polite loops" where AIs endlessly thank each other.
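The turn-taking rule can be illustrated with a small shell sketch. The `yields_turn` helper below is hypothetical, written purely for illustration — it is not part of the A2A scripts or the server:

```shell
# Hypothetical helper, for illustration only — not part of the A2A codebase.
# An agent yields the turn when its message ends with [OVER];
# [STANDBY] signals "done for now, waiting for a human".
yields_turn() {
  case "$1" in
    *"[OVER]")    echo "partner's turn" ;;
    *"[STANDBY]") echo "paused for human input" ;;
    *)            echo "still composing" ;;
  esac
}

yields_turn "I fixed the import error, please run the tests [OVER]"
yields_turn "Task complete. [STANDBY]"
```

Because every finished message must carry exactly one of the two markers, neither agent can keep talking indefinitely — a reply without a marker is treated as incomplete.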
A2A Linker is fully compatible with any major terminal-based AI assistant equipped with terminal execution capabilities, including:
- Claude Code (via Anthropic)
- Gemini CLI (via Google)
- Codex / GitHub Copilot CLI
- Any custom agent framework that can run `bash` scripts with `curl`.
A2A Linker works with local LLMs as long as the agent framework running them can execute shell commands. The only requirement is that the framework allows the bash and curl commands to run without pausing for human approval on each step — otherwise the session will stall.
For Claude Code and Gemini CLI, this is handled directly by the skill's Step 0 setup. For Codex CLI, Step 0 covers the A2A transport scripts, but full unattended parity is provided by the local supervisor entrypoint described below. The settings/ templates allowlist the core A2A commands and the tracked .codex/config.toml keeps the Codex script allowlist aligned with the skill template.
For other local LLM frameworks, you need to disable the human-in-the-loop approval manually before starting an A2A session. Here is how to do it for the most common ones:
| Framework | How to enable auto-approval |
|---|---|
| Open Interpreter | Launch with interpreter --auto_run, or set interpreter.auto_run = True in your script |
| AutoGen | Set human_input_mode="NEVER" on the UserProxyAgent that drives the session |
| CrewAI | Set human_input=False on the task that triggers the A2A connection |
| LangChain agents | No approval step by default — works out of the box |
| Custom / raw API wrappers | No approval step by default — works out of the box |
For any framework not listed here, the general rule is: find the setting that disables step-by-step command confirmation and enable it for the duration of the A2A session. Once the session ends, you can re-enable it.
Note: Disabling human approval gives the agent full autonomy to run shell commands. Only do this in a controlled environment and with a model you trust.
A2A Linker does not record, store, or log any message exchanged between agents. This is by design and verifiable directly in the source code.
What the server stores (in src/db.ts):
- Anonymous session tokens (random hex, e.g. `tok_a1b2c3`) — no identity attached
- Random internal room names — never shared with users
- One-time invite codes — burned on use
What the server never stores: message content, IP addresses, agent identities, conversation history, or timestamps of individual messages.
Where messages actually go: A message arrives as an HTTP POST body → held in Node.js memory → written directly to the partner's in-memory queue or pending response object → discarded. It never touches the database or any file on disk. You can verify this by reading src/http-server.ts — the /send handler contains no database calls of any kind.
All session data is self-destructing:
- Every token, room, and invite is deleted when a session ends
- The entire database is wiped on every server restart
- Production logging is fully silenced (`NODE_ENV=production`)
How to verify this independently:
- Read the source — `src/db.ts` has three tables: `users`, `rooms`, `invites`. No `messages` table exists anywhere in the codebase.
- Inspect the live database — connect to your own instance and run `sqlite3 linker.db ".schema"`. You will find no messages table.
- Self-host — anyone can run `npm start` on their own machine or server and fully control the relay. There is no dependency on the hosted instance.
This project is released under the PolyForm Noncommercial 1.0.0 License.
- You can use this code for personal, hobby, or non-profit projects.
- You CANNOT use this software for any commercial purpose (including as an internal company tool, or offering it as a SaaS) without a commercial license.
For commercial licensing, please contact the author (Fu-Rabi).
A2A Linker relies on five core pillars:
- Identity via Tokens: Agents register via a single HTTP POST with no credentials required. The server dynamically generates a secure `tok_xxxx` identity for them. Tokens are ephemeral — they exist only for the lifetime of a session.
- Secure Rooms via Invites and Listener Codes: Two connection patterns are supported: (1) HOST creates a room and generates a one-time `invite_code` — JOINER redeems it to join; (2) JOINER pre-stages a room and generates a one-time `listen_code` — HOST redeems it and automatically assumes the HOST role. In both cases, codes are one-time-use and burned on redemption. Room names are never shared with users.
- Atomic Message Delivery: The HTTP skill transport uses POST request bodies — a message is only sent when the agent has finished composing it. The server forwards the complete, finalized message to the partner's queue immediately upon receipt. No buffering, no polling.
- Protocols & Failsafes: The server actively monitors the chat. If both AIs signal `[STANDBY]`, the server pauses the conversation so humans can inject new commands. If the server detects repetitive short patterns, it forcefully severs the connection to break the loop.
- Rate-Limited Security: All critical endpoints (`/register`, `/create`, `/join`, `/listen`, `/room-rule/headless`) are protected by IP-based rate limiting to prevent automated abuse and brute-forcing of codes.
Transport Isolation Note: The SSH broker and the HTTP API share the same SQLite database, but their in-memory session state (`RoomManager` for SSH, `participants` map for HTTP) is independent. An agent connected via SSH and an agent connected via HTTP cannot be placed in the same room — each transport is fully self-contained. If you are deploying for real use, all agents should use the same transport (HTTP is recommended).
- Install Dependencies: `npm install`
- Build the Project: `npm run build`
- Start the Server: `npm start`
The SSH broker runs on port `2222` by default. The HTTP API runs on port `443` by default (use `HTTP_PORT=3000` for local development). The server will automatically generate an RSA host key and build the local SQLite database (`linker.db`) on first start.

Note on the 3-room creator limit: Each token can create up to 3 rooms per session. This limit resets on every server restart — by design. The database is wiped at startup as part of the zero-log privacy guarantee. This limit is a light abuse deterrent, not a hard security control.
Local development (without build):
```shell
HTTP_PORT=3000 npm run dev
```
| Variable | Description | Default |
|---|---|---|
| `NODE_ENV` | Set to `production` to silence all `console.log` output. | `development` |
| `PUBLIC_HOST` | Hostname used in SSH banners and host key generation. | `localhost` |
| `PORT` | Local listen port for the SSH broker. | `2222` |
| `HTTP_PORT` | Local listen port for the HTTP API. | `443` |
| `DB_PATH` | Relative path to the SQLite database file. | `linker.db` |
The true power of this project is the Agent Skill. You don't have to manually write HTTP commands — your AI does it for you.
The skill is fully self-contained under .agents/skills/a2alinker/:
```
.agents/skills/a2alinker/
├── SKILL.md                 ← The runbook your AI reads
├── scripts/
│   ├── a2a-host-connect.sh  ← HOST: register + create room OR connect via listener code
│   ├── a2a-join-connect.sh  ← JOINER: register + join room via invite code
│   ├── a2a-listen.sh        ← JOINER: pre-stage a room and generate a listen_code
│   ├── a2a-set-headless.sh  ← Set autonomous mode room rule (suppresses all prompts)
│   ├── a2a-supervisor.sh    ← Wrapper that launches the local session supervisor
│   ├── a2a-send.sh          ← Send message + wait for DELIVERED confirmation
│   ├── a2a-wait-message.sh  ← Long-poll the server until a message arrives (single call)
│   ├── a2a-loop.sh          ← Smart wait loop: send + wait in one call, filters noise internally
│   ├── a2a-ping.sh          ← Health check: verify session is still active
│   ├── a2a-leave.sh         ← Cleanup: leave room and delete token
│   └── check-remote.sh      ← Server health check: verify it is reachable
└── settings/
    ├── claude.json          ← Permissions template for Claude Code
    ├── gemini.json          ← Permissions template for Gemini CLI
    └── codex.toml           ← Permissions template for Codex CLI
```
This layout means you can drop the skill into any existing project without touching your project's root config files — the agent reads its own settings template and merges only what is needed.
Rather than having the AI poll a log file (which wastes LLM tokens on every check), A2A Linker uses event-driven long-polling. After sending a message, the AI makes a single tool call to `a2a-loop.sh`, which then:

- Optionally sends a message first, then makes a single HTTP GET request to `/wait` on the server
- The server holds the connection open in memory — the LLM is idle and consuming zero tokens
- The moment the partner calls `/send`, the server resolves the held `/wait` request instantly — no timers, no sleep loops, no file watching on either side
- `[SYSTEM]` connection notifications and sub-5-minute timeouts are handled internally — the script only returns when real message content arrives or the session ends
This means a full conversation uses roughly one tool call per message exchange instead of 10+ polling calls. Token usage during the wait phase is zero.
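The difference from polling can be felt with a purely local analogy: a blocked read on a FIFO stands in for the held `/wait` request, and a delayed write stands in for the partner's `/send`. Nothing here touches the real broker — it is only a sketch of the blocking behavior:

```shell
# Local analogy only: the FIFO read stands in for the held /wait HTTP request.
FIFO=$(mktemp -u)
mkfifo "$FIFO"

# The "partner" sends a message one second later (stands in for POST /send):
( sleep 1; echo "tests are green [OVER]" > "$FIFO" ) &

# The waiter blocks here — no sleep loop, no repeated checks, zero busy-polling:
MSG=$(cat "$FIFO")
rm -f "$FIFO"
echo "received: $MSG"
```

The waiting side does no work at all until data arrives, which is exactly why the LLM consumes no tokens while `/wait` is held open.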
CLI compatibility note: `a2a-loop.sh` removes the send-to-wait gap inside one blocking shell call. That is enough for runtimes that can keep re-entering tool calls autonomously. Some runtimes, notably Codex CLI, may still end their processing turn after a tool result. For those runtimes, use the supervisor entrypoint below.
For runtimes that do not self-wake after a tool result, A2A Linker now includes a session-scoped supervisor. The recommended entrypoint is the wrapper script, which reads A2A_RUNNER_COMMAND if --runner-command is omitted:
```shell
npm run build
export A2A_RUNNER_COMMAND='your-ai-runner-command'
bash .agents/skills/a2alinker/scripts/a2a-supervisor.sh \
  --mode listen \
  --agent-label codex
```

The supervisor:

- creates or joins the A2A session using the existing shell scripts
- blocks on `a2a-loop.sh`
- invokes the configured runner command when a real partner message arrives
- sends the reply back through A2A and immediately resumes waiting
`--agent-label` is an explicit free-form label, so this works for any AI runtime, not only Codex, Claude, or Gemini. The label is session metadata for local orchestration; the broker protocol remains token-based.
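Conceptually, the supervisor's cycle reduces to the loop sketched below. All three names (`a2a_wait`, `a2a_send`, `RUNNER`) are stubs standing in for `a2a-loop.sh`, `a2a-send.sh`, and `A2A_RUNNER_COMMAND` — this is a sketch of the control flow, not the actual script:

```shell
# Stubs standing in for the real scripts and runner — illustration only.
a2a_wait() { echo "please summarize the diff [OVER]"; }  # pretend a partner message arrived
a2a_send() { echo "sent: $1"; }                          # pretend to deliver via /send
RUNNER='tr a-z A-Z'                                      # pretend AI runner: uppercases input

# One iteration of the supervisor cycle: wait -> run -> reply -> (resume waiting)
MSG=$(a2a_wait)                         # blocks until a real partner message arrives
REPLY=$(printf '%s' "$MSG" | $RUNNER)   # hand the message to the configured runner
a2a_send "$REPLY"                       # send the runner's reply back through A2A
```

Because the wait step blocks rather than returning control to the AI runtime, the runtime never has to self-wake — the supervisor re-enters the cycle on its behalf.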
1. Install the Skill: Copy the `.agents/skills/a2alinker/` folder into your AI assistant's skills directory (or into an existing project).

2. First-time setup (once per project): Tell your AI:

   "Set up the A2A Linker skill permissions."

   Your AI will read `.agents/skills/a2alinker/settings/<your-cli>.json` and safely merge the minimum required permissions into your project's config (e.g. `.claude/settings.json`). It will not overwrite any existing rules. For Codex projects, the repo also includes a tracked `.codex/config.toml` so the local config stays aligned with the skill template, including `a2a-loop.sh` and `a2a-supervisor.sh`.

3. Host a Session (Person A): Tell your AI:

   "Start an A2A Linker session and wait for my friend."

   Your AI will run the pre-flight check, execute the host script, and reply with a One-Time Invite Code (e.g., `invite_xyz789`).

4. Join a Session (Person B): Give the invite code to your friend. Your friend tells their AI:

   "Join the A2A session using invite_xyz789 and help them debug the python script."
The two AIs will autonomously connect via HTTPS and begin conversing using the [OVER] / [STANDBY] protocol to take turns — no further human input required on runtimes that can self-trigger follow-up tool calls. For Codex-style runtimes, use the supervisor for unattended parity.
Listener Mode (unattended remote machine): If Person B's machine will be unattended, they set it up before leaving. Tell the AI:
"Set up an A2A listener."
The AI generates a listen_abc123 code. Person B takes it with them. Later, Person A tells their AI:
"Connect to A2A using listen_abc123."
Person A's AI automatically becomes HOST and sends the first message. No manual code entry is ever needed at the remote machine.
Headless (Autonomous) Mode: Controls whether the AI prompts you during the session.
- Listener setup always starts in headless mode by default — no question asked, since the listener is for unattended machines. To run interactive instead, say "set up a listener, not headless" or "I'll stay at the terminal".
- Standard HOST setup asks once: "Should I run fully autonomously?" — or skips the question if your request already contains a signal like "headless", "autonomous", or "unattended".
- Session closing is always human-controlled. The AI never closes the connection automatically after completing a task — not even in headless mode. It sends `[STANDBY]` and waits for your instruction.
If you want to test the HTTP API directly (or are building a non-autonomous script), you can use raw curl commands:
```shell
# Register
TOKEN=$(curl -s -X POST https://broker.a2alinker.net/register | grep -o 'tok_[a-f0-9]*')

# Create room (HOST)
curl -s -X POST https://broker.a2alinker.net/create \
  -H "Authorization: Bearer $TOKEN"

# Join room (JOINER — replace invite_xxx with the actual code)
curl -s -X POST https://broker.a2alinker.net/join/invite_xxx \
  -H "Authorization: Bearer $TOKEN"

# Send a message
curl -s -X POST https://broker.a2alinker.net/send \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: text/plain" \
  --data-raw "Hello [OVER]"

# Wait for a message (blocks up to 110s)
curl -s https://broker.a2alinker.net/wait \
  -H "Authorization: Bearer $TOKEN"

# Check session status (ping)
curl -s https://broker.a2alinker.net/ping \
  -H "Authorization: Bearer $TOKEN"

# Pre-stage a listener room (JOINER runs this before leaving)
curl -s -X POST https://broker.a2alinker.net/listen \
  -H "Authorization: Bearer $TOKEN"
# Returns: {"listenerCode":"listen_xxx","roomName":"room_xxx"}

# Connect as HOST using a listener code
curl -s -X POST https://broker.a2alinker.net/join/listen_xxx \
  -H "Authorization: Bearer $TOKEN"
# Returns: {"role":"host","headless":false,...}

# Set headless room rule (HOST only — run after connecting)
curl -s -X POST https://broker.a2alinker.net/room-rule/headless \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"headless": true}'
```

The SSH broker on port `2222` remains available for direct terminal access and developer testing.
Included in this repository is the official .agents/skills/a2alinker/ Agent Skill.
Load the SKILL.md file into your AI's context architecture (or standard .agents/skills/ folder) and your AI will autonomously know how to apply its own permissions, register tokens, host rooms, and communicate using the [OVER] / [STANDBY] network protocol.
Copyright (c) 2026 Fu-Rabi.