Agent Relay is a local-first control plane for your own AI agent fleet.
It does not review code itself. It accepts a task in a web UI, launches configured agents, streams their logs, and stores their final JSON results.
The main current use case is code review across your own machines and CLI agents.
Simple example:
- A teammate sends you a GitHub PR link.
- You paste that PR into Agent Relay.
- Agent Relay runs your enabled agents, for example:
  - Codex on the Mac mini
  - Claude on the Mac mini
  - Codex on the MacBook Pro
- Each agent uses its own tools and credentials to inspect the PR.
- You watch the live logs in one place and compare the final outputs.
- If one CLI exposes a session id, Agent Relay keeps it so you can continue later in that same session.
So the value today is not “replace GitHub review bots.” It is “give me one place to launch and observe my own agent reviews across local and remote machines.”
- Relay: the coordinator and the web UI.
- Host: a machine that can run agents.
- Agent: one runnable entry on a host, such as Codex, Claude, or Kimi.
The important part:
- the relay can launch agents directly on the same machine it runs on
- you only need a separate host process for other machines
So your Mac mini can be:
- the relay
- a local host for local agents
And you can add any number of extra remote hosts later.
Default local ports:
- `4310`: the actual Agent Relay app and API.
- `4311`: a small internal proxy used only for serving the app under a Tailscale subpath.
- `4320`: the host API on a remote machine, if you run a separate host.
Normal local usage:
http://127.0.0.1:4310/
Tailscale usage:
- Relay: `https://YOUR-HOSTNAME.ts.net/agent-relay/`
- Host: `https://YOUR-HOSTNAME.ts.net/agent-host/health`
Why two local relay ports exist:
- `4310` is the real app
- `4311` is only the subpath shim for `/agent-relay/`
You normally do not open 4311 directly.
Install dependencies:

```sh
npm install
```

Create your env file:

```sh
cp .env.example .env
```

Start the relay:

```sh
npm start
```

Open `http://127.0.0.1:4310/`.
Current variables in .env.example:
- `AGENT_RELAY_PORT`: relay port. Default `4310`.
- `AGENT_HOST_PORT`: host API port. Default `4320`.
- `AGENT_HOST_TOKEN`: shared bearer token used when Relay talks to a remote host.
If Relay and the agents live on the same machine, you can still keep the token set, but you do not need to run a separate host process there.
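As a sketch of how that shared token travels (the doc only says it is a bearer token, so the exact wire format here is an assumption):

```shell
# Sketch: the Authorization header Relay is assumed to send to a remote host,
# built from the shared token in .env. The header shape is an assumption based
# on "bearer token"; only the token value itself comes from your config.
AGENT_HOST_TOKEN="CHANGE_ME_SHARED_TOKEN"
AUTH_HEADER="Authorization: Bearer ${AGENT_HOST_TOKEN}"
echo "$AUTH_HEADER"
```

For a manual check against a running host you could pass the same header by hand, e.g. `curl -H "$AUTH_HEADER" http://127.0.0.1:4320/health` (whether `/health` itself enforces the token is not specified here).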
Committed examples:

- `config/agent-relay.example.json`
- `config/host.example.json`

Local machine-specific files:

- `config/agent-relay.local.json`
- `config/host.local.json`

Rule of thumb:

- commit the `*.example.json` files
- edit the `*.local.json` files on your own machines
- the local files are ignored by Git
The main relay example config lives in `config/agent-relay.example.json`.
For your actual machine, edit `config/agent-relay.local.json`.
Important fields:
- `port`: relay port.
- `agents`: all configured agent entries.
- `label`: name shown in the UI.
- `hostLabel`: which host group the agent appears under in the UI.
- `launcher`: how Relay starts this agent.
- `cwd`: working directory before the CLI starts.
- `command`: executable to run for local launchers.
- `baseUrl`: remote host URL for `http-json` agents.
- `authToken`: shared token for remote hosts.
- `remoteAgentKey`: the remote agent id on that host.
- `timeoutSec`: optional hard timeout in seconds. Omit it for no timeout.
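The one field the examples in this document never use is `timeoutSec`. A hypothetical entry that stops the agent after ten minutes would look like:

```json
{
  "agents": {
    "codex-mini": {
      "label": "Codex on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "codex-local",
      "cwd": "${HOME}",
      "command": "codex",
      "timeoutSec": 600
    }
  }
}
```

Omit `timeoutSec` entirely to let the agent run without a hard limit.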
Current launcher values:
- `codex-local`: starts local `codex`.
- `claude-local`: starts local `claude`.
- `shell-json`: starts a generic local CLI that returns JSON-like text. Good for tools like `kimi`.
- `http-json`: sends the task to a remote host over HTTP.
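As a sketch of what a `shell-json` launch would look like for the `kimi` configuration used in this document (it is an assumption that Relay appends the task text after the configured `args`):

```shell
# Sketch: a shell-json launcher invocation. The flags come from the "args"
# array in the example config; appending the task prompt as the final
# argument is an assumption about Relay's behavior.
TASK="Review https://github.com/example/repo/pull/1"
set -- kimi --print --output-format text --final-message-only -p "$TASK"
printf '%s\n' "$*"
```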
A local `codex-local` agent:

```json
{
  "agents": {
    "codex-mini": {
      "label": "Codex on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "codex-local",
      "cwd": "${HOME}",
      "command": "codex"
    }
  }
}
```

A local `shell-json` agent:

```json
{
  "agents": {
    "kimi-mini": {
      "label": "Kimi on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "shell-json",
      "cwd": "${HOME}",
      "command": "kimi",
      "args": [
        "--print",
        "--output-format",
        "text",
        "--final-message-only",
        "-p"
      ]
    }
  }
}
```

A remote `http-json` agent:

```json
{
  "agents": {
    "codex-mbp": {
      "label": "Codex on MacBook Pro",
      "hostLabel": "MacBook Pro",
      "launcher": "http-json",
      "baseUrl": "https://your-macbook-pro.ts.net/agent-host",
      "authToken": "${AGENT_HOST_TOKEN}",
      "remoteAgentKey": "codex"
    }
  }
}
```

The host example config lives in `config/host.example.json`.
You only need this on a machine that should expose a remote host API.
For your actual machine, edit `config/host.local.json`.
Example:
```json
{
  "port": "${AGENT_HOST_PORT}",
  "listenHost": "127.0.0.1",
  "authToken": "${AGENT_HOST_TOKEN}",
  "agents": {
    "codex": {
      "label": "Codex local agent",
      "launcher": "codex-local",
      "cwd": "${HOME}",
      "command": "codex"
    }
  }
}
```

Use one relay, plus:
- local agents on the same machine when possible
- separate remote hosts only when you need agents on other machines
This is the practical mixed setup:
- Relay runs on the Mac mini
- the Mac mini also runs local agents directly
- the MacBook Pro runs one separate host process
Edit these exact files:

- On the Mac mini:
  - `.env.example` copied to `.env`
  - `config/agent-relay.local.json`
- On the MacBook Pro:
  - `.env.example` copied to `.env`
  - `config/host.local.json`
File: `.env`

Example contents:

```
AGENT_RELAY_PORT=4310
AGENT_HOST_PORT=4320
AGENT_HOST_TOKEN=CHANGE_ME_SHARED_TOKEN
```

File: `config/agent-relay.local.json`
Example contents:
```json
{
  "agents": {
    "codex-mini": {
      "label": "Codex on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "codex-local",
      "cwd": "${HOME}",
      "command": "codex"
    },
    "claude-mini": {
      "label": "Claude on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "claude-local",
      "cwd": "${HOME}",
      "command": "claude"
    },
    "kimi-mini": {
      "label": "Kimi on Mac mini",
      "hostLabel": "Mac mini",
      "launcher": "shell-json",
      "cwd": "${HOME}",
      "command": "kimi",
      "args": [
        "--print",
        "--output-format",
        "text",
        "--final-message-only",
        "-p"
      ]
    },
    "codex-mbp": {
      "label": "Codex on MacBook Pro",
      "hostLabel": "MacBook Pro",
      "launcher": "http-json",
      "baseUrl": "https://your-macbook-pro.ts.net/agent-host",
      "authToken": "${AGENT_HOST_TOKEN}",
      "remoteAgentKey": "codex"
    },
    "kimi-mbp": {
      "label": "Kimi on MacBook Pro",
      "hostLabel": "MacBook Pro",
      "launcher": "http-json",
      "baseUrl": "https://your-macbook-pro.ts.net/agent-host",
      "authToken": "${AGENT_HOST_TOKEN}",
      "remoteAgentKey": "kimi"
    }
  }
}
```

Notes:
- `codex-mini`, `claude-mini`, and `kimi-mini` run directly on the Mac mini
- `codex-mbp` and `kimi-mbp` call the remote MacBook Pro host over HTTP
- if you do not want a local Kimi agent on the Mac mini, set `"enabled": false` on that entry
File: `.env`

Example contents:

```
AGENT_RELAY_PORT=4310
AGENT_HOST_PORT=4320
AGENT_HOST_TOKEN=CHANGE_ME_SHARED_TOKEN
```

File: `config/host.local.json`
Example contents:
```json
{
  "port": "${AGENT_HOST_PORT}",
  "listenHost": "127.0.0.1",
  "authToken": "${AGENT_HOST_TOKEN}",
  "agents": {
    "codex": {
      "label": "Codex local agent",
      "launcher": "codex-local",
      "enabled": true,
      "cwd": "${HOME}",
      "command": "codex"
    },
    "kimi": {
      "label": "Kimi local agent",
      "launcher": "shell-json",
      "enabled": true,
      "cwd": "${HOME}",
      "command": "kimi",
      "args": [
        "--print",
        "--output-format",
        "text",
        "--final-message-only",
        "-p"
      ]
    }
  }
}
```

Run order in that setup:
- run `npm start` on the Mac mini
- do not run `npm run host` on the Mac mini unless you specifically want to expose it as a remote host too
- run `npm run host` on the MacBook Pro
Verification:

- on Mac mini:
  - open `http://127.0.0.1:4310/`
  - open `https://YOUR-HOSTNAME.ts.net/agent-relay/` (if Tailscale Serve is configured)
- on MacBook Pro:
  - open `http://127.0.0.1:4320/health`
  - open `https://YOUR-HOSTNAME.ts.net/agent-host/health` (if Tailscale Serve is configured)
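The local checks can be kept as a tiny script per machine. This sketch only prints the curl commands to run by hand, since it assumes nothing is up yet; the URLs match this setup:

```shell
# Sketch: verification commands for this setup. Run RELAY_CHECK on the
# Mac mini and HOST_CHECK on the MacBook Pro once the services are started.
RELAY_CHECK="curl -sf http://127.0.0.1:4310/"
HOST_CHECK="curl -sf http://127.0.0.1:4320/health"
echo "Mac mini:    $RELAY_CHECK"
echo "MacBook Pro: $HOST_CHECK"
```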
This is the easiest setup.
- Relay runs on Mac mini
- Codex/Claude/Kimi are installed on Mac mini
- Relay launches them directly
Steps:

- add local agents to `config/agent-relay.local.json`
- start Relay: `npm start`

That is all. You do not need to run `npm run host` on that same machine.
- Relay runs on Mac mini
- Mac mini can still run its own local agents directly
- MacBook Pro runs a separate host process for its own local agents
- edit `config/agent-relay.local.json`
- add remote agents with `launcher: "http-json"` and the MacBook Pro host URL
- set up `.env` (`cp .env.example .env`); at minimum:

  ```
  AGENT_RELAY_PORT=4310
  AGENT_HOST_TOKEN=CHANGE_ME
  ```

- start Relay: `npm start`

Then, on the MacBook Pro:

- clone the repo
- install dependencies: `npm install`
- create `.env` (`cp .env.example .env`); at minimum:

  ```
  AGENT_HOST_PORT=4320
  AGENT_HOST_TOKEN=THE_SAME_TOKEN_AS_THE_RELAY
  ```

- edit `config/host.local.json` so the local agents match what is installed on that MacBook Pro
- start the host: `npm run host`
- verify: `http://127.0.0.1:4320/health`
Relay itself runs on:
http://127.0.0.1:4310/
The local proxy on 4311 exists only to make this subpath work:
/agent-relay/
The helper script `./configure-tailscale-serve.sh` sets Tailscale Serve to:

```
/agent-relay/ -> http://127.0.0.1:4311/agent-relay/
```
On the remote machine, run `./host-configure-tailscale-serve.sh`. This exposes the host under:

```
https://YOUR-HOSTNAME.ts.net/agent-host/
```
If a CLI prints a recognizable session id, Relay stores it and shows it on past jobs.
That makes it possible to continue work later in the original agent session.
- `npm start`: start Relay.
- `npm run host`: start the host API on a machine that should expose remote agents.
- `npm run build`: build the frontend into `public/`.
This repo includes separate drafts for three integration paths:
- OpenClaw skill:
- Claude Project / system prompt draft:
- OpenAI Custom GPT / hosted agent draft:
They are intended as practical starting points for running Agent Relay from other AI control surfaces.