Control Claude Code from Telegram — one topic per project, no SSH, no terminal babysitting.
Send a message from your phone. Claude thinks inside the project directory. Replies with formatted code, logs, diffs — right back in the same Telegram topic.
Runs in Docker. One command to deploy, auto-restarts on crash, Claude Code auto-updates on every start:
```bash
curl -O https://raw.githubusercontent.com/shaike1/relay/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/shaike1/relay/main/.env.example
cp .env.example .env && nano .env
docker compose up -d
```

There are other ways to control Claude remotely. Topix Relay does something distinct.
Topics as a project switchboard
Telegram's forum topics are the core insight. One topic = one project — not by convention, but structurally: messages in topic A never reach topic B, notifications are per-topic, search is scoped per-topic. Open your Telegram group and you have a full project dashboard: every running Claude session, its history, and its current status. It's Slack sidebar UX, built on infrastructure you already use.
You get these for free on every topic, without any extra code:
- Per-project notification control (mute a quiet project, pin a critical one)
- Searchable history scoped to each project
- Any team member you add to the group can see and interact with any project's topic
Conversation persistence — not just command execution
Relay isn't "send a shell command, get output back." Claude holds full conversation context across restarts. When the server reboots, Claude resumes exactly where it left off: the same decisions, the same in-progress plan, the same awareness of what was tried and why. This is what makes async mobile development actually work — you pick up where you left off, from your phone, hours later.
Zero additional apps
Telegram is already on your phone. There's no SSH client, no web UI to keep open, no VPN, no port to expose. The interface you already use for messaging is the interface for your dev environment.
Multi-agent — sessions talk to each other
Every Claude session is a peer. An orchestrator session can query list_peers, then message_peer to delegate subtasks to specialized sessions running in parallel — all without a human in the loop. Build a frontend, deploy an API, and run tests simultaneously, with sessions coordinating among themselves and reporting back.
One bot, many servers
A single Topix Relay instance controls sessions across multiple servers over SSH — local and remote — from one Telegram group. Add a remote host once, then `/new root@server /path` provisions everything: topic, tmux session, MCP config, and Claude launch.
Running Claude Code on a remote server is powerful but fragile:
- You SSH in, attach to tmux, start a session
- You step away — your SSH connection drops, or you close your laptop
- You come back, SSH in again, find tmux, figure out where things left off
- The server reboots — everything is gone, you rebuild from scratch
This is the normal remote dev workflow. It works, but it's constant overhead: maintaining connections, babysitting sessions, manually restarting things after downtime.
Topix Relay removes that entirely. Your projects run as persistent tmux sessions inside a Docker container with `restart: always`. Claude auto-resumes its last conversation on restart, and Claude Code auto-updates on every container start. You interact through Telegram — which is always open on your phone anyway. A dropped SSH connection changes nothing. A server reboot? The container comes back up, Claude resumes, and your Telegram topic is right where you left it.
To add a new project, send one command in Telegram:
```
# Local project on the relay server
/new /path/to/project

# Project on a remote server over SSH
/new root@your-backup-host /root/myproject

# With a custom topic name
/new root@your-backup-host /root/myproject my-app
```
In one command, Topix Relay does all of this automatically:
- Creates the Telegram topic in your supergroup (gets a `thread_id`)
- Creates the project folder on the target host if it doesn't exist
- Writes `.mcp.json` into the project folder, wired to the new topic's `thread_id`
- Creates a tmux session on the host (local or remote via SSH) in the project directory
- Launches Claude in a self-restarting loop — `--continue` to resume any prior conversation, falling back to a fresh start if none exists
- Registers the session in `sessions.json` so it survives relay restarts
- Sends a confirmation message into the new topic so it's immediately live
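The provisioning flow can be sketched in a few lines of Python. This is a simplified illustration, not the actual `bot.py`: the helper names and the restart-loop wording are assumptions, though `createForumTopic` and the `.mcp.json` shape match what the README describes.

```python
import json
import subprocess
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def create_topic(token: str, chat_id: int, name: str) -> int:
    """Create a forum topic via the Bot API; returns its message_thread_id."""
    req = urllib.request.Request(
        API.format(token=token, method="createForumTopic"),
        data=json.dumps({"chat_id": chat_id, "name": name}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]["message_thread_id"]

def mcp_config(relay_dir: str, thread_id: int) -> dict:
    """Build the .mcp.json contents wired to this topic's thread_id."""
    return {
        "mcpServers": {
            "telegram": {
                "command": "bun",
                "args": ["run", "--cwd", f"{relay_dir}/mcp-telegram",
                         "--silent", "start"],
                "env": {"TELEGRAM_THREAD_ID": str(thread_id)},
            }
        }
    }

def launch_session(session: str, path: str) -> None:
    """Start Claude in a detached tmux session; resume prior conversation,
    falling back to a fresh start if none exists."""
    subprocess.run([
        "tmux", "new-session", "-d", "-s", session, "-c", path,
        "bash", "-c", "while true; do claude --continue || claude; sleep 2; done",
    ], check=True)
```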
For remote projects, Topix Relay SSHes in to provision everything — no manual setup on the remote host needed.
```
Phone
  ↓
Telegram topic (one per project)
  ↓
Relay bot (single getUpdates long-poll)
  ↓
/tmp/tg-queue-{THREAD_ID}.jsonl
  ↓
MCP server · mcp-telegram/ (tails queue file)
  ↓
Claude Code (running in project directory)
  ↓
send_message → Telegram topic
  ↓
Phone
```
Relay now follows a strict single-speaker model for Telegram topics:
- `@RiGHT_AI_BoT` is the only visible Telegram-facing bot
- `session-driver` is the orchestrator that maps topic → session, filters queue traffic, and sends final replies
- `@Cody_Code_bot` / Claude Code is the internal execution engine and should return text only
That means:
- one visible bot per topic
- one execution session per topic
- no direct Telegram tool use from Claude/Cody runtime
- no optimizer/startup/system chatter sent back to users
See ARCHITECTURE.md for the concrete policy.
For the next implementation steps, see RELAY_ROADMAP.md.
Telegram's forum topics give each thread a unique `message_thread_id`. Relay uses that ID as the key for everything:
- One topic = one project. Each Telegram topic maps to exactly one tmux session running Claude Code in a specific directory. Messages stay isolated — no cross-talk between projects.
- One long-poll, many consumers. Telegram returns `409 Conflict` if two processes call `getUpdates` with the same bot token. The Relay bot holds the single long-poll and writes each incoming message to a queue file named after the topic's thread ID: `/tmp/tg-queue-{THREAD_ID}.jsonl`. Each MCP server instance reads only its own file — no conflicts, no duplicated API calls.
- Queue files as the handoff. The queue file decouples the Topix Relay bot from Claude's lifecycle. If Claude restarts mid-session, Relay keeps running and the queue keeps filling. When Claude comes back up, the MCP server resumes tailing from where it left off.
- `sessions.json` as the source of truth. The mapping of `thread_id → session name → project path → host` lives in `sessions.json`. Add a line and Relay knows which tmux session to write to and which queue file to update.
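In code, the fan-out step can look roughly like this (a hypothetical sketch of the pattern, not the actual `bot.py` implementation; the record fields written to the queue file are assumptions):

```python
from __future__ import annotations

import json
from pathlib import Path

def queue_path(thread_id: int) -> Path:
    """Each topic gets its own queue file, keyed by message_thread_id."""
    return Path(f"/tmp/tg-queue-{thread_id}.jsonl")

def fan_out(update: dict) -> Path | None:
    """Append one Telegram update to the queue file for its topic.

    Returns the queue file written, or None if the update carries
    no forum-topic thread id (e.g. a General-topic or private message).
    """
    msg = update.get("message") or {}
    thread_id = msg.get("message_thread_id")
    if thread_id is None:
        return None  # not addressed to any project topic
    path = queue_path(thread_id)
    with path.open("a") as f:
        f.write(json.dumps({
            "text": msg.get("text", ""),
            "from": (msg.get("from") or {}).get("id"),
        }) + "\n")
    return path
```

Because the write is an append to a per-topic file, the single long-poll never blocks on any consumer, and each MCP server instance tails exactly one file.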
| Path | What it is |
|---|---|
| `bot.py` | Relay bot — runs once, globally. Holds the Telegram long-poll, fans messages to queue files, provisions tmux sessions. |
| `mcp-telegram/` | MCP server — one instance per project. Tails its queue file and delivers messages to Claude as `notifications/claude/channel` events. |
| `CLAUDE_TEMPLATE.md` | Paste into your project's `CLAUDE.md` to tell Claude how to behave on Telegram. |
| `watchdog.sh` | Deploy to the backup server. Monitors the primary relay every 15s, activates the backup relay after 45s down, sends a Telegram alert on failover/recovery. |
| `self-monitor.sh` | Run via cron on the primary. Detects a relay outage, attempts auto-restart, sends a direct Telegram alert if the restart fails. |
| `sync-sessions.sh` | Run via cron on the primary. Pushes a host-flipped `sessions.json` to the backup server every 5 minutes. |
- Docker + Docker Compose
- A Telegram bot token from @BotFather
- A Telegram Supergroup with Topics enabled
- SSH key auth for any remote hosts (no password prompts)
Everything else (Python, Bun, Node.js, Claude Code) is bundled in the Docker image.
- Open @BotFather, send `/newbot`, follow the steps, copy the token
- Disable privacy mode: BotFather → `/mybots` → your bot → Bot Settings → Group Privacy → Turn off
- Create a Telegram group → Settings → Topics: Enable
- Add your bot as Admin with "Manage Topics" permission
```bash
curl -O https://raw.githubusercontent.com/shaike1/relay/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/shaike1/relay/main/.env.example
cp .env.example .env
```

Edit `.env`:

```
TELEGRAM_BOT_TOKEN=your_token_here
OWNER_ID=your_telegram_user_id
GROUP_CHAT_ID=-1001234567890
```

Then start:

```bash
docker compose up -d
docker compose logs -f
```

That's it. The container pulls from Docker Hub, starts the bot, and auto-restarts on crash or reboot.
To find your `OWNER_ID`, send `/start` to @userinfobot.
To find `GROUP_CHAT_ID`, add the bot to your group and call `getUpdates` — look for `chat.id` (a large negative number).
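If you'd rather script that lookup, a few lines of Python can pull the supergroup id out of a `getUpdates` response. This is a convenience sketch; `find_group_chat_id` and `extract_chat_id` are illustrative names, not part of Relay.

```python
from __future__ import annotations

import json
import urllib.request

def extract_chat_id(updates: list) -> int | None:
    """Pick the supergroup chat.id (a large negative number) out of updates."""
    for u in updates:
        chat = (u.get("message") or {}).get("chat", {})
        if chat.get("type") == "supergroup":
            return chat["id"]
    return None

def find_group_chat_id(token: str) -> int | None:
    """Call getUpdates and return the first supergroup chat id seen.

    Send any message in the group first so there is an update to inspect.
    """
    url = f"https://api.telegram.org/bot{token}/getUpdates"
    with urllib.request.urlopen(url) as resp:
        return extract_chat_id(json.load(resp)["result"])
```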
From Telegram, send this in your group (General topic):
```
/new /path/to/your/project
```
Or for a project on a remote server:
```
/new root@your-server /path/to/project
```
Topix Relay creates the topic, wires up the MCP server, launches Claude — all in one command. Send a message in the new topic and Claude responds.
Manual wiring (optional): If you prefer to set up a project without `/new`, see the manual MCP setup section below.
Skip this if you use `/new`. This is for wiring an existing project manually.
In your project folder, create `.mcp.json`:
```json
{
  "mcpServers": {
    "telegram": {
      "command": "bun",
      "args": ["run", "--cwd", "/path/to/relay/mcp-telegram", "--silent", "start"],
      "env": {
        "TELEGRAM_THREAD_ID": "YOUR_THREAD_ID"
      }
    }
  }
}
```

The MCP server reads its credentials from `~/.claude/channels/telegram/.env`:

```
TELEGRAM_BOT_TOKEN=your_token_here
TELEGRAM_CHAT_ID=-1001234567890
```

Copy `CLAUDE_TEMPLATE.md` to your project root as `CLAUDE.md`, then run `claude` in the project directory.
Docker gives you two things at once: easy deployment (one command on any server) and isolation (bot + dependencies self-contained, nothing touches the host OS).
| File | Purpose |
|---|---|
| `Dockerfile` | Builds the image: Ubuntu 24.04, Python, Bun, Node.js, Claude Code CLI |
| `docker-compose.yml` | Defines volumes, restart policy, env injection |
| `docker-entrypoint.sh` | Runs `claude update --yes` on every container start, then launches `bot.py` |
| `.dockerignore` | Keeps secrets and state out of the image |
```bash
# Download just the compose file and env template
curl -O https://raw.githubusercontent.com/shaike1/relay/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/shaike1/relay/main/.env.example

# Fill in your credentials
cp .env.example .env && nano .env

# Start
docker compose up -d

# Follow logs
docker compose logs -f
```

No git clone, no build step — `shaikeme/topix-relay:latest` is pulled from Docker Hub automatically.
```bash
git clone https://github.com/shaike1/relay
cd relay
# In docker-compose.yml, comment out `image:` and uncomment `build: .`
docker compose up -d --build
```

Claude Code auto-updates on every container start via `docker-entrypoint.sh` — no rebuild needed. Just restart:

```bash
docker compose restart
```

To pull a newer Topix Relay image:

```bash
docker compose pull && docker compose up -d
```

| Data | Where it lives | Why |
|---|---|---|
| `.env` (bot token, IDs) | Host, injected via `env_file` | Never baked into the image |
| `/root` (entire home dir) | Host volume | Covers `~/.claude`, SSH keys, and all local session project paths in one mount |
If all your sessions are remote (SSH), you can replace the `/root` mount with targeted mounts:

```yaml
volumes:
  - ~/.claude:/root/.claude
  - ~/.ssh:/root/.ssh:ro
```

Claude Code conversation history lives in `~/.claude` — which is volume-mounted. It survives container restarts and image rebuilds. The only thing that doesn't survive a restart is the tmux scrollback buffer (in-memory terminal output), same as after a server reboot.
- Remote sessions (`"host": "root@server"` in `sessions.json`) — no extra config. The container SSHes out to the remote host as usual.
- Local sessions (`"host": null`) — Claude runs on the host in tmux. The container communicates via two shared mounts:
  - `/root:/root` — project directories and `~/.claude` state
  - `/tmp:/tmp` — queue files (`/tmp/tg-queue-*.jsonl`) and the tmux socket (`/tmp/tmux-0/default`)
```bash
# 1. Stop the current relay
systemctl stop relay
systemctl disable relay

# 2. Start the Docker container
docker compose up -d --build

# 3. Verify the bot is running
docker compose logs -f
```

Sessions restart automatically — the bot reads `sessions.json` and relaunches Claude in each tmux window on startup.
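The relaunch logic can be sketched like this (an illustrative approximation; the helper names and exact tmux invocation are assumptions, but the shape follows the `sessions.json` format documented below):

```python
import json
import subprocess
from pathlib import Path

def tmux_cmd(session: str, path: str) -> list:
    """tmux invocation that starts Claude, resuming the last conversation."""
    return ["tmux", "new-session", "-d", "-s", session, "-c", path,
            "claude", "--continue"]

def relaunch_all(sessions_file: str = "sessions.json") -> list:
    """Re-create a tmux session for every entry in sessions.json.

    Remote entries ("host" set) are launched over SSH. Returns the list
    of session names that were attempted.
    """
    launched = []
    for s in json.loads(Path(sessions_file).read_text()):
        cmd = tmux_cmd(s["session"], s["path"])
        if s.get("host"):
            cmd = ["ssh", s["host"]] + cmd
        # check=False: "duplicate session" errors are harmless on restart
        subprocess.run(cmd, check=False)
        launched.append(s["session"])
    return launched
```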
Because every session runs with `--remote-control`, each Claude instance registers itself with Anthropic's infrastructure and generates a `claude.ai/code/session_...` URL. This means you have three parallel interfaces to every project — all talking to the same live session:
| Interface | How to access | Notes |
|---|---|---|
| Telegram | Send a message in the topic | Always available while the bot is running |
| Web / mobile app | Open the `claude.ai/code/session_...` URL | Works in any browser or the Claude mobile app |
| Terminal | `ssh server` → `tmux attach -t session-name` | Direct shell access |
The session URL is printed in the tmux pane each time Claude starts:

```
/remote-control is active. Code in CLI or at
https://claude.ai/code/session_<your-session-id>
```

Tip: The URL changes on each restart. Use `/snap` to capture the current pane and find the latest URL, or add a `/url` bot command to extract and send it directly to the Telegram topic.
All commands are restricted to `OWNER_ID`.
| Command | Description |
|---|---|
| `/new [user@host] /path/to/project [name]` | Create a topic, tmux session, and `.mcp.json`. Host optional (defaults to local). Name optional (defaults to the directory name). |
| `/discover` | Scan all known hosts for Claude project history not yet connected to a topic. Inline buttons to connect each one. |
| `/addhost [user@]host` | Register a host for `/discover` scans. Tests SSH connectivity first. |
| `/removehost [user@]host` | Remove a host. With no args, lists current hosts. |
| Command | Description |
|---|---|
| `/claude` | Start or resume Claude in this topic's tmux session |
| `/restart` | Quit Claude gracefully, then re-launch (resumes the latest session) |
| `/restart_all [host]` | Restart all sessions across all hosts — useful after settings changes |
| `/kill` | Send Ctrl+C to the session |
| `/snap` | Snapshot the last 50 lines of the tmux pane |
| `/mcp_add <name> <binary> [args...] [KEY=VAL...]` | Install an MCP server and restart Claude |
| `/link [session]` | Get a direct t.me link to any session's topic |
| `/upgrade` | Upgrade Claude Code on all hosts, then restart all sessions |
`/mcp_add` example:

```
/mcp_add stitch stitch-mcp proxy STITCH_API_KEY=abc123
```

This resolves the binary's full path on the target host (handles npm/nvm installs not on Claude's PATH), adds it via `claude mcp add-json`, and restarts Claude — all from Telegram.
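The path-resolution trick is to ask a login shell, which sources the user's profile and therefore sees nvm/npm install locations. A sketch of the idea (illustrative; not the actual `/mcp_add` implementation):

```python
from __future__ import annotations

import shlex
import subprocess

def resolve_binary(binary: str, host: str | None = None) -> str | None:
    """Find a binary's absolute path on the target host.

    Runs `command -v` inside `bash -lc` so profile-managed PATH entries
    (nvm, ~/.bun/bin, etc.) are visible. Returns None if not found.
    """
    inner = f"command -v {shlex.quote(binary)}"
    if host:
        # Resolve on the remote host over SSH
        cmd = ["ssh", host, f"bash -lc {shlex.quote(inner)}"]
    else:
        cmd = ["bash", "-lc", inner]
    out = subprocess.run(cmd, capture_output=True, text=True)
    path = out.stdout.strip()
    return path if out.returncode == 0 and path.startswith("/") else None
```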
| Command | Description |
|---|---|
| `/sessions` | List all configured sessions |
| `/status` | Show topic↔session mappings |
| Tool | Description |
|---|---|
| `send_message` | Send text to the topic (HTML: `<b>`, `<i>`, `<code>`, `<pre>`). Optional `buttons` param for inline keyboards. |
| `edit_message` | Edit a previously sent message in place |
| `typing` | Show a typing indicator (~5s) |
| `fetch_messages` | Get recent message history from this session |
| `send_file` | Send a file from the server filesystem to the Telegram topic (logs, exports, generated files, etc.) |
| `list_peers` | List all other active Claude sessions in the relay — session name, host, path, last activity |
| `message_peer` | Send a message directly to another Claude session (peer-to-peer between agents) |
| `react` | Add an emoji reaction to a message — 👀 working, ✅ done, ❌ error |
Claude can send messages with clickable buttons. When a button is pressed, the label arrives as a regular message:
```
send_message(text="Continue?", buttons=[["✅ Yes", "❌ No"]])
send_message(text="Choose phase:", buttons=[["Phase 1", "Phase 2"], ["Cancel"]])
```

Buttons are ideal for confirmations, choices, and multi-step workflows — Claude uses them instead of asking the user to type.
When a task takes more than ~2 minutes (builds, test suites, deployments), Claude sends brief progress updates so you're not left wondering if anything is still happening:
```
⏳ Build running — 3 min so far...
✅ Build done. Deploying...
```
This is baked into `CLAUDE_TEMPLATE.md` so it applies to all sessions.
- Photos sent to a topic are downloaded to `/tmp/tg-photo-{id}.jpg` (SCP'd to remote hosts automatically). Claude receives a `[Photo: /tmp/...]` caption and can read the image with the `Read` tool.
- Files can be sent back to Telegram with `send_file` — useful for sharing logs, exports, or generated artifacts directly in the chat.
Every Claude session in the relay is a peer. Sessions can discover and message each other directly — no human in the loop.
Example: Your orchestrator session in `/root/myproject` can:
- Call `list_peers` to see all active sessions: `backend`, `frontend`, `infra`
- Call `message_peer(session="backend", text="Deploy the API to staging")` to delegate work
- The `backend` session receives it as a regular user message, does the work, and can `message_peer` back with results
This enables patterns like:
- Orchestrator → workers: one Claude breaks down a task and distributes subtasks to specialized sessions
- Parallel execution: multiple sessions work independently on different parts of a problem simultaneously
- Event-driven pipelines: a CI session triggers a deploy session on build success
Configure a dedicated "peers topic" in `peers-topic.json` for cross-session coordination messages to appear in one Telegram thread instead of flooding individual project topics.
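Under the hood, peer messaging can be as simple as appending to the target session's queue file, reusing the same handoff mechanism Telegram messages use. A hypothetical sketch (the real `message_peer` tool may differ in record format and routing):

```python
import json
import time
from pathlib import Path

def load_sessions(path: str = "sessions.json") -> dict:
    """Index sessions.json entries by session name for peer lookup."""
    return {s["session"]: s for s in json.loads(Path(path).read_text())}

def message_peer(sessions: dict, target: str, text: str, sender: str) -> Path:
    """Deliver text to a peer by appending to its topic's queue file.

    The peer's MCP server tails this file, so the message arrives as a
    regular user message in that session. Raises KeyError if the peer
    doesn't exist.
    """
    peer = sessions[target]
    queue = Path(f"/tmp/tg-queue-{peer['thread_id']}.jsonl")
    record = {"text": f"[peer:{sender}] {text}", "ts": time.time()}
    with queue.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return queue
```

Tagging the sender in the text lets the receiving session distinguish peer traffic from human messages and reply via `message_peer` in turn.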
By default the bot uses `getUpdates` polling (2s interval). For instant delivery, configure webhook mode.
Prerequisite: Telegram must be able to reach your server directly over the internet. This means:
- Your server has a public IP (not behind NAT or a Tailscale-only address)
- The chosen port (443, 80, 88, or 8443 — only these are supported by Telegram) is open in both the OS firewall and any cloud security groups (e.g. Oracle Cloud VCN, AWS Security Groups, etc.)
1. Generate a self-signed cert for your public IP:

```bash
openssl req -x509 -newkey rsa:2048 \
  -keyout /etc/ssl/private/relay-webhook.key \
  -out /etc/ssl/certs/relay-webhook.crt \
  -days 3650 -nodes \
  -subj "/CN=<your-public-ip>" \
  -addext "subjectAltName=IP:<your-public-ip>"

# Caddy runs as non-root — give it read access to the key
chown root:caddy /etc/ssl/private/relay-webhook.key
chmod 640 /etc/ssl/private/relay-webhook.key
```

2. Add a Caddy block (or nginx equivalent) to terminate TLS and forward to the bot:
```
:88 {
    tls /etc/ssl/certs/relay-webhook.crt /etc/ssl/private/relay-webhook.key
    reverse_proxy localhost:18793
}
```

3. Add to `.env`:

```
WEBHOOK_URL=https://<your-public-ip>:88
WEBHOOK_PORT=18793
WEBHOOK_CERT=/etc/ssl/certs/relay-webhook.crt
```

4. Open the port in your firewall:

```bash
iptables -I INPUT 2 -p tcp --dport 88 -j ACCEPT
# Also open it in the cloud console if using Oracle/AWS/GCP
```

5. Restart the relay:

```bash
systemctl restart relay
```

The bot registers the webhook automatically on startup and uploads the self-signed cert to Telegram. Verify with:

```bash
curl -s "https://api.telegram.org/bot<TOKEN>/getWebhookInfo" | python3 -m json.tool
```

Check `last_error_message` — if it says `Connection timed out`, the port is not reachable from the internet (cloud security group or NAT issue). In that case, polling works just as well for this use case.
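That check can also be scripted (an illustrative helper, not part of Relay; the health heuristics follow the failure modes described above):

```python
import json
import urllib.request

def webhook_healthy(info: dict) -> tuple:
    """Interpret a getWebhookInfo response: (healthy, reason)."""
    result = info.get("result", {})
    err = result.get("last_error_message", "")
    if not result.get("url"):
        return (False, "no webhook registered")
    if "timed out" in err.lower():
        return (False, "port unreachable from the internet (firewall/NAT?)")
    return (err == "", err or "ok")

def check_webhook(token: str) -> tuple:
    """Fetch getWebhookInfo for this bot token and interpret it."""
    url = f"https://api.telegram.org/bot{token}/getWebhookInfo"
    with urllib.request.urlopen(url) as resp:
        return webhook_healthy(json.load(resp))
```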
`sessions.json` supports remote hosts via SSH:

```json
[
  {
    "thread_id": 42,
    "session": "myproject",
    "path": "/root/myproject",
    "host": null
  },
  {
    "thread_id": 43,
    "session": "remote-project",
    "path": "/root/remote-project",
    "host": "root@your-server-ip"
  }
]
```

Relay SSHes in to write queue files and provision sessions on remote hosts. SSH key auth is required (no password prompts).
Relay is designed to run on one server (only one process can hold the Telegram long-poll). For high availability, use a primary/backup pattern with automatic failover.
Deploy `watchdog.sh` on your backup server as a systemd service. It monitors the primary every 15 seconds and activates the backup relay after 45 seconds of downtime:

```bash
# On backup server
cp watchdog.sh /root/relay/watchdog.sh
chmod +x /root/relay/watchdog.sh
```

Edit `watchdog.sh` and set `PRIMARY` to your primary server's SSH address.
```ini
# /etc/systemd/system/relay-watchdog.service
[Unit]
Description=Topix Relay Watchdog — failover if primary is down
After=network.target

[Service]
Type=simple
User=root
ExecStart=/bin/bash /root/relay/watchdog.sh
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

```bash
systemctl enable --now relay-watchdog
```

The backup relay needs its own `sessions.json` with hosts flipped (what is `null` on the primary becomes `"root@primary-ip"` on the backup, and vice versa). Use `sync-sessions.sh` to keep it in sync automatically.
Run `sync-sessions.sh` on the primary via cron to keep the backup's `sessions.json` up to date:

```bash
# Edit sync-sessions.sh and set the backup server address
(crontab -l; echo "*/5 * * * * /root/relay/sync-sessions.sh >> /root/relay/sync.log 2>&1") | crontab -
```

Run `self-monitor.sh` on the primary via cron. If the relay goes down and `systemctl restart` fails, it sends a direct Telegram alert (bypassing the relay itself):

```bash
(crontab -l; echo "*/2 * * * * /root/relay/self-monitor.sh >> /root/relay/self-monitor.log 2>&1") | crontab -
```

| Session location | Primary down | Primary recovers |
|---|---|---|
| Backup server | ✅ continues working | ✅ continues working |
| Primary server | ❌ unavailable | ✅ resumes automatically |
Telegram alerts are sent directly via the Bot API (not through Topix Relay) so they arrive even when Relay itself is down.
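A direct alert is just one `sendMessage` call against the Bot API, independent of the relay process. A minimal Python sketch of what the monitoring scripts do in shell (illustrative, not the actual `watchdog.sh` or `self-monitor.sh`):

```python
import json
import urllib.parse
import urllib.request

def build_alert_request(token: str, chat_id: int, text: str):
    """Build the raw sendMessage request the monitoring scripts rely on."""
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        f"https://api.telegram.org/bot{token}/sendMessage", data=data)

def send_alert(token: str, chat_id: int, text: str) -> bool:
    """Fire the alert; returns True if Telegram accepted it."""
    try:
        with urllib.request.urlopen(build_alert_request(token, chat_id, text)) as r:
            return json.load(r).get("ok", False)
    except OSError:
        return False  # network down or API unreachable
```

Because this talks to `api.telegram.org` directly, it works even when the relay bot, its queue files, and its long-poll are all dead.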
Cause: `bun` is not on the system PATH Claude uses when spawning MCP servers.

Fix:

```bash
sudo ln -sf ~/.bun/bin/bun /usr/local/bin/bun
which bun  # should show /usr/local/bin/bun
```

When adding MCP servers installed via npm/nvm, Claude Code spawns them with a minimal PATH — `~/.nvm/...` is not included, so the binary won't be found by name alone.
Always use the full binary path:

```bash
# Find the full path first
which stitch-mcp  # e.g. /root/.nvm/versions/node/v22.22.0/bin/stitch-mcp

# Add with full path + any required env vars
claude mcp add-json stitch '{
  "command": "/root/.nvm/versions/node/v22.22.0/bin/stitch-mcp",
  "args": ["proxy"],
  "env": {"STITCH_API_KEY": "your-key"}
}' -s local
```

Then restart Claude to pick up the new MCP server. Use `session-run.sh` for a one-liner:

```bash
./session-run.sh <session-name> claude mcp add-json stitch '{...}' -s local
```

Cause: Two processes are calling `getUpdates` with the same token.

Fix: Only the Relay bot (`bot.py`) should poll. The MCP server reads queue files only — it never calls `getUpdates`.
- Is Topix Relay running? `systemctl status relay` or `tmux ls`
- Does the queue file exist? `ls /tmp/tg-queue-*.jsonl`
- Is `TELEGRAM_THREAD_ID` in `.mcp.json` correct?
- Did you restart Claude after changing `.mcp.json`?
- Only `OWNER_ID` can issue commands or have messages routed — everyone else is silently ignored
- Keep `.env` private — never commit it (it's in `.gitignore`)
- Use a private Telegram group
- Queue files in `/tmp/` are ephemeral and local to each machine
MIT