✅ Cross-platform Access | Privacy First | Bilingual Support | Fast Response
✅ Code Generation | Auto-fixing | Voice Control | Document Analysis
✅ Bun | Telegram | Claude Code | Codex CLI
Tele-Bot is a high-performance, OpenClaw-like Telegram AI assistant that brings Claude Code and Codex CLI into your pocket, with lower latency and native token-cache hits compared to OpenClaw. It moves agent assistants from your PC to phones, tablets, and smartwatches, enabling a native, PC-equivalent interactive experience anywhere, anytime via text, voice, or media files.
We designed Tele-Bot to seamlessly bridge the gap between mobile convenience and desktop-level AI coding power.
| Feature Name | Description |
|---|---|
| Remote Control | Securely bind the bot to specific local project directories using the /bind command. The AI executes tasks strictly within this isolated workspace. |
| Multi-modal Inputs | Send voice memos (automatically transcribed via OpenAI Whisper), screenshots for visual analysis, or PDF documents for text extraction. |
| Live Feedback Stream | Watch the AI's thought process unfold in real-time. The bot streams tool calls (like reading files or running bash commands) directly to your chat. |
| Provider Switching | Switch between Claude Code and Codex on the fly using /provider, adapting to whichever model best suits your current coding task. |
| Secure by Design | Your local machine remains safe. Built-in mechanisms include strict rate limiting, path validation (preventing access outside bound directories), and a configurable command blacklist. |
| Session Persistence | Conversations are automatically saved locally. You can resume previous sessions seamlessly even after the bot restarts. |
Tele-Bot is not a simple API wrapper; it is a high-throughput, event-driven Proxy Gateway designed for Agentic Workflows. Its core philosophy is the modular decoupling of the ingress/interaction layer from the underlying reasoning engines.
```mermaid
graph TD
A[Telegram Client] -->|Multi-modal Payload| B(Ingress: grammY Concurrent Runner)
B --> C{Modality Router & Pre-processing}
C -->|Audio/Voice| D[Whisper ASR Pipeline]
C -->|PDF/Archives| E[pdftotext Extraction]
C -->|Vision/Text| F[State Machine & Access Control]
D --> F
E --> F
F -->|Auth, Rate Limit, Path Confinement| G(Provider Abstraction Layer)
subgraph ClaudeEngine [Claude Code CLI Ecosystem]
direction TB
H1(Native Token Cache Hit)
H2(Low-latency IPC Stream)
H3(Artifact & Tool Call Parsing)
H1 --> H2 --> H3
end
subgraph CodexEngine [Codex CLI Ecosystem]
direction TB
I1(Direct Execution Bridging)
I2(Local Workspace Sandboxing)
I3(Fast Reasoning Output)
I1 --> I2 --> I3
end
G -->|Spawn Process| H1
G -->|Spawn Process| I1
G -.->|Future Extension| J[Aider / OpenDevin / AutoGPT]
H3 --> K(Streaming Observer)
I3 --> K
K -->|Delta Updates via editMessageText| A
```
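The "Delta Updates via editMessageText" edge in the diagram depends on coalescing rapid stream chunks so the bot does not exceed Telegram's edit rate limits. A minimal sketch of such a coalescing buffer (the class name and flush strategy are illustrative assumptions, not the project's actual implementation):

```typescript
// Coalesces high-frequency stream deltas into periodic flushes, so each
// flush can become a single editMessageText call with the full text so far.
class DeltaBuffer {
  private buffer = "";
  private lastFlushed = "";

  append(delta: string): void {
    this.buffer += delta;
  }

  // Returns the full accumulated text if it changed since the last
  // flush, or null if there is nothing new to send to Telegram.
  flush(): string | null {
    if (this.buffer === this.lastFlushed) return null;
    this.lastFlushed = this.buffer;
    return this.buffer;
  }
}

const buf = new DeltaBuffer();
buf.append("Reading src/index.ts");
console.log(buf.flush()); // "Reading src/index.ts"
buf.append(" ... done");
console.log(buf.flush()); // "Reading src/index.ts ... done"
console.log(buf.flush()); // null: nothing new since the last flush
```

In practice you would call `append()` on every stdout chunk and `flush()` on a timer (say, once per second), passing any non-null result to the message-editing API.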
- 🔌 Pluggable Provider Abstraction: A strict `AgentProvider` interface defines standard lifecycles (Run, Abort, Stream). Integrating any new CLI-based or standard-I/O LLM engine (e.g., a future Aider or OpenDevin integration) requires only minimal adapter code, without intruding on the core routing logic.
- 🛡️ Local Workspace Isolation: Adopts a "local machine as compute node" architecture, featuring a token-bucket rate limiter, regex-based blacklisting of dangerous commands, and strict path confinement (preventing directory traversal) enforced via the `/bind` workspace mechanism.
- 🌊 Incremental Streaming Observer: Shatters the "black box" experience of traditional Q&A chatbots. By capturing stdout/stderr and Server-Sent Events (SSE) from the underlying CLI processes, it leverages Telegram's message-editing API to render the agent's tool calls (Bash/Read/Write) and chain of thought (CoT) in real time.
- 💾 Process-Level State Persistence: The state machine (`AgentSession`) automatically serializes conversation snapshots and workspace contexts to `/tmp/telegram-sessions/`. This ensures seamless session resumption even if the Node/Bun process crashes or the host machine reboots.
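The provider abstraction above can be pictured as a small interface plus a thin adapter per engine. The exact shape in `src/providers/types.ts` may differ, so treat this as a hedged sketch (method names and types are assumptions):

```typescript
// Hypothetical sketch of the provider contract; the real interface in
// src/providers/types.ts may define different names or lifecycles.
interface AgentProvider {
  readonly name: string;
  // Spawn the underlying engine and stream its output incrementally.
  run(prompt: string, onDelta: (chunk: string) => void): Promise<string>;
  abort(): void;
}

// A trivial in-memory provider showing how little adapter code a new
// engine needs: wrap its I/O behind run/abort and the router is unchanged.
class EchoProvider implements AgentProvider {
  readonly name = "echo";
  async run(prompt: string, onDelta: (chunk: string) => void): Promise<string> {
    for (const word of prompt.split(" ")) onDelta(word + " ");
    return prompt;
  }
  abort(): void {}
}
```

A real adapter would spawn the `claude` or `codex` process inside `run` and forward its stdout chunks through `onDelta`.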
Follow this "happy path" to get your personal AI coding assistant running in minutes.
- Create a Telegram Bot: Talk to @BotFather on Telegram, create a new bot, and copy the provided API token.
- Configure the Environment: Clone this repository, copy `.env.example` to `.env`, and fill in your `TELEGRAM_BOT_TOKEN` and your Telegram user ID in `TELEGRAM_ALLOWED_USERS` (for security).
- Start the Service: Run `bun install` followed by `bun run start` on your local machine.
- Bind Your Workspace: In Telegram, create a group, add your bot, and send `/bind /absolute/path/to/your/project`.
- Start Coding: Send a message like "Review the latest changes in src/ and suggest optimizations."
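The configuration step might look like the following `.env` sketch. All values are placeholders, and the exact variable formats are guesses; `.env.example` in the repository is the authoritative list:

```shell
# Bot token from @BotFather (placeholder value)
TELEGRAM_BOT_TOKEN=123456:ABC-placeholder

# Telegram user IDs allowed to talk to the bot
TELEGRAM_ALLOWED_USERS=123456789

# Directories the bot may be bound to (see Path Confinement below)
ALLOWED_PATHS=/home/you/projects
```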
## 🛠️ Requirements & Technical Details
- Runtime: Bun 1.0 or higher is required for optimal performance and TypeScript execution.
- AI Engines: You must have either Claude Code CLI or Codex CLI installed and authenticated on your local machine.
- External Dependencies:
  - `pdftotext` (essential for parsing PDF documents sent via Telegram). Install via `brew install poppler` on macOS or `apt-get install poppler-utils` on Linux.
  - An OpenAI API key is highly recommended if you wish to use the voice-message transcription feature.
- No Remote Execution: The Telegram Bot API only acts as a message relay. All code execution, file reading, and bash commands are performed entirely on your local machine.
- Path Confinement: The bot strictly validates file paths against the `ALLOWED_PATHS` environment variable to prevent directory-traversal attacks.
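Path confinement of this kind is typically implemented by resolving the requested path and checking it against each allowed root. A minimal sketch (function and variable names are illustrative, not the project's actual code):

```typescript
import path from "node:path";

// Returns true only if `requested` resolves to a location inside one of
// the allowed root directories, defeating ../ traversal tricks.
function isPathAllowed(requested: string, allowedRoots: string[]): boolean {
  const resolved = path.resolve(requested);
  return allowedRoots.some((root) => {
    const base = path.resolve(root);
    // Require an exact match or a path-separator boundary, so that
    // "/srv/application" is not accepted under the root "/srv/app".
    return resolved === base || resolved.startsWith(base + path.sep);
  });
}

console.log(isPathAllowed("/srv/app/src/index.ts", ["/srv/app"])); // true
console.log(isPathAllowed("/srv/app/../etc/passwd", ["/srv/app"])); // false
```

The separator check on the last comparison matters: a plain prefix test would wrongly accept sibling directories that merely share a name prefix with an allowed root.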
## 💻 Developer Guide
We use Bun for dependency management and execution due to its speed and native TypeScript support.
```bash
# Install dependencies
bun install

# Run the bot in development mode (with auto-reload)
bun run dev

# Run TypeScript type checking
bun run typecheck

# Build a standalone binary (macOS specific)
bun build --compile
```

- Framework: Built on top of grammY, leveraging its concurrent runner for high-throughput message processing.
- Session Management: Handled by `src/session.ts`. Sessions are serialized to JSON and stored in `/tmp/telegram-sessions/` to ensure continuity across bot restarts or crashes.
- Provider Abstraction: The `AgentProvider` interface (`src/providers/types.ts`) makes it trivial to integrate additional CLI-based AI agents in the future.
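The session persistence described above can be sketched as a plain JSON snapshot on disk. The real `src/session.ts` likely stores richer state, so the snapshot shape and function names here are assumptions:

```typescript
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical minimal snapshot; the real session state is richer.
interface SessionSnapshot {
  chatId: number;
  workspace: string;
  messages: string[];
}

const SESSION_DIR = "/tmp/telegram-sessions";

// Persist a snapshot so a restarted process can resume the conversation.
function saveSession(s: SessionSnapshot): void {
  mkdirSync(SESSION_DIR, { recursive: true });
  writeFileSync(join(SESSION_DIR, `${s.chatId}.json`), JSON.stringify(s));
}

// Returns the saved snapshot, or null when none exists for the chat.
function loadSession(chatId: number): SessionSnapshot | null {
  const file = join(SESSION_DIR, `${chatId}.json`);
  if (!existsSync(file)) return null;
  return JSON.parse(readFileSync(file, "utf8")) as SessionSnapshot;
}
```

Keying files by chat ID keeps restores O(1): on startup, the bot only needs to read the one file matching the incoming chat.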
To add a new Telegram command, define your handler in `src/handlers/commands.ts` and register it in `src/index.ts` using `bot.command("your_command", handler)`.
## ❓ Troubleshooting
- "Command Not Found" Errors: If the bot complains that
claudeorcodexis missing, ensure these CLIs are installed globally and are accessible within the systemPATHof the user running the bot process. - PDF Extraction Fails: Verify that
pdftotextis installed. Test it manually in your terminal by runningpdftotext -v. - Messages Are Truncated: Telegram enforces a strict character limit per message. The bot automatically splits or truncates excessively long outputs from the AI agents to prevent delivery failures.
- Bot Ignores Messages: Ensure your Telegram User ID is correctly listed in the
TELEGRAM_ALLOWED_USERSarray in your.envfile. The bot silently ignores unauthorized users.
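Telegram's 4,096-character cap is why long agent output must be split before sending. A minimal splitter sketch that prefers newline boundaries (the project's actual chunking may also respect code-fence boundaries):

```typescript
const TELEGRAM_LIMIT = 4096;

// Splits text into chunks that each fit a single Telegram message,
// preferring to break at newlines so formatting survives.
function splitMessage(text: string, limit = TELEGRAM_LIMIT): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    // Break at the last newline inside the window if one exists;
    // otherwise hard-cut exactly at the limit.
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

Splitting (rather than truncating) preserves the full agent transcript in chat at the cost of extra messages, which is usually the right trade-off for code review output.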
Issues and pull requests are welcome! Questions or suggestions? Please contact Zheyuan (Max) Kong (Carnegie Mellon University, Pittsburgh, PA).
Zheyuan (Max) Kong: kongzheyuan@outlook.com | zheyuank@tepper.cmu.edu

