# cl0w
Zero Hassle. Zero Weight. Zero Fee. Agent.
cl0w is a personal AI agent that runs entirely on your own machine — powered by a local LLM via LM Studio and operated through Telegram. No subscriptions, no cloud APIs, no data leaving your device.
It's not a chatbot. It's an agent: it reasons, uses tools via MCP, switches personas, runs reusable skills, and processes files — all from a Telegram message.
| | OpenClaw | cl0w |
|---|---|---|
| Cost | Subscription or API billing | Free forever |
| Privacy | Conversations processed on cloud servers | Never leaves your machine |
| LLM | Vendor-managed cloud model | Your own local model via LM Studio |
| Customization | Web UI or config files | Plain Markdown files — fully yours |
| Tools | Built-in tool ecosystem | Any MCP server you want |
| Interface | CLI / IDE extension | Telegram — works on any device |
| Internet required | Yes | No (after initial setup) |
cl0w integrates with Model Context Protocol (MCP) servers, giving the LLM real tools: file system access, web search, code execution, database queries, and more. Just configure mcp.json and the agent uses tools automatically when needed.
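Conceptually, such an agent loop is: ask the model, execute any tool calls it requests, feed the results back, repeat until it answers in plain text. A minimal sketch of that pattern against an OpenAI-compatible endpoint (LM Studio exposes one); `call_llm` and `run_mcp_tool` are hypothetical stand-ins, not cl0w's actual API:

```python
# Sketch of an agent tool-call loop. `call_llm` returns a dict shaped like
# a chat-completion message; `run_mcp_tool` executes one MCP tool call.
import json

def agent_loop(messages, call_llm, run_mcp_tool, max_rounds=5):
    """Keep calling the model until it answers without requesting a tool."""
    for _ in range(max_rounds):
        reply = call_llm(messages)
        calls = reply.get("tool_calls") or []
        if not calls:
            return reply["content"]          # plain answer: we're done
        messages.append(reply)
        for call in calls:                   # run each requested tool via MCP
            result = run_mcp_tool(call["name"], json.loads(call["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": json.dumps(result)})
    return "Tool-call budget exhausted."
```

The loop is bounded (`max_rounds`) so a model that keeps requesting tools cannot spin forever.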
Define multiple AI personalities in plain Markdown. Switch between them with a single command. Each persona is a custom system prompt — you're in full control of how the agent thinks and communicates.
Save reusable prompt templates as Markdown files. Run them as slash commands. Built-in skills include translation, summarization, code review, and concept explanation. Add your own in seconds.
Send a file, get an intelligent response. cl0w handles:
- Images — Vision analysis via multimodal LLM
- PDFs — Full text extraction and Q&A
- Word documents — Content analysis and summarization
- Code files — Review, explanation, refactoring suggestions
- Plain text / CSV / JSON — Any text-based format
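The dispatch-by-file-type step above can be sketched as a simple extension lookup; the handler names here are illustrative placeholders, not cl0w's actual functions:

```python
# Sketch: route an incoming file to a handler by extension,
# falling back to plain-text handling for anything unknown.
from pathlib import Path

HANDLERS = {
    ".pdf": "extract_pdf_text",
    ".docx": "extract_docx_text",
    ".py": "review_code", ".js": "review_code",
    ".png": "vision_analyze", ".jpg": "vision_analyze",
}

def pick_handler(filename: str) -> str:
    return HANDLERS.get(Path(filename).suffix.lower(), "read_as_text")
```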
cl0w is built with a security-first mindset:

- All LLM inference happens on `127.0.0.1` — no data leaves your machine
- Telegram whitelist: only your user IDs can interact with the bot
- No inbound ports required (Long Polling only)
- API keys stored in `.env` and `mcp.json`, both gitignored by default
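The whitelist gate is simple to reason about. A minimal sketch, assuming `ALLOWED_USER_IDS` is a comma-separated list of Telegram user IDs (function names are illustrative):

```python
# Sketch of the allow-list check: deny-by-default, so an
# empty whitelist lets nobody interact with the bot.
def parse_allowed_ids(raw: str) -> set[int]:
    return {int(part) for part in raw.split(",") if part.strip()}

def is_allowed(user_id: int, allowed: set[int]) -> bool:
    return user_id in allowed
```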
```
[Your Telegram App]
        ↕ HTTPS (Telegram servers only)
[cl0w Bot — your machine]
        ↕ 127.0.0.1 only
[LM Studio — local inference]
```
| Threat | Mitigation |
|---|---|
| Unauthorized access | Telegram user ID whitelist (ALLOWED_USER_IDS) |
| Data exfiltration | LLM runs locally; no API calls to cloud providers |
| Credential leaks | .env and mcp.json are in .gitignore |
| Network exposure | Long Polling only — zero inbound ports opened |
| Prompt injection via files | File size capped at 20 MB; text truncated at 20,000 chars |
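The file-size and truncation limits in the last row can be sketched as below; the 20 MB / 20,000-character values come from the table, the function names are illustrative:

```python
# Sketch of the size guard: reject oversized uploads outright,
# truncate extracted text before it reaches the prompt.
MAX_FILE_BYTES = 20 * 1024 * 1024
MAX_TEXT_CHARS = 20_000

def accept_file(size_bytes: int) -> bool:
    return size_bytes <= MAX_FILE_BYTES

def truncate_text(text: str) -> str:
    if len(text) <= MAX_TEXT_CHARS:
        return text
    return text[:MAX_TEXT_CHARS] + "\n[truncated]"
```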
- Python 3.9+
- LM Studio with a model loaded and the local server running
- A Telegram bot token from @BotFather
- Your Telegram user ID (get it from @userinfobot)
```
git clone https://github.com/yourname/cl0w.git
cd cl0w
```

macOS / Linux / Git Bash (Windows):

```
chmod +x setup.sh start.sh
./setup.sh
```

Windows (Command Prompt):

```
setup.bat
```

The setup script:

- Creates a `.venv` virtual environment
- Installs all dependencies inside it
- Copies `.env.example` → `.env` and `mcp.json.example` → `mcp.json` if not present
Edit `.env`:

```
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
ALLOWED_USER_IDS=123456789
LM_STUDIO_BASE_URL=http://127.0.0.1:1234/v1
LM_STUDIO_MODEL=local-model
```

Edit `mcp.json` to add MCP servers (or leave empty for chat-only mode).
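A sketch of how settings like these could be read from the environment (in the real project, python-dotenv loads `.env` first; the defaults below are assumptions):

```python
# Sketch: read cl0w-style settings from environment variables.
import os

def load_settings(env=os.environ):
    raw_ids = env.get("ALLOWED_USER_IDS", "")
    return {
        "base_url": env.get("LM_STUDIO_BASE_URL", "http://127.0.0.1:1234/v1"),
        "model": env.get("LM_STUDIO_MODEL", "local-model"),
        "allowed_ids": {int(x) for x in raw_ids.split(",") if x.strip()},
    }
```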
macOS / Linux / Git Bash:

```
./start.sh
```

Windows (Command Prompt):

```
start.bat
```

Or manually with the venv activated:

```
source .venv/bin/activate   # macOS/Linux
.venv\Scripts\activate      # Windows
python bot.py
```

Open Telegram, find your bot, and send `/start`.
| Command | Description |
|---|---|
| `/start` | Reset everything (conversation + persona) |
| `/new` | Clear conversation history (keep current persona) |
| `/status` | Show current persona, LLM endpoint, MCP server states |
| `/help` | Full command reference |
| Command | Description |
|---|---|
| `/persona` | Show active persona |
| `/persona list` | List all available personas |
| `/persona set <name>` | Switch persona (resets conversation) |
| `/persona reset` | Return to default persona |
| Command | Description |
|---|---|
| `/skill` | List all available skills |
| `/translate <lang> <text>` | Translate text |
| `/summarize [text]` | Summarize text or last response |
| `/review <code>` | Code review |
| `/explain <topic>` | Explain code or concept |
| Command | Description |
|---|---|
| `/mcp` | List MCP servers and their tools |
| `/mcp reload` | Reload `mcp.json` and restart servers |
```
You: /translate Japanese Please review the attached document by Friday.
Bot: 金曜日までに添付の書類を確認してください。
```
Send your .py or .js file with the caption:
"Review this and point out any bugs or security issues."
cl0w reads the file, parses it as code, and returns a structured review with issues, suggestions, and verdict.
Attach a PDF and ask:
"Extract the key decisions and action items from this."
cl0w extracts the full text from the PDF and returns a clean bullet-point summary.
With the `brave-search` MCP server configured:
```
You: What are the latest developments in quantum computing this week?
Bot: [Searches the web automatically, synthesizes results]
```
No manual search needed — the agent decides when to call the tool.
```
You: /persona set coder
Bot: Switched to Coder persona. Let's talk code!
You: Refactor this function for readability: [code]
Bot: [Expert code refactoring with explanation]
```
Send a screenshot or photo:
Caption: "What's wrong with this UI layout?"
cl0w analyzes the image and provides design feedback using the multimodal LLM.
Create `skills/standup.md`:

```
---
name: standup
description: Generate a daily standup from bullet points
usage: /standup <bullets>
---
Turn the following bullet points into a professional daily standup update.
Format: Yesterday / Today / Blockers.

{{input}}
```

Then use it:

```
You: /standup fixed login bug, working on dashboard, waiting for design review
Bot: Yesterday: Fixed the login bug.
     Today: Working on the dashboard feature.
     Blockers: Awaiting design review feedback.
```
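Rendering a skill file like this boils down to splitting the YAML front matter from the body and substituting `{{input}}`. A minimal sketch (cl0w's actual parsing may differ):

```python
# Sketch: strip "---"-delimited front matter, then fill the template.
def render_skill(markdown: str, user_input: str) -> str:
    parts = markdown.split("---", 2)        # "", front matter, body
    body = parts[2] if len(parts) == 3 else markdown
    return body.replace("{{input}}", user_input).strip()
```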
Create `personas/mybot.md`:

```
---
name: mybot
description: My custom assistant personality
---
You are a terse, no-nonsense assistant.
You respond in bullet points only.
You never use filler phrases.
```

Then: `/persona set mybot`
Edit `mcp.json`:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    }
  }
}
```

Then: `/mcp reload`
```
cl0w/
├── bot.py               # Telegram bot — all handlers
├── config.py            # Configuration loader
├── llm.py               # LM Studio API client + tool call loop
├── mcp_client.py        # MCP stdio/SSE client
├── persona_manager.py   # Persona loader
├── skill_manager.py     # Skill loader + template renderer
├── file_handler.py      # File parsing (image/PDF/docx/text/code)
├── personas/            # Your personas (gitignored)
├── personas.example/    # Persona examples
├── skills/              # Your skills (gitignored)
├── skills.example/      # Skill examples
├── mcp.json             # Your MCP config (gitignored)
├── mcp.json.example     # MCP config template
├── .env                 # Your secrets (gitignored)
└── .env.example         # Secrets template
```
```
python-telegram-bot==21.1.1
openai==1.52.0
python-dotenv==1.0.1
httpx<0.28.0
pypdf>=4.0.0
python-docx>=1.1.0
```
MIT
If you find cl0w useful, please consider giving it a ⭐ on GitHub! It helps more people discover the project.