A lean replacement for OpenClaw.
Single binary. 22 tools. Three-tier memory. Telegram + Discord + MCP.
7.5 MB binary · 14 MB RAM · 5,296 lines · 98.9% BFCL · 95.5% T-Eval · 4.3× faster with MoE
Quick Start · Features · Benchmark · Architecture · Roadmap
The idea started with a simple observation: someone rewrote OpenClaw in Go and cut memory usage from 1GB+ down to 35MB. That was impressive. But we asked — could we go further?
Most people don't need 430,000 lines of TypeScript. They need an agent that talks to Telegram, reads their files, runs their code, and opens a GitHub PR when something breaks. That's it.
RustClaw is the 80/20 version of OpenClaw — the features that matter, in a single cargo build.
| | RustClaw | OpenClaw |
|---|---|---|
| 📦 Binary | 7.5 MB static | requires Node.js 24 + npm |
| 💾 Idle RAM | 14 MB | 1 GB+ |
| ⚡ Startup | < 100 ms | 5–10 s |
| 📝 Code | 5,296 lines | ~430,000 lines |
| 🧠 Memory | Three-tier (vector + graph + history) | Basic session |
| 🔧 Tools | 22 built-in + MCP | Plugin system |
| 🤖 LLM | Anthropic, OpenAI, Ollama, Gemini | OpenAI |
| 📱 Channels | Telegram, Discord, WebSocket | Web UI |
> [!NOTE]
> RustClaw is not trying to replace OpenClaw. It's proof that the core of what makes an AI agent useful doesn't require a gigabyte of RAM. It requires good architecture, the right language, and the willingness to start over with clearer constraints.
Built entirely with Claude Code by Ad Huang. Zero human-written code.
🪶 Runs anywhere — 7.5 MB binary, 14 MB RAM. Raspberry Pi, $5 VPS, or your laptop. No Node.js, no Python, no Docker required.
🧠 Remembers everything — Three-tier memory (vector + graph + history) with mixed-mode scoping. Tell the bot your name in Telegram, it remembers in Discord. Facts auto-extracted, contradictions auto-resolved.
🛡️ Safe by design — 14 dangerous command patterns blocked. Tool output truncated. Patch files verified before modification. Error retry with auto-recovery. 120s timeout with graceful fallback.
🔧 Actually does things — 98.9% on the industry-standard BFCL benchmark (1,000 questions). The bot reads your files, runs your commands, creates PRs — it doesn't just describe what it would do.
🔌 MCP-ready — Connect any MCP server. Tools auto-discovered and routed transparently. Your LLM sees one unified tool list — local and remote, no difference.
📈 Benchmarked and proven — 1,000-question BFCL + 2,146-question T-Eval + 500-question internal benchmark. Dual-model strategy: MoE for speed (2.6s/q), dense for accuracy (99.7%).
⚙️ Claude Code inspired — Understand-first tool ordering, history compression, workspace context loading, error retry hints. The same patterns that make Claude Code effective, applied to an open-source agent.
**macOS / Linux:**

```bash
curl -sSL https://raw.githubusercontent.com/Adaimade/RustClaw/main/install.sh | sh
```

**Windows (PowerShell):**

```powershell
irm https://raw.githubusercontent.com/Adaimade/RustClaw/main/install.ps1 | iex
```

This downloads the pre-built binary, adds it to PATH, and creates a default config. Works on macOS (Intel/Apple Silicon), Linux (x86/ARM), and Windows.
| Requirement | Install |
|---|---|
| Rust 1.85+ | `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \| sh` |
| LLM backend | Ollama, OpenAI, Anthropic, or Gemini |
```bash
git clone https://github.com/Adaimade/RustClaw.git && cd RustClaw
cargo build --release && strip target/release/rustclaw
# → target/release/rustclaw (7.5 MB)
```

```bash
mkdir -p ~/.rustclaw
cp config.example.toml ~/.rustclaw/config.toml
```

**Ollama (local):**

```toml
[agent]
provider = "openai"
api_key = "ollama"
base_url = "http://127.0.0.1:11434"
model = "qwen2.5:32b"
```

**Anthropic:**

```toml
[agent]
provider = "anthropic"
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
```

**Gemini:**

```toml
[agent]
provider = "openai"
api_key = "your-key"
base_url = "https://generativelanguage.googleapis.com/v1beta/openai"
model = "gemini-2.5-flash"
```
> **Security:** RustClaw binds to `0.0.0.0` by default for cloud deploy. Never put API keys in code; use `~/.rustclaw/config.toml` (gitignored) or environment variables (`RUSTCLAW__AGENT__API_KEY`).
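Config layering along those lines, with an environment variable overriding the TOML value, can be sketched as below. The override-order assumption (env beats file) is the usual convention; the function name is illustrative, not RustClaw's actual API:

```rust
use std::env;

// Resolve the API key: prefer the environment variable named in the
// security note, fall back to whatever was read from config.toml.
// (Conventional precedence; RustClaw's exact order isn't documented here.)
fn resolve_api_key(from_file: Option<&str>) -> Option<String> {
    env::var("RUSTCLAW__AGENT__API_KEY")
        .ok()
        .or_else(|| from_file.map(str::to_string))
}
```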
```bash
# Start everything (gateway + channels + cron + memory)
rustclaw gateway

# One-shot agent call with tool access
rustclaw agent "List all .rs files and count total lines of code"

# GitHub operations
rustclaw github scan
rustclaw github fix 123
```

22 built-in tools with autonomous execution. Supports Anthropic and OpenAI function calling. Max 10 iterations per request.
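The agentic loop (call the LLM, execute any requested tool, feed the result back, stop at a final answer or at the 10-iteration cap) can be sketched roughly like this; `LlmStep`, `fake_llm`, and `run_tool` are illustrative stand-ins, not RustClaw's actual types:

```rust
const MAX_ITERATIONS: usize = 10;

enum LlmStep {
    ToolCall { name: String, args: String },
    FinalAnswer(String),
}

// Stand-in for the real LLM client: asks for a tool twice, then answers.
fn fake_llm(history: &[String]) -> LlmStep {
    if history.len() < 2 {
        LlmStep::ToolCall { name: "list_dir".into(), args: ".".into() }
    } else {
        LlmStep::FinalAnswer("done".into())
    }
}

fn run_tool(name: &str, args: &str) -> String {
    format!("[{name}({args}) output]")
}

fn agent_loop() -> Option<String> {
    let mut history: Vec<String> = Vec::new();
    for _ in 0..MAX_ITERATIONS {
        match fake_llm(&history) {
            LlmStep::ToolCall { name, args } => {
                // Tool output joins the conversation; the loop continues.
                history.push(run_tool(&name, &args));
            }
            LlmStep::FinalAnswer(text) => return Some(text),
        }
    }
    None // iteration cap reached without a final answer
}
```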
Layered tool loading — understand first, then act, then check:
```text
👁️ Understand             ⚡ Act                🔍 Check
├── read_file             ├── run_command       ├── process_check
├── list_dir              ├── write_file        ├── docker_status
└── search_code           └── patch_file        ├── system_stats
                                                ├── http_ping
💬 Discord (on-demand)    📧 Email (on-demand)  ├── pm2_status
├── create/delete channel ├── fetch_inbox       └── process_list
├── create_role/set_topic ├── read_email
└── kick/ban_member       └── send_email
```
Safety: 14 dangerous patterns blocked · output truncated to 4,000 characters · patch verification · error retry hints · 120s graceful timeout
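A blocklist check and output truncation in that spirit might look like this sketch; the patterns shown are illustrative examples, not the actual 14 shipped with RustClaw:

```rust
// Example dangerous-command patterns (illustrative, not RustClaw's list).
const BLOCKED_PATTERNS: &[&str] = &[
    "rm -rf /",
    "mkfs",
    ":(){ :|:& };:",            // classic fork bomb
    "dd if=/dev/zero of=/dev/", // raw-device overwrite
];

// A command is rejected if it contains any blocked pattern.
fn is_blocked(cmd: &str) -> bool {
    BLOCKED_PATTERNS.iter().any(|p| cmd.contains(p))
}

const MAX_OUTPUT: usize = 4000;

// Tool output is truncated before it reaches the model.
fn truncate_output(out: &str) -> String {
    out.chars().take(MAX_OUTPUT).collect()
}
```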
Memory is delegated to R-Mem — a separate Rust crate that handles vector recall, fact extraction, contradiction resolution, and entity-relation graphs. RustClaw is a thin wrapper that adds mixed-mode scoping on top.
Mixed-mode recall — three scopes merged on every query:
| Scope | Example | Shared across |
|---|---|---|
| Local | `telegram:-100xxx` | Single group |
| User | `user:12345` | All channels for one person |
| Global | `global:system` | Everyone |
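Merging the three scopes on every query can be as simple as fanning a lookup out over three keys. This sketch uses the key formats from the table; the function name and the concrete channel key are illustrative:

```rust
// Every recall query hits all three scopes and the results are merged.
// Key formats follow the scoping table: local channel, per-user, global.
fn scope_keys(channel_scope: &str, user_id: u64) -> [String; 3] {
    [
        channel_scope.to_string(),   // Local: one Telegram group / Discord channel
        format!("user:{user_id}"),   // User: same person across all channels
        "global:system".to_string(), // Global: shared with everyone
    ]
}
```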
| Channel | Features |
|---|---|
| Telegram | Long polling · streaming edit · ACL · session history |
| Discord | @mention · server management · scan / fix issue #N / pr status |
| Gateway | OpenClaw-compatible WebSocket on `:18789/ws` |
```toml
[mcp]
servers = [
  { name = "fs", command = "npx @modelcontextprotocol/server-filesystem /tmp" },
]
```

Auto-scan repos · auto-PR from issues · system monitoring alerts · email classification, all scheduled via cron, with notifications to Discord.
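Transparent MCP routing can be pictured as one registry holding both built-in and discovered tools, so the LLM sees a single unified list. The names and the `ToolSource` type below are illustrative, not RustClaw's internals:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ToolSource {
    Local,                   // built-in tool
    Mcp { server: String },  // discovered from an MCP server
}

// One registry for everything: the LLM never learns which is which.
fn build_registry() -> HashMap<String, ToolSource> {
    let mut reg = HashMap::new();
    // Built-in tools registered at startup.
    reg.insert("read_file".to_string(), ToolSource::Local);
    // A tool discovered from a filesystem MCP server (hypothetical name).
    reg.insert(
        "fs/list_directory".to_string(),
        ToolSource::Mcp { server: "fs".to_string() },
    );
    reg
}

// Dispatch by name; the caller doesn't care whether it's local or remote.
fn route<'a>(reg: &'a HashMap<String, ToolSource>, tool: &str) -> Option<&'a ToolSource> {
    reg.get(tool)
}
```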
Tested on the official Gorilla BFCL benchmark — the industry standard for evaluating function calling. Dual-model comparison on Mac Mini 2024 (M4 Pro, 64 GB):
| Test | qwen3-coder:30b (MoE) | qwen2.5:32b (dense) | Speed diff |
|---|---|---|---|
| simple_python (400) | 100% · 1.5s/q | 99.75% · 7.3s/q | 4.9× |
| multiple (200) | 97% · 2.4s/q | 99.5% · 8.4s/q | 3.5× |
| parallel (200) | 99.5% · 2.9s/q | 100% · 12.0s/q | 4.1× |
| parallel_multiple (200) | 98% · 3.4s/q | 100% · 15.7s/q | 4.6× |
| Overall (1,000) | 98.9% · 2.6s/q | 99.7% · 10.8s/q | 4.3× |
The MoE model trades 0.8 percentage points of accuracy for a 4.3× speedup. Both models exceed 98% across all categories.
Tested on T-Eval — Shanghai AI Lab's tool-use evaluation suite covering planning, retrieval, review, and instruction following:
| Test | Score | Questions | Speed |
|---|---|---|---|
| T-Eval retrieve | 98% (542/553) | 553 | 14.5s/q |
| T-Eval plan | 96% (535/553) | 553 | 25.6s/q |
| T-Eval review | 96% (472/487) | 487 | 3.5s/q |
| T-Eval instruct | 92% (514/553) | 553 | 8.2s/q |
2,146 questions across four core categories. Average 95.5% — strong tool selection, multi-step planning, and self-review.
500-question tool calling benchmark (qwen2.5:32b, local Ollama). Not yet re-tested on qwen3-coder:30b:
| Version | Total | Timeout | Speed |
|---|---|---|---|
| v3 baseline | 81% | 74 | 44s/q |
| v4 timeout fix | 85% | 3 | 36s/q |
| v5 optimized | 97% | 0 | 38s/q |
| Category | v5 Score |
|---|---|
| Core operations | 92% |
| Basic tools | 95% |
| Medium tasks | 100% |
| Advanced reasoning | 98% |
| Hallucination traps | 100% |
| Multi-step chains | 99% |
Benchmark questions available at AI-Bench.
```text
src/
├── main.rs           CLI dispatch + startup
├── cli/mod.rs        clap subcommands
├── config.rs         TOML + env config
├── gateway/          WebSocket server + protocol + handshake
├── agent/runner.rs   LLM streaming + agentic loop + history compression
├── channels/         Telegram (teloxide) + Discord (serenity)
├── tools/            22 tools: fs, shell, search, discord, email, system, github, mcp
├── session/          SessionStore (history) + MemoryManager (R-Mem wrapper)
└── cron/             Scheduled jobs (system, email, GitHub)
```
27 files · 5,296 lines · 7.5 MB binary · Zero external services
| Status | Feature |
|---|---|
| ✅ | Tool calling (22 tools + agentic loop) |
| ✅ | Three-tier memory (vector + graph + mixed scope) |
| ✅ | Telegram + Discord channels |
| ✅ | MCP client (transparent tool routing) |
| ✅ | GitHub integration (scan + auto-PR) |
| ✅ | System monitoring + cron alerts |
| ✅ | Email (IMAP + SMTP) |
| ✅ | SQLite persistence |
| ✅ | Cross-platform install (macOS / Linux / Windows) |
| ✅ | Multi-model routing (per-channel model override via config) |
| 🔲 | Slack / LINE channels |
| 🔲 | Prometheus metrics |
Community contributions welcome — open an issue or PR.
MIT License · v0.5.0
Created by Ad Huang with Claude Code
The framework is there. The rest is up to the community.