ClamBot is a security-focused personal AI assistant that runs all LLM-generated code inside a WASM sandbox (QuickJS inside Wasmtime), eliminating the arbitrary code execution risks of the `exec()`/`subprocess.run()` patterns common in other agent frameworks.

Inspired by OpenClaw and nanobot.

Most agent frameworks run LLM-generated code directly on your machine. ClamBot isolates it:

- The LLM generates a JavaScript "clam" (a named, versioned, reusable script)
- The clam runs inside amla-sandbox (WASM/QuickJS) with memory isolation
- Tool calls yield back to Python for capability-checked, approval-gated dispatch
- Successful clams are persisted and reused for identical future requests: zero latency, zero cost
- **WASM Sandbox Execution**: all generated code runs in QuickJS/Wasmtime with memory isolation and no ambient network access
- **Interactive Approval Gate**: SHA-256 fingerprinted tool approvals with always-grants, turn-scoped grants, and per-tool scope options
- **Clam Reuse**: successful scripts are promoted and reused for identical requests without any LLM call
- **Self-Fix Loop**: up to 3 automatic retries with LLM-guided fix instructions on runtime failures
- **Multi-Provider LLM**: OpenRouter, Anthropic, OpenAI, Gemini, DeepSeek, Ollama, OpenAI Codex (OAuth), and custom endpoints
- **Telegram Integration**: typing indicators, phase status messages, MarkdownV2 rendering, inline approval keyboards, file uploads
- **Long-Term Memory**: MEMORY.md (durable facts auto-injected into prompts) + HISTORY.md (searchable interaction summaries)
- **Cron Scheduling**: persistent timezone-aware jobs with `cron`, `every`, and `at` schedule types
- **Heartbeat Service**: proactive agent wakeup with task-driven execution from HEARTBEAT.md
- **Host-Managed Secrets**: atomic-write store with 0600 permissions; secrets never appear in tool args, logs, or traces
- **SSRF Protection**: private IP blocking on all outbound HTTP tools
- **Session Compaction**: automatic LLM-summarized compaction to prevent context window overflow
```
┌─────────────────────────────────────────────────────────────────┐
│                         Inbound Sources                         │
│  ┌──────────┐   ┌──────────┐   ┌───────────┐   ┌──────────┐     │
│  │ Telegram │   │   Cron   │   │ Heartbeat │   │   CLI    │     │
│  └────┬─────┘   └────┬─────┘   └─────┬─────┘   └────┬─────┘     │
└───────┼──────────────┼───────────────┼──────────────┼───────────┘
        ▼              ▼               ▼              ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Gateway Orchestrator                       │
│             /approve · /secret · /new command routing           │
└────────────────────────────────┬────────────────────────────────┘
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                          Agent Pipeline                         │
│                                                                 │
│  1. Session load + auto-compaction                              │
│  2. Clam Selector (pre-selection → LLM routing)                 │
│  3. Clam Generator (LLM → JavaScript)                           │
│  4. WASM Runtime (QuickJS sandbox + approval-gated tools)       │
│  5. Post-Runtime Analyzer (ACCEPT / SELF_FIX / REJECT)          │
│  6. Background memory extraction (fire-and-forget)              │
└────────────────────────────────┬────────────────────────────────┘
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Outbound → Telegram / CLI                    │
└─────────────────────────────────────────────────────────────────┘
```
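Step 5's SELF_FIX verdict drives a bounded retry loop around steps 3 and 4. A minimal sketch of that control flow, assuming three hypothetical callables (`generate`, `execute`, `analyze` are illustrative names, not ClamBot's actual API):

```python
MAX_SELF_FIX = 3  # ClamBot retries a failing clam at most 3 times

def run_with_self_fix(generate, execute, analyze):
    """generate(fix_hint) -> code; execute(code) -> result;
    analyze(result) -> (verdict, fix_hint), verdict in ACCEPT/SELF_FIX/REJECT."""
    fix_hint = None
    for _attempt in range(1 + MAX_SELF_FIX):  # first try + up to 3 fixes
        code = generate(fix_hint)             # LLM generates (or repairs) the clam
        result = execute(code)                # run inside the WASM sandbox
        verdict, fix_hint = analyze(result)   # post-runtime analyzer
        if verdict == "ACCEPT":
            return ("ok", code)               # promoted for future reuse
        if verdict == "REJECT":
            break                             # unrecoverable: stop retrying
    return ("error", None)
```

The loop body mirrors the pipeline: generation, sandboxed execution, then the analyzer's three-way verdict, with the fix hint fed back into the next generation.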
```bash
git clone https://github.com/clamguy/clambot.git
cd clambot
uv venv && uv pip install -e .
```

> **Tip**
> Get API keys: OpenRouter (recommended, access to all models) · Anthropic · OpenAI
1. **Initialize**: auto-discovers API keys from the environment and sets up the workspace:

```bash
# Set your API key (provider auto-detected by onboard)
export OPENROUTER_API_KEY="sk-or-v1-xxx"

# Initialize workspace + config
uv run clambot onboard
```

`uv run clambot onboard` scans your environment variables, probes a local Ollama instance, and generates `~/.clambot/config.json` with everything it finds. No manual editing needed.
2. **Verify**

```bash
uv run clambot status
```

3. **Chat**

```bash
uv run clambot agent
```

That's it! You have a working sandboxed AI assistant in under a minute.
> **Note**
> If you need to tweak settings later, edit `~/.clambot/config.json` (see Configuration below).
Connect ClamBot to Telegram for a full mobile experience with inline approval buttons, typing indicators, and phase status messages.
1. **Create a bot**: open Telegram, search for @BotFather, send /newbot, follow the prompts, and copy the token.
2. **Connect**: the interactive command handles everything:

```bash
uv run clambot channels connect telegram
# Enter bot token -> press "Connect" in bot -> user ID auto-added -> done!
```

3. **Run the gateway**

```bash
uv run clambot gateway
```

That's it: message your bot on Telegram and ClamBot responds!
**Manual configuration (advanced)**

If you prefer to configure manually, add the following to `~/.clambot/config.json`:
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

`allowFrom`: leave empty to allow all users, or add user IDs/usernames to restrict access.
ClamBot supports multiple LLM backends through a registry-driven provider layer. Set an API key via the environment and run `uv run clambot onboard`; the provider is auto-detected.
| Provider | Purpose | Setup |
|---|---|---|
| `openrouter` | LLM (recommended, access to all models) | `export OPENROUTER_API_KEY=sk-or-...` |
| `anthropic` | LLM (Claude direct) | `export ANTHROPIC_API_KEY=sk-ant-...` |
| `openai` | LLM (GPT direct) | `export OPENAI_API_KEY=sk-...` |
| `deepseek` | LLM (DeepSeek direct) | `export DEEPSEEK_API_KEY=...` |
| `gemini` | LLM (Gemini direct) | `export GEMINI_API_KEY=...` |
| `groq` | LLM + voice transcription (Whisper) | `export GROQ_API_KEY=...` |
| `ollama` | LLM (local, any model) | `ollama serve` (auto-probed) |
| `openai_codex` | LLM (Codex, OAuth) | `uv run clambot provider login openai-codex` |
| `custom` | Any OpenAI-compatible endpoint | Config only (see below) |
```bash
# Example: set up with OpenRouter
export OPENROUTER_API_KEY="sk-or-v1-xxx"
uv run clambot onboard   # auto-detects provider + model
uv run clambot status    # verify provider is ready
uv run clambot agent     # start chatting
```

**OpenAI Codex (OAuth)**

Codex uses OAuth instead of API keys and requires a ChatGPT Plus or Pro account.

```bash
# 1. Login (opens browser)
uv run clambot provider login openai-codex

# 2. Chat - model auto-configured
uv run clambot agent -m "Hello!"
```

**Custom Provider (Any OpenAI-compatible API)**
Connects directly to any OpenAI-compatible endpoint: LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Add to `~/.clambot/config.json`:
```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
**Ollama (local)**

Start Ollama and let onboard auto-detect it:

```bash
# 1. Start Ollama
ollama serve

# 2. Onboard auto-probes Ollama and discovers available models
uv run clambot onboard

# 3. Chat
uv run clambot agent
```

Config file: `~/.clambot/config.json` (auto-generated by `uv run clambot onboard`)
See docs/configuration.md for the full schema reference.

> **Tip**
> For production deployments, set `"restrictToWorkspace": true` in your tools config to sandbox file access.
| Option | Default | Description |
|---|---|---|
| `tools.filesystem.restrictToWorkspace` | `true` | Restricts the filesystem tool to the workspace directory. Prevents path traversal. |
| `security.sslFallbackInsecure` | `false` | When `true`, HTTP tools retry with `verify=False` on SSL errors. Only for sandboxed environments. |
| `channels.telegram.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone. |
| SSRF protection | Always on | Blocks requests to 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7 |
| Secret redaction | Always on | Secret values never appear in tool args, events, approval records, or logs |
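The always-on SSRF guard amounts to resolving the target host and refusing private, loopback, and link-local ranges before any request leaves the process. A rough sketch of such a check using Python's standard library (illustrative only, not ClamBot's actual implementation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_ssrf_blocked(url: str) -> bool:
    """Return True if the URL resolves to a non-public address."""
    host = urlparse(url).hostname or ""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # treat unresolvable hosts as blocked
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Covers 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16,
        # 169.254.0.0/16, ::1, and fc00::/7 via the address properties below.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```

Resolving first and checking every returned address matters: a hostname with public DNS can still point at 127.0.0.1 or an internal range (DNS rebinding), so checking only the literal URL string is not enough.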
Every tool call from generated code goes through an approval gate:

```
Tool call arrives
├─ Check always_grants       → ALLOW immediately
├─ Check turn-scoped grants  → ALLOW if same resource
└─ Interactive prompt        → Allow Once / Allow Always (scoped) / Reject
```
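The gate above can be modeled as a short decision function. This is an illustrative sketch, assuming fingerprints are SHA-256 over the tool name plus canonical JSON args and eliding the richer per-tool scope matching ClamBot actually does:

```python
import hashlib
import json

def fingerprint(tool: str, args: dict) -> str:
    """SHA-256 over tool name + canonical (sorted, compact) JSON args."""
    canon = json.dumps(args, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{tool}:{canon}".encode()).hexdigest()

def decide(tool, args, always_grants, turn_grants, prompt):
    """Walk the gate: always-grants, then turn-scoped grants, then the user."""
    fp = fingerprint(tool, args)
    if any(g["tool"] == tool for g in always_grants):  # scope matching elided
        return "ALLOW"
    if fp in turn_grants:
        return "ALLOW"
    answer = prompt(tool, args)  # "once" / "always" / "reject"
    if answer == "always":
        turn_grants.add(fp)      # remembered for the rest of the turn
    return "ALLOW" if answer in ("once", "always") else "REJECT"
```

Hashing canonical JSON means two calls with the same tool and semantically identical arguments (regardless of key order) share one fingerprint, so a grant reliably covers repeats of the same call.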
Configure pre-approved patterns in `~/.clambot/config.json`:

```json
{
  "agents": {
    "approvals": {
      "enabled": true,
      "interactive": true,
      "alwaysGrants": [
        {"tool": "web_fetch", "scope": "host:api.coinbase.com"},
        {"tool": "fs", "scope": "workspace"}
      ]
    }
  }
}
```

ClamBot supports MCP: connect external tool servers and use them as native agent tools. Add to `~/.clambot/config.json`:
```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      }
    }
  }
}
```

All tools are callable from generated JavaScript clams via `await tool_name({...})`.
| Tool | Description |
|---|---|
| `fs` | Filesystem operations: read, write, edit, list |
| `http_request` | Authenticated HTTP with secret-based bearer tokens |
| `web_fetch` | URL content fetching |
| `cron` | Schedule management: add, list, remove jobs |
| `secrets_add` | Secret storage with multiple resolution sources |
| `memory_recall` | Read MEMORY.md durable facts |
| `memory_search_history` | Search HISTORY.md interaction summaries |
| `echo` | Debug output tool |
| Command | Description |
|---|---|
| `uv run clambot onboard` | Initialize config & workspace (auto-detects providers) |
| `uv run clambot agent -m "..."` | Run a single agent turn |
| `uv run clambot agent` | Interactive chat mode (REPL) |
| `uv run clambot gateway` | Start the gateway (Telegram + cron + heartbeat) |
| `uv run clambot status` | Show provider readiness |
| `uv run clambot provider login openai-codex` | OAuth login for Codex |
| `uv run clambot channels connect telegram` | Interactive Telegram setup |
| `uv run clambot cron list` | List scheduled jobs |
| `uv run clambot cron add --name "daily" --message "Hello" --cron "0 9 * * *"` | Add a cron job |
| `uv run clambot cron remove <job_id>` | Remove a cron job |

Interactive mode exits with `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
```
clambot/
├── agent/           # Core agent logic (loop, selector, generator, runtime, approvals)
│   ├── loop.py        # Agent pipeline orchestration
│   ├── selector.py    # Two-stage clam routing (pre-selection + LLM)
│   ├── generator.py   # LLM-based JavaScript generation
│   ├── runtime.py     # WASM execution wrapper + timeout/cancellation
│   ├── approvals.py   # Capability-gated approval gate
│   └── tools/         # Built-in tool implementations
├── bus/             # Async message routing (inbound + outbound queues)
├── channels/        # Chat channel integrations (Telegram)
├── cli/             # Typer CLI commands
├── config/          # Config schema (Pydantic) + loader
├── cron/            # Persistent timezone-aware job scheduling
├── gateway/         # Gateway orchestrator (connects all subsystems)
├── heartbeat/       # Proactive scheduled agent wakeup
├── memory/          # Long-term memory (MEMORY.md + HISTORY.md)
├── providers/       # LLM provider layer (LiteLLM, Codex, custom)
├── session/         # Conversation session management (JSONL)
├── tools/           # Built-in tool implementations
├── utils/           # Shared utilities (tracked tasks, text processing)
└── workspace/       # Workspace bootstrap + onboarding
```
```
User request: "What is the price of BTC?"
   │
   ├─ Pre-selection: exact match against existing clams? → YES → reuse (zero LLM cost)
   │                                                     → NO  ↓
   ├─ Selector LLM: generate_new / select_existing / chat
   │
   ├─ Generator LLM → JavaScript clam:
   │      async function run(args) {
   │        const res = await http_request({
   │          method: "GET",
   │          url: "https://api.coinbase.com/v2/prices/BTC-USD/spot"
   │        });
   │        return JSON.parse(res.content).data;
   │      }
   │
   ├─ WASM Sandbox executes clam
   │      └─ http_request → Approval Gate → Python host dispatch → result
   │
   ├─ Post-Runtime Analyzer: ACCEPT   → promote to clams/ for future reuse
   │                         SELF_FIX → retry with fix instructions (up to 3×)
   │                         REJECT   → return error
   │
   └─ Response delivered → background memory extraction (fire-and-forget)
```
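The zero-cost pre-selection step is essentially a normalized lookup against previously promoted clams, done before any LLM is consulted. A minimal sketch (class and method names here are invented for illustration):

```python
def normalize(request: str) -> str:
    """Case-fold and collapse whitespace so trivial variations still match."""
    return " ".join(request.lower().split())

class ClamStore:
    """Toy store mapping normalized requests to promoted clam sources."""

    def __init__(self):
        self._by_request = {}

    def promote(self, request: str, source: str) -> None:
        """Called after the analyzer returns ACCEPT."""
        self._by_request[normalize(request)] = source

    def preselect(self, request: str):
        """Exact-match lookup; None means fall through to the selector LLM."""
        return self._by_request.get(normalize(request))
```

On a hit, the stored script is executed directly, which is what makes repeat requests free of both latency and token cost; only a miss proceeds to the selector and generator LLM calls.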
All LLM-generated code runs inside amla-sandbox:

- QuickJS JavaScript engine compiled to WebAssembly, executed via Wasmtime
- Memory isolation: the sandbox cannot access host memory
- No ambient network: all HTTP goes through approved tool calls
- Capability-gated tools: each tool call yields to Python for approval
- Timeout + cancellation: configurable limits with graceful shutdown
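One way to picture the "tool calls yield back to Python" contract is a coroutine: the sandboxed program suspends at each tool call, and the host decides whether to dispatch it. This is a conceptual model only, not the amla-sandbox API:

```python
def host_dispatch(program, tools, approve):
    """Drive a generator-based 'program' that yields (tool, args) requests.

    The host checks approval, runs the tool, and sends the result back in;
    the program's return value becomes the final result.
    """
    gen = program()
    result = None
    try:
        while True:
            tool, args = gen.send(result)        # program suspends at a tool call
            if not approve(tool, args):          # approval gate on the host side
                result = {"error": "rejected"}
            else:
                result = tools[tool](**args)     # capability-checked dispatch
    except StopIteration as done:
        return done.value                        # program finished
```

The key property this models: the guest never calls the network or filesystem itself; every effect is a request the host is free to refuse, which is what makes the approval gate enforceable.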
| File | Contents |
|---|---|
| docs/architecture.md | System architecture, data flow, concurrency model |
| docs/features.md | All features with implementation details |
| docs/modules.md | Complete module list with descriptions |
| docs/tech-stack.md | Dependencies, versions, external services |
| docs/configuration.md | Config schema, environment variables, workspace layout |
| docs/sandbox.md | WASM execution model, sandbox limitations |
| docs/telegram-ux.md | Telegram integration, UX flows |
| docs/cron.md | Cron scheduling, job lifecycle |
PRs welcome! See CONTRIBUTING.md for dev setup, testing, and code conventions.

```bash
# Dev setup
uv venv && uv pip install -e ".[dev]"

# Run tests
uv run pytest tests/ -x -v

# Lint
ruff check . && ruff format --check .
```

MIT © ClamBot Contributors 2026

ClamBot is for educational, research, and technical exchange purposes.