# 🐚 ClamBot: Secure AI Agent with WASM Sandbox Execution


🐚 **ClamBot** is a security-focused personal AI assistant that runs all LLM-generated code inside a WASM sandbox (QuickJS inside Wasmtime), eliminating the arbitrary code execution risks of `exec()`/`subprocess.run()` patterns common in other agent frameworks.

✨ Inspired by OpenClaw and nanobot.

🔒 Many agent frameworks run LLM-generated code directly on your machine. ClamBot isolates it:

1. 🤖 LLM generates a JavaScript "clam" (a named, versioned, reusable script)
2. 📦 The clam runs inside amla-sandbox (WASM/QuickJS) with memory isolation
3. ✅ Tool calls yield back to Python for capability-checked, approval-gated dispatch
4. ♻️ Successful clams are persisted and reused for identical future requests: zero latency, zero cost

## ✨ Key Features

- 🔒 **WASM Sandbox Execution**: all generated code runs in QuickJS/Wasmtime with memory isolation and no ambient network access
- 🛡️ **Interactive Approval Gate**: SHA-256-fingerprinted tool approvals with always-grants, turn-scoped grants, and per-tool scope options
- ♻️ **Clam Reuse**: successful scripts are promoted and reused for identical requests without any LLM call
- 🔧 **Self-Fix Loop**: up to 3 automatic retries with LLM-guided fix instructions on runtime failures
- 🤖 **Multi-Provider LLM**: OpenRouter, Anthropic, OpenAI, Gemini, DeepSeek, Groq, Ollama, OpenAI Codex (OAuth), and custom endpoints
- 💬 **Telegram Integration**: typing indicators, phase status messages, MarkdownV2 rendering, inline approval keyboards, file uploads
- 🧠 **Long-Term Memory**: MEMORY.md (durable facts auto-injected into prompts) + HISTORY.md (searchable interaction summaries)
- ⏰ **Cron Scheduling**: persistent timezone-aware jobs with `cron`, `every`, and `at` schedule types
- 💓 **Heartbeat Service**: proactive agent wakeup with task-driven execution from HEARTBEAT.md
- 🔑 **Host-Managed Secrets**: atomic-write store with 0600 permissions; secrets never appear in tool args, logs, or traces
- 🌐 **SSRF Protection**: private-IP blocking on all outbound HTTP tools
- 📝 **Session Compaction**: automatic LLM-summarized compaction to prevent context-window overflow

πŸ—οΈ Architecture

```
┌────────────────────────────────────────────────────────────────┐
│  Inbound Sources                                               │
│  ┌───────────┐  ┌─────────────┐  ┌─────────────┐  ┌──────────┐ │
│  │ 💬 Telegram│  │ ⏰ Cron     │  │ 💓 Heartbeat│  │ 🖥️ CLI   │ │
│  └─────┬─────┘  └──────┬──────┘  └──────┬──────┘  └────┬─────┘ │
└────────┼───────────────┼────────────────┼──────────────┼───────┘
         ▼               ▼                ▼              ▼
┌────────────────────────────────────────────────────────────────┐
│  🎛️ Gateway Orchestrator                                       │
│  /approve · /secret · /new command routing                     │
└────────────────────────┬───────────────────────────────────────┘
                         ▼
┌────────────────────────────────────────────────────────────────┐
│  🧠 Agent Pipeline                                             │
│                                                                │
│  1. 📂 Session load + auto-compaction                          │
│  2. 🔀 Clam Selector (pre-selection → LLM routing)             │
│  3. ⚡ Clam Generator (LLM → JavaScript)                       │
│  4. 📦 WASM Runtime (QuickJS sandbox + approval-gated tools)   │
│  5. 🔍 Post-Runtime Analyzer (ACCEPT / SELF_FIX / REJECT)      │
│  6. 🧠 Background memory extraction (fire-and-forget)          │
└────────────────────────┬───────────────────────────────────────┘
                         ▼
┌────────────────────────────────────────────────────────────────┐
│  📤 Outbound → Telegram / CLI                                  │
└────────────────────────────────────────────────────────────────┘
```

## 📦 Install

```bash
git clone https://github.com/clamguy/clambot.git
cd clambot
uv venv && uv pip install -e .
```

## 🚀 Quick Start

> **Tip**
> Get API keys: OpenRouter (recommended, access to all models) · Anthropic · OpenAI

1. 🎬 **Initialize**: auto-discovers API keys from the environment and sets up the workspace:

```bash
# Set your API key (provider auto-detected by onboard)
export OPENROUTER_API_KEY="sk-or-v1-xxx"

# Initialize workspace + config
uv run clambot onboard
```

`uv run clambot onboard` scans your environment variables, probes local Ollama, and generates `~/.clambot/config.json` with everything it finds. No manual editing needed.

2. ✅ **Verify**

```bash
uv run clambot status
```

3. 💬 **Chat**

```bash
uv run clambot agent
```

That's it! You have a working sandboxed AI assistant in under a minute. 🎉

> **Note**
> If you need to tweak settings later, edit `~/.clambot/config.json`; see ⚙️ Configuration below.

## 💬 Telegram

Connect ClamBot to Telegram for a full mobile experience with inline approval buttons, typing indicators, and phase status messages.

1. 🤖 **Create a bot**: open Telegram, search @BotFather, send /newbot, follow the prompts, and copy the token.

2. 🔗 **Connect**: the interactive command handles everything:

```bash
uv run clambot channels connect telegram
# Enter bot token → press "Connect" in bot → user ID auto-added → done!
```

3. 🚀 **Run the gateway**

```bash
uv run clambot gateway
```

That's it: message your bot on Telegram and ClamBot responds! 🎉

πŸ“ Manual configuration (advanced)

If you prefer to configure manually, add the following to `~/.clambot/config.json`:

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

`allowFrom`: leave empty to allow all users, or add user IDs/usernames to restrict access.
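
The whitelist rule is simple enough to capture in one line. A hypothetical `is_allowed` helper (illustrative, not ClamBot's actual code) showing the "empty means allow everyone" semantics:

```python
def is_allowed(sender: str, allow_from: list[str]) -> bool:
    """Empty whitelist admits everyone; otherwise the sender must be listed."""
    return not allow_from or str(sender) in allow_from
```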

## 🤖 Providers

ClamBot supports multiple LLM backends through a registry-driven provider layer. Set an API key via environment and run `uv run clambot onboard`; the provider is auto-detected.

| Provider | Purpose | Setup |
|---|---|---|
| `openrouter` | 🌐 LLM (recommended, access to all models) | `export OPENROUTER_API_KEY=sk-or-...` |
| `anthropic` | 🧠 LLM (Claude direct) | `export ANTHROPIC_API_KEY=sk-ant-...` |
| `openai` | 💡 LLM (GPT direct) | `export OPENAI_API_KEY=sk-...` |
| `deepseek` | 🔬 LLM (DeepSeek direct) | `export DEEPSEEK_API_KEY=...` |
| `gemini` | 💎 LLM (Gemini direct) | `export GEMINI_API_KEY=...` |
| `groq` | 🎙️ LLM + voice transcription (Whisper) | `export GROQ_API_KEY=...` |
| `ollama` | 🏠 LLM (local, any model) | `ollama serve` (auto-probed) |
| `openai_codex` | ⚡ LLM (Codex, OAuth) | `uv run clambot provider login openai-codex` |
| `custom` | 🔌 Any OpenAI-compatible endpoint | Config only; see below |
```bash
# Example: set up with OpenRouter
export OPENROUTER_API_KEY="sk-or-v1-xxx"
uv run clambot onboard    # auto-detects provider + model
uv run clambot status     # verify provider is ready ✅
uv run clambot agent      # start chatting 💬
```
### ⚡ OpenAI Codex (OAuth)

Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.

```bash
# 1. Login (opens browser)
uv run clambot provider login openai-codex

# 2. Chat (model auto-configured)
uv run clambot agent -m "Hello!"
```
### 🔌 Custom Provider (any OpenAI-compatible API)

Connects directly to any OpenAI-compatible endpoint (LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server). Add to `~/.clambot/config.json`:

```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
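
To make "OpenAI-compatible" concrete, here is a sketch of how such a request is shaped from the config fields above. The `build_chat_request` helper is illustrative, not part of ClamBot:

```python
import json

def build_chat_request(api_base: str, api_key: str, model: str, messages: list) -> dict:
    """Shape a request for any OpenAI-compatible /chat/completions endpoint."""
    return {
        "url": api_base.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "messages": messages}),
    }
```

Any server that accepts this shape (LM Studio, llama.cpp's server, etc.) can sit behind the `custom` provider.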

### 🏠 Ollama (local)

Start Ollama and let `onboard` auto-detect it:

```bash
# 1. Start Ollama
ollama serve

# 2. Onboard auto-probes Ollama and discovers available models
uv run clambot onboard

# 3. Chat
uv run clambot agent
```

βš™οΈ Configuration

Config file: `~/.clambot/config.json` (auto-generated by `uv run clambot onboard`)

📖 See docs/configuration.md for the full schema reference.

## 🔒 Security

> **Tip**
> For production deployments, set `"restrictToWorkspace": true` in your tools config to sandbox file access.

| Option | Default | Description |
|---|---|---|
| `tools.filesystem.restrictToWorkspace` | `true` | 📁 Restricts the filesystem tool to the workspace directory. Prevents path traversal. |
| `security.sslFallbackInsecure` | `false` | 🔓 When `true`, HTTP tools retry with `verify=False` on SSL errors. Only for sandboxed environments. |
| `channels.telegram.allowFrom` | `[]` (allow all) | 👤 Whitelist of user IDs. Empty = allow everyone. |
| SSRF protection | Always on | 🌐 Blocks requests to 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7 |
| Secret redaction | Always on | 🔑 Secret values never appear in tool args, events, approval records, or logs |
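
The SSRF blocklist above maps directly onto Python's stdlib `ipaddress` module. A minimal sketch (not the actual implementation; in particular, a real check must resolve hostnames to IPs before testing them, or an attacker-controlled DNS name can bypass it):

```python
import ipaddress

BLOCKED_NETS = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12",
    "192.168.0.0/16", "169.254.0.0/16", "::1/128", "fc00::/7",
)]

def is_blocked(host: str) -> bool:
    """Return True if host is a literal IP inside a private/loopback range."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname; a real check must resolve it to IPs first
    return any(addr in net for net in BLOCKED_NETS)
```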

πŸ›‘οΈ Tool Approvals

Every tool call from generated code goes through an approval gate:

πŸ” Tool call arrives
β”œβ”€ βœ… Check always_grants β†’ ALLOW immediately
β”œβ”€ πŸ”„ Check turn-scoped grants β†’ ALLOW if same resource
└─ πŸ™‹ Interactive prompt β†’ Allow Once / Allow Always (scoped) / Reject

Configure pre-approved patterns in `~/.clambot/config.json`:

```json
{
  "agents": {
    "approvals": {
      "enabled": true,
      "interactive": true,
      "alwaysGrants": [
        {"tool": "web_fetch", "scope": "host:api.coinbase.com"},
        {"tool": "fs", "scope": "workspace"}
      ]
    }
  }
}
```
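
The SHA-256 fingerprinting behind the gate can be illustrated with a toy version. The canonical-JSON scheme and the `decide` function here are assumptions for illustration, not ClamBot's actual wire format:

```python
import hashlib
import json

def fingerprint(tool: str, args: dict) -> str:
    """Hash a tool call canonically so identical calls produce identical IDs."""
    payload = json.dumps({"tool": tool, "args": args}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def decide(tool: str, args: dict, always: set, turn: set) -> str:
    fp = fingerprint(tool, args)
    if fp in always:
        return "ALLOW"   # permanent grant
    if fp in turn:
        return "ALLOW"   # grant scoped to the current turn
    return "PROMPT"      # fall through to the interactive gate
```

Because the JSON is serialized with sorted keys, argument order can't produce two different fingerprints for the same call.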

## 🔌 MCP (Model Context Protocol)

ClamBot supports MCP: connect external tool servers and use them as native agent tools. Add to `~/.clambot/config.json`:

```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      }
    }
  }
}
```

## 🧰 Built-In Tools

All tools are callable from generated JavaScript clams via `await tool_name({...})`.

| Tool | Description |
|---|---|
| 📁 `fs` | Filesystem operations: read, write, edit, list |
| 🌐 `http_request` | Authenticated HTTP with secret-based bearer tokens |
| 🔗 `web_fetch` | URL content fetching |
| ⏰ `cron` | Schedule management: add, list, remove jobs |
| 🔑 `secrets_add` | Secret storage with multiple resolution sources |
| 🧠 `memory_recall` | Read MEMORY.md durable facts |
| 🔍 `memory_search_history` | Search HISTORY.md interaction summaries |
| 📢 `echo` | Debug output tool |
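
Host-side, a tool table like this is typically a name-to-function registry. An illustrative sketch of the dispatch pattern (the decorator, names, and dict shapes are assumptions, not ClamBot's actual API):

```python
TOOLS: dict = {}

def tool(name: str):
    """Decorator registering a host tool under the name clams call it by."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("echo")
def echo(args: dict) -> dict:
    return {"output": args.get("text", "")}

def dispatch(name: str, args: dict) -> dict:
    """Route a sandboxed tool call to its host implementation."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](args)
```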

## 🖥️ CLI Reference

| Command | Description |
|---|---|
| `uv run clambot onboard` | 🎬 Initialize config & workspace (auto-detects providers) |
| `uv run clambot agent -m "..."` | 💬 Run a single agent turn |
| `uv run clambot agent` | 🔄 Interactive chat mode (REPL) |
| `uv run clambot gateway` | 🚀 Start the gateway (Telegram + cron + heartbeat) |
| `uv run clambot status` | ✅ Show provider readiness |
| `uv run clambot provider login openai-codex` | 🔑 OAuth login for Codex |
| `uv run clambot channels connect telegram` | 💬 Interactive Telegram setup |
| `uv run clambot cron list` | 📋 List scheduled jobs |
| `uv run clambot cron add --name "daily" --message "Hello" --cron "0 9 * * *"` | ➕ Add a cron job |
| `uv run clambot cron remove <job_id>` | ❌ Remove a cron job |

Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
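
The three schedule types (`cron`, `every`, `at`) differ only in how the next fire time is computed. A stdlib-only sketch for the latter two; cron expressions need a real parser and are stubbed here, and the schedule-dict shape is an assumption:

```python
from datetime import datetime, timedelta

def next_run(schedule: dict, now: datetime) -> datetime:
    """Compute the next fire time for 'every' and one-shot 'at' schedules."""
    kind = schedule["type"]
    if kind == "every":
        return now + timedelta(seconds=schedule["seconds"])
    if kind == "at":
        when = datetime.fromisoformat(schedule["at"])
        if when <= now:
            raise ValueError("one-shot 'at' job has already elapsed")
        return when
    raise NotImplementedError("cron expressions require a cron parser")
```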

πŸ“ Project Structure

```
clambot/
├── agent/             # 🧠 Core agent logic (loop, selector, generator, runtime, approvals)
│   ├── loop.py        #    Agent pipeline orchestration
│   ├── selector.py    #    Two-stage clam routing (pre-selection + LLM)
│   ├── generator.py   #    LLM-based JavaScript generation
│   ├── runtime.py     #    WASM execution wrapper + timeout/cancellation
│   ├── approvals.py   #    Capability-gated approval gate
│   └── tools/         #    Built-in tool implementations
├── bus/               # 🚌 Async message routing (inbound + outbound queues)
├── channels/          # 💬 Chat channel integrations (Telegram)
├── cli/               # 🖥️ Typer CLI commands
├── config/            # ⚙️ Config schema (Pydantic) + loader
├── cron/              # ⏰ Persistent timezone-aware job scheduling
├── gateway/           # 🎛️ Gateway orchestrator (connects all subsystems)
├── heartbeat/         # 💓 Proactive scheduled agent wakeup
├── memory/            # 🧠 Long-term memory (MEMORY.md + HISTORY.md)
├── providers/         # 🤖 LLM provider layer (LiteLLM, Codex, custom)
├── session/           # 💬 Conversation session management (JSONL)
├── tools/             # 🧰 Built-in tool implementations
├── utils/             # 🔧 Shared utilities (tracked tasks, text processing)
└── workspace/         # 📂 Workspace bootstrap + onboarding
```

## 🔬 How It Works

### 🐚 The Clam Lifecycle

```
User request: "What is the price of BTC?"
│
├─ ♻️ Pre-selection: exact match against existing clams? → YES → reuse (zero LLM cost)
│                                                         → NO  ↓
├─ 🔀 Selector LLM: generate_new / select_existing / chat
│
├─ ⚡ Generator LLM → JavaScript clam:
│   async function run(args) {
│     const res = await http_request({
│       method: "GET",
│       url: "https://api.coinbase.com/v2/prices/BTC-USD/spot"
│     });
│     return JSON.parse(res.content).data;
│   }
│
├─ 📦 WASM Sandbox executes clam
│   └─ http_request → 🛡️ Approval Gate → Python host dispatch → result
│
├─ 🔍 Post-Runtime Analyzer: ACCEPT → promote to clams/ for future reuse
│                            SELF_FIX → retry with fix instructions (up to 3×)
│                            REJECT → return error
│
└─ 📤 Response delivered → 🧠 background memory extraction (fire-and-forget)
```
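
The zero-cost reuse step at the top of the lifecycle amounts to a request-keyed cache of promoted scripts. A sketch under assumed naming (ClamBot's real matcher may normalize requests differently):

```python
import hashlib

class ClamStore:
    """Cache promoted clams keyed by a normalized request string."""

    def __init__(self) -> None:
        self._clams: dict[str, str] = {}

    @staticmethod
    def _key(request: str) -> str:
        return hashlib.sha256(request.strip().lower().encode()).hexdigest()

    def promote(self, request: str, source: str) -> None:
        """Store a clam that the analyzer ACCEPTed."""
        self._clams[self._key(request)] = source

    def lookup(self, request: str):
        """Exact-match pre-selection: a hit means zero LLM calls."""
        return self._clams.get(self._key(request))
```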

### 📦 WASM Sandbox Model

All LLM-generated code runs inside amla-sandbox:

- 🏗️ QuickJS JavaScript engine compiled to WebAssembly, run via Wasmtime
- 🔒 Memory isolation: the sandbox cannot access host memory
- 🚫 No ambient network: all HTTP goes through approved tool calls
- ✅ Capability-gated tools: each tool call yields to Python for approval
- ⏱️ Timeout + cancellation: configurable limits with graceful shutdown
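
The "tool calls yield to Python" model can be illustrated with a plain generator standing in for the WASM guest: each `yield` is a tool request, and the host resumes the script with the result. All names here are illustrative, not ClamBot's actual interface:

```python
def drive(clam, host_dispatch):
    """Run a guest script, servicing each yielded tool call on the host."""
    gen = clam()
    result = None
    try:
        while True:
            name, args = gen.send(result)       # guest suspends with a request
            result = host_dispatch(name, args)  # approval-gated host call
    except StopIteration as finished:
        return finished.value

def price_clam():
    # Stand-in for a sandboxed script: one tool call, then a return value
    res = yield ("web_fetch", {"url": "https://api.example.com/spot"})
    return {"price": res}
```

The real sandbox suspends a WASM instance rather than a Python generator, but the control flow (guest suspends, host decides, guest resumes) is the same shape.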

## 📚 Documentation

| File | Contents |
|---|---|
| docs/architecture.md | 🏗️ System architecture, data flow, concurrency model |
| docs/features.md | ✨ All features with implementation details |
| docs/modules.md | 📦 Complete module list with descriptions |
| docs/tech-stack.md | 🔧 Dependencies, versions, external services |
| docs/configuration.md | ⚙️ Config schema, environment variables, workspace layout |
| docs/sandbox.md | 📦 WASM execution model, sandbox limitations |
| docs/telegram-ux.md | 💬 Telegram integration, UX flows |
| docs/cron.md | ⏰ Cron scheduling, job lifecycle |

## 🤝 Contributing

PRs welcome! See CONTRIBUTING.md for dev setup, testing, and code conventions. 🤗

```bash
# Dev setup
uv venv && uv pip install -e ".[dev]"

# Run tests
uv run pytest tests/ -x -v

# Lint
ruff check . && ruff format --check .
```

## 📄 License

MIT · ClamBot Contributors 2026

🐚 ClamBot is for educational, research, and technical exchange purposes.
