A self-hosted, multi-agent service framework built in Rust.
Agents as services, not scripts.
ClawParty is a production-grade agent hosting framework that turns LLM agents into always-on, multi-channel services. It is designed from day one as a persistent service: it runs 24/7, connects to messaging platforms like Telegram, and manages multiple independent conversations concurrently.
💬 Multi-conversation via group chats | 🔄 Interruptible turns with live feedback | ⚙️ TUI config editor
There are several ways to run LLM agents today. Here's where ClawParty fits:
| | Claude Code | OpenClaw | ClawParty |
|---|---|---|---|
| What is it | Anthropic's official CLI coding agent | Open-source personal AI assistant with massive channel ecosystem | Self-hosted multi-agent service framework |
| Primary use case | Interactive coding in a terminal | Personal automation across 90+ messaging platforms and companion apps | Developer / engineering workflows as always-on services |
| Stack | Closed-source, Node.js | Open-source, TypeScript/Node.js (~1M LoC) | Open-source, Rust (~54K LoC) |
| Runtime model | CLI process, one session at a time | WebSocket Gateway daemon + Node runner; companion apps (macOS, iOS, Android) + ACP bridge for IDE integration | systemd daemon, single binary, multi-conversation |
| Channels | Terminal only | 90+ extensions (Telegram, WhatsApp, Discord, Slack, Feishu, Teams, IRC, …) | Telegram, DingTalk, CLI; extensible |
| Agent topology | Single agent | Multi-agent: subagent registry, skill-based routing, agent scopes | Main + sub-agents + background agents, per-conversation with configurable sinks |
| Interruption | Kill & restart | chat.abort cancels active runs | Mid-turn yield at safe tool boundaries: new messages interrupt gracefully, no work lost, agent sees new context |
| Sandbox | OS-level only | Docker containers with remote FS bridge, SSH backend, network modes | Three modes: disabled / subprocess / bubblewrap (Linux namespace isolation, no Docker dependency) |
| Memory | Session-scoped | Context engine with compaction, session transcripts, QMD memory format | Multi-layer: threshold / idle / timeout-observation compaction + conversation memory files + shared user/identity profiles |
| Scheduling | None | Cron, webhooks, wakeups | Cron with checker commands + configurable sink routing (direct, broadcast, multi-target) |
| Skills / Plugins | None | 53 built-in skills + 5,400+ via ClawHub + MCP server support | SKILL.md-based reusable workflows with runtime change detection + persistent skill memory |
| Model flexibility | Claude only | Multi-provider (Anthropic, OpenAI, Google, DeepSeek, vLLM, Groq, …) with model fallback chains | Multiple providers per instance (OpenRouter, Codex WS, custom), per-conversation model switching |
| Coding tool depth | Deep (native) | Full tool suite (bash, file I/O, web, subagents, image, TTS, video, MCP tools, canvas) | Deep: 40+ built-in tools (file I/O, shell with PTY, grep/glob, patch, sub-agents, cron, workspace management) |
| Config UX | CLI flags | YAML config + TUI wizard + Control UI web panel | JSON config + built-in Ratatui TUI editor with one-key bootstrap |
| Codebase weight | Closed | ~6K source files, ~1M lines, 90+ extensions, 4 companion apps | 2 crates, ~50 source files, ~54K lines; single binary, no runtime dependencies |
TL;DR: Claude Code is the polished single-user coding CLI. OpenClaw is the full-featured personal assistant platform with massive channel and plugin coverage, companion apps, and IDE integration. ClawParty is a lean, Rust-native service focused on engineering workflows: deep coding tools, crash-safe state persistence, cooperative turn interruption, and the ability to run multiple isolated agent conversations as a single lightweight daemon.
- Service-first: Agents run as daemons, not interactive CLI programs
- Conversation = Group Chat: Each Telegram group with the bot is an independent conversation with its own workspace, model, and agent state
- Graceful interruption: User messages yield running turns at safe boundaries; no lost work
- Crash-safe: All state is persisted; process restarts resume where they left off
Traditional CLI agents are blocking: once a task starts, you wait silently until it finishes, or you kill it and lose everything. ClawParty solves this with two key mechanisms:

🔄 Interruptible tools: when you send a new message while the agent is working, ClawParty yields the current turn at a safe boundary. The agent sees your new input, adjusts its plan, and continues. No work is lost, no restart needed.

💬 user_tell, mid-turn progress messages: the agent can push status updates to you while still working, as separate chat bubbles. You see what's happening in real time instead of staring at a spinner.

Together, these turn a one-shot request-response pattern into a continuous, collaborative conversation, even during long-running tasks like code generation, web research, or multi-file refactoring.

Real interaction: the user sends follow-up instructions mid-task; the agent acknowledges immediately and adapts.
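The yield mechanism can be sketched roughly as follows. All names here (`Step`, `run_turn`) are illustrative, not ClawParty's actual API; the point is only the shape of checking for new input at safe tool boundaries:

```rust
use std::sync::mpsc::Receiver;

// Illustrative sketch only: types and names are hypothetical, not
// ClawParty's real internals. It shows yielding at safe tool
// boundaries instead of killing a running turn.
enum Step {
    ToolCall(&'static str),
    Done,
}

/// Run a turn, checking the inbox for new user messages at each
/// tool boundary. Returns the mid-turn messages if the turn yielded.
fn run_turn(plan: Vec<Step>, inbox: &Receiver<String>) -> Option<Vec<String>> {
    for step in plan {
        // Safe boundary: no tool is mid-flight here.
        let pending: Vec<String> = inbox.try_iter().collect();
        if !pending.is_empty() {
            // Yield: finished tool results are kept; the new messages
            // are fed into the agent's context before it continues.
            return Some(pending);
        }
        if let Step::ToolCall(name) = step {
            println!("running tool: {name}");
        }
    }
    None
}
```

Because the check happens between tool calls rather than inside them, completed work is never discarded; the agent simply replans with the new context.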
Each Telegram group chat with the bot creates an independent conversation with its own:
- Workspace (isolated filesystem)
- Model & agent backend selection
- Session history & memory
- Sandbox mode
Create a new group, add the bot, and you have a fresh conversation; no commands needed. Each conversation is fully isolated: different groups can use different models, sandboxes, and skill sets simultaneously.
ClawParty handles multimodal content natively across the full pipeline. It is not bolted on, but designed into the channel and tool layers from the start:
| Capability | How it works |
|---|---|
| 📷 Image input | Send images in chat → vision-capable models see them directly |
| 🎨 Image generation | image_generate tool → helper model creates images → delivered back in chat |
| 📄 PDF input | Send PDF files → content extracted and passed to the model |
| 🎵 Audio input | Send voice messages → transcribed and injected into context |
| 📎 File attachments | Upload any file → stored in workspace, accessible to all tools |
| 🖼️ Image output | Agents generate plots, diagrams, screenshots → sent as chat attachments |
Multimodal routing is per-model configurable: each model declares its capabilities (e.g., image_in, audio_in), and helper models can be assigned for specific tooling tasks (image generation, web search, etc.). The channel layer handles format conversion transparently; the agent just works.
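For example, a vision- and audio-capable model's entry could look like this (field names follow the sample config later in this README; the model choice and capability list are illustrative):

```json
{
  "models": {
    "vision": {
      "type": "openrouter",
      "model": "anthropic/claude-sonnet-4",
      "capabilities": ["chat", "image_in", "audio_in"]
    }
  }
}
```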
40+ built-in tools available to every agent, covering:
| Category | Tools |
|---|---|
| File I/O | file_read, file_write, edit, apply_patch |
| Repository exploration | glob, grep, ls |
| Shell execution | exec_start, exec_observe, exec_wait, exec_kill (with PTY support) |
| Web | web_fetch, web_search (interruptible) |
| Image | image_generate, image_load |
| Downloads | file_download_start, file_download_progress, file_download_wait, file_download_cancel |
| Agent coordination | subagent_start, subagent_join, subagent_kill, start_background_agent |
| Scheduling | create_cron_task, update_cron_task, remove_cron_task, list_cron_tasks |
| Memory & workspace | workspaces_list, workspace_mount, workspace_content_move, shared_profile_upload |
| Skills | skill_load, skill_create, skill_update |
| Communication | user_tell (mid-turn progress messages) |
Tools are classified as immediate (return promptly) or interruptible (can be yielded when a new user message arrives). Long-running exec, file_download, and image tasks survive across turns and context compactions.
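The immediate / interruptible split could be pictured like this. The classification function is a sketch: the tool names come from the tables above, but which tools are interruptible beyond the web tools is an assumption here, and the real logic lives in ClawParty's tool layer:

```rust
// Sketch of the immediate / interruptible distinction; illustrative,
// not ClawParty's actual classification.
#[derive(Debug, PartialEq)]
enum ToolKind {
    /// Returns promptly; the turn never yields inside it.
    Immediate,
    /// Long-running; the turn may yield when a new user message arrives.
    Interruptible,
}

fn classify(tool: &str) -> ToolKind {
    match tool {
        // Assumed interruptible set; not exhaustive.
        "web_fetch" | "web_search" | "exec_wait" | "file_download_wait" => {
            ToolKind::Interruptible
        }
        _ => ToolKind::Immediate,
    }
}
```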
Each conversation supports a three-tier agent hierarchy:
```
Conversation
├── Main Foreground Agent (user-facing, one active turn at a time)
├── Sub-Agents (session-bound helpers, delegated tasks)
└── Background Agents (independent async work, report back via sinks)
```
- Main agent handles user messages, runs tools, manages workspace
- Sub-agents run bounded tasks in parallel (e.g., search, fact gathering)
- Background agents run independently and deliver results later via configurable sinks (direct message, broadcast topic, multi-target fan-out)
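The sink options for background-agent results might be modeled roughly as follows; the variant and field names are assumptions for illustration, not ClawParty's real types:

```rust
// Hypothetical model of configurable sinks for background-agent output.
#[derive(Debug, Clone)]
enum Sink {
    /// Deliver straight to one conversation.
    Direct { chat_id: i64 },
    /// Post to a broadcast topic.
    Broadcast { topic: String },
    /// Fan out to several targets.
    MultiTarget(Vec<Sink>),
}

/// Resolve a sink into the list of concrete deliveries for a message.
fn deliver(sink: &Sink, message: &str) -> Vec<String> {
    match sink {
        Sink::Direct { chat_id } => vec![format!("chat {chat_id}: {message}")],
        Sink::Broadcast { topic } => vec![format!("topic {topic}: {message}")],
        Sink::MultiTarget(targets) => {
            targets.iter().flat_map(|t| deliver(t, message)).collect()
        }
    }
}
```

The recursive `MultiTarget` variant is what makes multi-target fan-out compose naturally with the two basic delivery modes.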
ClawParty implements multi-layer context management to handle long-running conversations:
| Layer | Mechanism |
|---|---|
| Threshold compaction | Automatic compression when context approaches model limits |
| Idle compaction | Background compression between turns when conversation is idle |
| Timeout-observation compaction | Compress and retry when model times out on large context |
| High-fidelity zone | Recent messages preserved at full detail during compaction |
| Conversation memory | MEMORY.json + rollout summaries for cross-session recall |
| Shared profiles | USER.md / IDENTITY.md injected into every system prompt |
| Runtime change detection | Profile updates, skill changes, model catalog changes → synthetic system messages |
Running exec processes, active downloads, and alive sub-agents are preserved in compaction summaries so subsequent turns can continue using them.
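The threshold layer and the high-fidelity zone amount to something like the sketch below; the 80% trigger point and function names are assumptions, not ClawParty's actual values:

```rust
// Illustrative only: trigger compaction near the context limit while
// keeping the most recent messages at full detail.
fn should_compact(used_tokens: u32, context_window: u32) -> bool {
    // Assumed 80% trigger point; the real threshold is configurable.
    used_tokens * 10 >= context_window * 8
}

/// Split history into (compactable, high-fidelity) parts, preserving
/// the last `keep_recent` messages verbatim.
fn split_for_compaction(msgs: &[String], keep_recent: usize) -> (&[String], &[String]) {
    let cut = msgs.len().saturating_sub(keep_recent);
    msgs.split_at(cut)
}
```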
Skills are SKILL.md-based reusable workflows:
```
.skills/
└── web-report-deploy/
    ├── SKILL.md      # Instructions + trigger description
    ├── references/   # Reference files
    ├── scripts/      # Helper scripts
    └── assets/       # Static assets
```
- Discovery: Skill metadata is preloaded; agent loads full instructions on demand
- Persistence: skill_create / skill_update persist skills to the runtime store
- Shared state: .skill_memory/<skill-name>/ for cross-workspace persistent data
- Runtime sync: description or content changes trigger automatic notifications to the agent
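A minimal SKILL.md might look like the following. The contents are illustrative, and the frontmatter layout is an assumption here; ClawParty defines the exact metadata format:

```markdown
---
name: web-report-deploy
description: Build an HTML report from workspace data and deploy it.
---

# web-report-deploy

1. Gather inputs from the workspace and references/.
2. Render the report with the helpers in scripts/.
3. Deploy, then report the resulting URL back via user_tell.
```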
Three isolation levels, configurable per conversation via /sandbox:
| Mode | Isolation | Use case |
|---|---|---|
| disabled | None | Trusted environments, development |
| subprocess | Separate process | Basic isolation |
| bubblewrap | Linux namespace container | Production: restricted filesystem, network-aware |
Bubblewrap mode exposes only the current workspace, the runtime dir, .skills/, and .skill_memory/. DNS is forwarded, and read-only mounts are cleaned up on turn completion.
ClawParty ships with a built-in terminal UI for editing configurations β no need to hand-edit JSON:
```
./target/release/partyclaw config deploy_telegram.json
```

Sections include Models, Tooling, Main Agent, Runtime, Sandbox, and Channels. Supports keyboard navigation, inline validation (v), and one-key bootstrap (b) for new configs.
```
cp .env.example .env
# Fill in:
# OPENROUTER_API_KEY=sk-or-...
# TELEGRAM_BOT_TOKEN=...   (for Telegram channel)
```

```
# Build
cargo build --release --manifest-path agent_host/Cargo.toml --bin partyclaw

# Run with config
./target/release/partyclaw --config config.json --workdir ./workdir
```

```json
{
  "version": "0.14",
  "models": {
    "main": {
      "type": "openrouter",
      "api_endpoint": "https://openrouter.ai/api/v1",
      "model": "anthropic/claude-sonnet-4",
      "capabilities": ["chat", "image_in"],
      "api_key_env": "OPENROUTER_API_KEY",
      "context_window_tokens": 200000,
      "description": "Primary chat model"
    }
  },
  "agent": {
    "agent_frame": { "available_models": ["main"] }
  },
  "main_agent": { "language": "zh-CN" },
  "channels": [{
    "kind": "telegram",
    "id": "telegram-main",
    "bot_token_env": "TELEGRAM_BOT_TOKEN"
  }]
}
```

```
./target/release/partyclaw setup --config config.json --workdir ./workdir
# Generates systemd user unit files, then:
systemctl --user enable --now partyclaw
```

| Command | Description |
|---|---|
| /agent | Select agent backend & model |
| /status | Token usage, cache stats, cost estimation |
| /compact | One-off context compaction |
| /compact_mode | Toggle automatic compaction |
| /sandbox | Switch sandbox mode |
| /think | Toggle extended thinking |
| /set_api_timeout | Adjust per-request timeout |
| /continue | Resume an interrupted turn |
| /snapsave /snapload /snaplist | Conversation state snapshots |
| /help | Show available commands |
| Trigger | Action |
|---|---|
| Push / Pull Request | cargo fmt --check + cargo test for both crates |
| VERSION changed on main | Auto-tag vX.Y.Z + publish release binaries |
Built with 🦀 Rust · Powered by LLMs · Agents as Services

