中文 | 日本語 | Português | Tiếng Việt | Français | Italiano | Bahasa Indonesia | English
PicoClaw is an independent open-source project initiated by Sipeed. It is written entirely in Go — not a fork of OpenClaw, NanoBot, or any other project.
🦐 PicoClaw is an ultra-lightweight personal AI assistant inspired by NanoBot, refactored from the ground up in Go through a self-bootstrapping process in which the AI agent itself drove the entire architectural migration and code optimization.
⚡️ Runs on $10 hardware with <10MB RAM: That's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!
> [!CAUTION]
> 🚨 SECURITY & OFFICIAL CHANNELS / Security Notice
>
> - NO CRYPTO: PicoClaw has NO official token/coin. All claims on pump.fun or other trading platforms are SCAMS.
> - OFFICIAL DOMAIN: The ONLY official website is picoclaw.io; the company website is sipeed.com.
> - Warning: Many .ai/.org/.com/.net/... domains are registered by third parties.
> - Warning: PicoClaw is in early development and may have unresolved network security issues. Do not deploy it to production environments before the v1.0 release.
> - Note: PicoClaw has recently merged many PRs, which may increase the memory footprint (10–20MB) in recent versions. Resource optimization will be prioritized once the current feature set stabilizes.
2026-03-17 🚀 v0.2.3 Released! System tray UI (Windows & Linux), sub-agent status tracking (spawn_status), experimental gateway hot-reload, cron security gates, and 2 security fixes. PicoClaw now at 25K ⭐!
2026-03-09 🎉 v0.2.1 — Biggest update yet! MCP protocol support, 4 new channels (Matrix/IRC/WeCom/Discord Proxy), 3 new providers (Kimi/Minimax/Avian), vision pipeline, JSONL memory store, and model routing.
2026-02-28 📦 v0.2.0 released with Docker Compose support and Web UI launcher.
2026-02-26 🎉 PicoClaw hit 20K stars in just 17 days! Channel auto-orchestration and capability interfaces landed.
Older news...
2026-02-16 🎉 PicoClaw hit 12K stars in one week! Community maintainer roles and roadmap officially posted.
2026-02-13 🎉 PicoClaw hit 5000 stars in 4 days! Project Roadmap and Developer Group setup underway.
2026-02-09 🎉 PicoClaw Launched! Built in 1 day to bring AI Agents to $10 hardware with <10MB RAM. 🦐 PicoClaw, Let's Go!
🪶 Ultra-Lightweight: <10MB memory footprint, 99% less than OpenClaw for the same core functionality.*
💰 Minimal Cost: Efficient enough to run on $10 Hardware — 98% cheaper than a Mac mini.
⚡️ Lightning Fast: 400X faster startup; boots in <1 second even on a 0.6GHz single core.
🌍 True Portability: Single self-contained binary across RISC-V, ARM, MIPS, and x86, One-click to Go!
🤖 AI-Bootstrapped: Autonomous Go-native implementation — 95% Agent-generated core with human-in-the-loop refinement.
🔌 MCP Support: Native Model Context Protocol integration — connect any MCP server to extend agent capabilities.
👁️ Vision Pipeline: Send images and files directly to the agent — automatic base64 encoding for multimodal LLMs.
🧠 Smart Routing: Rule-based model routing — simple queries go to lightweight models, saving API costs.
*Recent versions may use 10–20MB due to rapid feature merges. Resource optimization is planned. Startup comparison based on 0.8GHz single-core benchmarks (see table below).
| | OpenClaw | NanoBot | PicoClaw |
|---|---|---|---|
| Language | TypeScript | Python | Go |
| RAM | >1GB | >100MB | <10MB* |
| Startup (0.8GHz core) | >500s | >30s | <1s |
| Cost | Mac Mini $599 | Most Linux SBC ~$50 | Any Linux board, as low as $10 |
📋 Hardware Compatibility List — See all tested boards, from $5 RISC-V to Raspberry Pi to Android phones. Your board not listed? Submit a PR!
| 🧩 Full-Stack Engineer | 🗂️ Logging & Planning Management | 🔎 Web Search & Learning |
|---|---|---|
| Develop • Deploy • Scale | Schedule • Automate • Memory | Discovery • Insights • Trends |
Give your decade-old phone a second life! Turn it into a smart AI Assistant with PicoClaw. Quick Start:
- Install Termux (Download from GitHub Releases, or search in F-Droid / Google Play).
- Run the following commands
# Download the latest release from https://github.com/sipeed/picoclaw/releases
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw_Linux_arm64.tar.gz
tar xzf picoclaw_Linux_arm64.tar.gz
pkg install proot
termux-chroot ./picoclaw onboard # chroot provides a standard Linux filesystem layout

Then follow the instructions in the "Quick Start" section to complete the configuration!
PicoClaw can be deployed on almost any Linux device!
- $9.9 LicheeRV-Nano E (Ethernet) or W (WiFi6) version, for Minimal Home Assistant
- $30~50 NanoKVM, or $100 NanoKVM-Pro for Automated Server Maintenance
- $50 MaixCAM or $100 MaixCAM2 for Smart Monitoring
*(Demo video: picoclaw_detect_person.mp4)*
🌟 More Deployment Cases Await!
Visit picoclaw.io — the official website auto-detects your platform and provides one-click download. No need to manually pick an architecture.
Alternatively, download the binary for your platform from the GitHub Releases page.
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
make deps
# Build, no need to install
make build
# Build for multiple platforms
make build-all
# Build for Raspberry Pi Zero 2 W (32-bit: make build-linux-arm; 64-bit: make build-linux-arm64)
make build-pi-zero
# Build And Install
make install

Raspberry Pi Zero 2 W: Use the binary that matches your OS: 32-bit Raspberry Pi OS → make build-linux-arm; 64-bit → make build-linux-arm64. Or run make build-pi-zero to build both.
For detailed guides, see the docs below. The README covers quick start only.
# 1. Clone this repo
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
# 2. First run — auto-generates docker/data/config.json then exits
docker compose -f docker/docker-compose.yml --profile gateway up
# The container prints "First-run setup complete." and stops.
# 3. Set your API keys
vim docker/data/config.json # Set provider API keys, bot tokens, etc.
# 4. Start
docker compose -f docker/docker-compose.yml --profile gateway up -d

> [!TIP]
> Docker Users: By default, the Gateway listens on 127.0.0.1, which is not reachable from the host. If you need to access the health endpoints or expose ports, set PICOCLAW_GATEWAY_HOST=0.0.0.0 in your environment or update config.json.
# 5. Check logs
docker compose -f docker/docker-compose.yml logs -f picoclaw-gateway
# 6. Stop
docker compose -f docker/docker-compose.yml --profile gateway down

The launcher image includes all three binaries (picoclaw, picoclaw-launcher, picoclaw-launcher-tui) and starts the web console by default, providing a browser-based UI for configuration and chat.
docker compose -f docker/docker-compose.yml --profile launcher up -d

Open http://localhost:18800 in your browser. The launcher manages the gateway process automatically.
> [!WARNING]
> The web console does not yet support authentication. Avoid exposing it to the public internet.
# Ask a question
docker compose -f docker/docker-compose.yml run --rm picoclaw-agent -m "What is 2+2?"
# Interactive mode
docker compose -f docker/docker-compose.yml run --rm picoclaw-agent

To update, pull the latest images and restart:

docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml --profile gateway up -d

> [!TIP]
> Set your API key in ~/.picoclaw/config.json. Get API keys: Volcengine (CodingPlan) (LLM) · OpenRouter (LLM) · Zhipu (LLM). Web search is optional: get a free Tavily API key (1000 free queries/month) or a Brave Search API key (2000 free queries/month).
1. Initialize
picoclaw onboard

2. Configure (~/.picoclaw/config.json)
{
"agents": {
"defaults": {
"workspace": "~/.picoclaw/workspace",
"model_name": "gpt-5.4",
"max_tokens": 8192,
"temperature": 0.7,
"max_tool_iterations": 20
}
},
"model_list": [
{
"model_name": "ark-code-latest",
"model": "volcengine/ark-code-latest",
"api_key": "sk-your-api-key"
},
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4",
"api_key": "your-api-key",
"request_timeout": 300
},
{
"model_name": "claude-sonnet-4.6",
"model": "anthropic/claude-sonnet-4.6",
"api_key": "your-anthropic-key"
}
],
"tools": {
"web": {
"brave": {
"enabled": false,
"api_key": "YOUR_BRAVE_API_KEY",
"max_results": 5
},
"tavily": {
"enabled": false,
"api_key": "YOUR_TAVILY_API_KEY",
"max_results": 5
},
"duckduckgo": {
"enabled": true,
"max_results": 5
},
"perplexity": {
"enabled": false,
"api_key": "YOUR_PERPLEXITY_API_KEY",
"max_results": 5
},
"searxng": {
"enabled": false,
"base_url": "http://your-searxng-instance:8888",
"max_results": 5
}
}
}
}

New: The model_list configuration format allows zero-code provider addition. See Model Configuration for details. request_timeout is optional and is specified in seconds; if omitted or set to <= 0, PicoClaw uses the default timeout (120s).
3. Get API Keys
- LLM Provider: OpenRouter · Zhipu · Anthropic · OpenAI · Gemini
- Web Search (optional):
- Brave Search - Paid ($5/1000 queries, ~$5-6/month)
- Perplexity - AI-powered search with chat interface
- SearXNG - Self-hosted metasearch engine (free, no API key needed)
- Tavily - Optimized for AI Agents (1000 requests/month)
- DuckDuckGo - Built-in fallback (no API key required)
Note: See config.example.json for a complete configuration template.
4. Chat
picoclaw agent -m "What is 2+2?"

That's it! You have a working AI assistant in 2 minutes.
Talk to your picoclaw through Telegram, Discord, WhatsApp, Matrix, QQ, DingTalk, LINE, or WeCom
Note: All webhook-based channels (LINE, WeCom, etc.) are served on a single shared Gateway HTTP server (gateway.host:gateway.port, default 127.0.0.1:18790). There are no per-channel ports to configure. Feishu is the exception: it uses WebSocket/SDK mode and does not use the shared HTTP webhook server.
| Channel | Setup |
|---|---|
| Telegram | Easy (just a token) |
| Discord | Easy (bot token + intents) |
| WhatsApp | Easy (native: QR scan; or bridge URL) |
| Weixin | Easy (Native QR scan) |
| Matrix | Medium (homeserver + bot access token) |
| QQ | Easy (AppID + AppSecret) |
| DingTalk | Medium (app credentials) |
| LINE | Medium (credentials + webhook URL) |
| WeCom AI Bot | Medium (Token + AES key) |
Telegram (Recommended)
1. Create a bot
- Open Telegram, search @BotFather
- Send /newbot, follow prompts
- Copy the token
2. Configure
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allow_from": ["YOUR_USER_ID"]
}
}
}

Get your user ID from @userinfobot on Telegram.
3. Run
picoclaw gateway

4. Telegram command menu (auto-registered at startup)
PicoClaw now keeps command definitions in one shared registry. On startup, Telegram automatically registers the supported bot commands (for example /start, /help, /show, /list) so the command menu and runtime behavior stay in sync.
Telegram's command-menu registration is purely channel-local discovery UX; generic command execution is handled centrally in the agent loop by the commands executor.
If command registration fails (transient network/API errors), the channel still starts and PicoClaw retries registration in the background.
Discord
1. Create a bot
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
2. Enable intents
- In the Bot settings, enable MESSAGE CONTENT INTENT
- (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data
3. Get your User ID
- Discord Settings → Advanced → enable Developer Mode
- Right-click your avatar → Copy User ID
4. Configure
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allow_from": ["YOUR_USER_ID"]
}
}
}

5. Invite the bot
- OAuth2 → URL Generator
- Scopes: bot
- Bot Permissions: Send Messages, Read Message History
- Open the generated invite URL and add the bot to your server
Optional: Group trigger mode
By default the bot responds to all messages in a server channel. To restrict responses to @-mentions only, add:
{
"channels": {
"discord": {
"group_trigger": { "mention_only": true }
}
}
}

You can also trigger by keyword prefixes (e.g. !bot):
{
"channels": {
"discord": {
"group_trigger": { "prefixes": ["!bot"] }
}
}
}

6. Run

picoclaw gateway

WhatsApp (native via whatsmeow)
PicoClaw can connect to WhatsApp in two ways:
- Native (recommended): In-process using whatsmeow; no separate bridge. Set "use_native": true and leave bridge_url empty. On first run, scan the QR code with WhatsApp (Linked Devices). The session is stored under your workspace (e.g. workspace/whatsapp/). The native channel is optional to keep the default binary small; build with -tags whatsapp_native (e.g. make build-whatsapp-native or go build -tags whatsapp_native ./cmd/...).
- Bridge: Connect to an external WebSocket bridge. Set bridge_url (e.g. ws://localhost:3001) and keep use_native false.
Configure (native)
{
"channels": {
"whatsapp": {
"enabled": true,
"use_native": true,
"session_store_path": "",
"allow_from": []
}
}
}

If session_store_path is empty, the session is stored in <workspace>/whatsapp/. Run picoclaw gateway; on first run, scan the QR code printed in the terminal with WhatsApp → Linked Devices.
Weixin (WeChat Personal)
PicoClaw supports connecting to your personal WeChat account using the official Tencent iLink API.
1. Login Run the interactive QR login flow:
picoclaw onboard weixin

Scan the printed QR code with your WeChat mobile app. On success, the token is saved to your config.
2. Configure
(Optional) Update allow_from with your WeChat User ID to restrict who can message the bot:
{
"channels": {
"weixin": {
"enabled": true,
"token": "YOUR_TOKEN",
"allow_from": ["YOUR_USER_ID"]
}
}
}

3. Run
picoclaw gateway

QQ

1. Create a bot
- Go to QQ Open Platform
- Create an application → Get AppID and AppSecret
2. Configure
{
"channels": {
"qq": {
"enabled": true,
"app_id": "YOUR_APP_ID",
"app_secret": "YOUR_APP_SECRET",
"allow_from": []
}
}
}

Set allow_from to an empty list to allow all users, or specify QQ numbers to restrict access.
3. Run
picoclaw gateway

DingTalk
1. Create a bot
- Go to Open Platform
- Create an internal app
- Copy Client ID and Client Secret
2. Configure
{
"channels": {
"dingtalk": {
"enabled": true,
"client_id": "YOUR_CLIENT_ID",
"client_secret": "YOUR_CLIENT_SECRET",
"allow_from": []
}
}
}

Set allow_from to an empty list to allow all users, or specify DingTalk user IDs to restrict access.
3. Run
picoclaw gateway

Matrix
1. Prepare bot account
- Use your preferred homeserver (e.g. https://matrix.org or self-hosted)
- Create a bot user and obtain its access token
2. Configure
{
"channels": {
"matrix": {
"enabled": true,
"homeserver": "https://matrix.org",
"user_id": "@your-bot:matrix.org",
"access_token": "YOUR_MATRIX_ACCESS_TOKEN",
"allow_from": []
}
}
}

3. Run
picoclaw gateway

For full options (device_id, join_on_invite, group_trigger, placeholder, reasoning_channel_id), see the Matrix Channel Configuration Guide.
LINE
1. Create a LINE Official Account
- Go to LINE Developers Console
- Create a provider → Create a Messaging API channel
- Copy Channel Secret and Channel Access Token
2. Configure
{
"channels": {
"line": {
"enabled": true,
"channel_secret": "YOUR_CHANNEL_SECRET",
"channel_access_token": "YOUR_CHANNEL_ACCESS_TOKEN",
"webhook_path": "/webhook/line",
"allow_from": []
}
}
}

LINE webhook is served on the shared Gateway server (gateway.host:gateway.port, default 127.0.0.1:18790).
3. Set up Webhook URL
LINE requires HTTPS for webhooks. Use a reverse proxy or tunnel:
# Example with ngrok (gateway default port is 18790)
ngrok http 18790

Then set the Webhook URL in LINE Developers Console to https://your-domain/webhook/line and enable Use webhook.
4. Run
picoclaw gateway

In group chats, the bot responds only when @mentioned. Replies quote the original message.
WeCom (Enterprise WeChat)
PicoClaw supports three types of WeCom integration:
- Option 1: WeCom Bot. Easier setup; supports group chats.
- Option 2: WeCom App (Custom App). More features and proactive messaging; private chat only.
- Option 3: WeCom AI Bot. Official AI Bot with streaming replies; supports group & private chat.
See WeCom AI Bot Configuration Guide for detailed setup instructions.
Quick Setup - WeCom AI Bot:
1. Create an AI Bot
- Go to WeCom Admin Console → AI Bot
- Create a new AI Bot → Set name, avatar, etc.
- Copy Bot ID and Secret
2. Configure
{
"channels": {
"wecom_aibot": {
"enabled": true,
"bot_id": "YOUR_BOT_ID",
"secret": "YOUR_SECRET",
"allow_from": [],
"welcome_message": "Hello! How can I help you?"
}
}
}

3. Run

picoclaw gateway

Note: WeCom AI Bot uses a streaming pull protocol, so there are no reply-timeout concerns. Long tasks (>30 seconds) automatically switch to response_url push delivery.
Connect PicoClaw to the Agent Social Network by sending a single message via the CLI or any integrated chat app.
Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat.ai
Config file: ~/.picoclaw/config.json
You can override default paths using environment variables. This is useful for portable installations, containerized deployments, or running picoclaw as a system service. These variables are independent and control different paths.
| Variable | Description | Default Path |
|---|---|---|
| `PICOCLAW_CONFIG` | Overrides the path to the configuration file. This directly tells picoclaw which config.json to load, ignoring all other locations. | `~/.picoclaw/config.json` |
| `PICOCLAW_HOME` | Overrides the root directory for picoclaw data. This changes the default location of the workspace and other data directories. | `~/.picoclaw` |
Examples:
# Run picoclaw using a specific config file
# The workspace path will be read from within that config file
PICOCLAW_CONFIG=/etc/picoclaw/production.json picoclaw gateway
# Run picoclaw with all its data stored in /opt/picoclaw
# Config will be loaded from the default ~/.picoclaw/config.json
# Workspace will be created at /opt/picoclaw/workspace
PICOCLAW_HOME=/opt/picoclaw picoclaw agent
# Use both for a fully customized setup
PICOCLAW_HOME=/srv/picoclaw PICOCLAW_CONFIG=/srv/picoclaw/main.json picoclaw gateway

PicoClaw stores data in your configured workspace (default: ~/.picoclaw/workspace):
~/.picoclaw/workspace/
├── sessions/ # Conversation sessions and history
├── memory/ # Long-term memory (MEMORY.md)
├── state/ # Persistent state (last channel, etc.)
├── cron/ # Scheduled jobs database
├── skills/ # Workspace-specific skills
├── AGENT.md # Structured agent definition and system prompt
├── SOUL.md # Agent soul
├── USER.md # User profile and preferences for this workspace
├── HEARTBEAT.md # Periodic task prompts (checked every 30 min)
└── ...
By default, skills are loaded from:
- ~/.picoclaw/workspace/skills (workspace)
- ~/.picoclaw/skills (global)
- <current-working-directory>/skills (builtin)
For advanced/test setups, you can override the builtin skills root with:
export PICOCLAW_BUILTIN_SKILLS=/path/to/skills

- Generic slash commands are executed through a single path in pkg/agent/loop.go via commands.Executor.
- Channel adapters no longer consume generic commands locally; they forward inbound text to the bus/agent path. Telegram still auto-registers supported commands at startup.
- An unknown slash command (for example /foo) passes through to normal LLM processing.
- A registered but unsupported command on the current channel (for example /show on WhatsApp) returns an explicit user-facing error and stops further processing.
PicoClaw runs in a sandboxed environment by default. The agent can only access files and execute commands within the configured workspace.
{
"agents": {
"defaults": {
"workspace": "~/.picoclaw/workspace",
"restrict_to_workspace": true
}
}
}

| Option | Default | Description |
|---|---|---|
| `workspace` | `~/.picoclaw/workspace` | Working directory for the agent |
| `restrict_to_workspace` | `true` | Restrict file/command access to workspace |
When restrict_to_workspace: true, the following tools are sandboxed:
| Tool | Function | Restriction |
|---|---|---|
| `read_file` | Read files | Only files within workspace |
| `write_file` | Write files | Only files within workspace |
| `list_dir` | List directories | Only directories within workspace |
| `edit_file` | Edit files | Only files within workspace |
| `append_file` | Append to files | Only files within workspace |
| `exec` | Execute commands | Command paths must be within workspace |
Even with restrict_to_workspace: false, the exec tool blocks these dangerous commands:
- rm -rf, del /f, rmdir /s (bulk deletion)
- format, mkfs, diskpart (disk formatting)
- dd if= (disk imaging)
- Writing to /dev/sd[a-z] (direct disk writes)
- shutdown, reboot, poweroff (system shutdown)
- Fork bomb :(){ :|:& };:
[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (path outside working dir)}
[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (dangerous pattern detected)}
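The dangerous-pattern guard described above can be sketched as a simple matcher. This is an illustration only; the pattern set and matching strategy are not PicoClaw's actual guard implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// dangerous holds patterns covering a few of the blocked commands
// listed above (illustrative subset, not the full production list).
var dangerous = []*regexp.Regexp{
	regexp.MustCompile(`\brm\s+-rf\b`),
	regexp.MustCompile(`\bmkfs\b`),
	regexp.MustCompile(`\bdd\s+if=`),
	regexp.MustCompile(`/dev/sd[a-z]`),
	regexp.MustCompile(`:\(\)\{ :\|:& \};:`), // fork bomb
}

// blocked reports whether a command line matches any dangerous pattern.
func blocked(cmd string) bool {
	for _, re := range dangerous {
		if re.MatchString(cmd) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(blocked("rm -rf /"))       // true
	fmt.Println(blocked("ls -la"))         // false
	fmt.Println(blocked("dd if=/dev/sda")) // true
}
```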
If you need the agent to access paths outside the workspace:
Method 1: Config file
{
"agents": {
"defaults": {
"restrict_to_workspace": false
}
}
}

Method 2: Environment variable
export PICOCLAW_AGENTS_DEFAULTS_RESTRICT_TO_WORKSPACE=false
⚠️ Warning: Disabling this restriction allows the agent to access any path on your system. Use with caution in controlled environments only.
The restrict_to_workspace setting applies consistently across all execution paths:
| Execution Path | Security Boundary |
|---|---|
| Main Agent | restrict_to_workspace ✅ |
| Subagent / Spawn | Inherits same restriction ✅ |
| Heartbeat tasks | Inherits same restriction ✅ |
All paths share the same workspace restriction — there's no way to bypass the security boundary through subagents or scheduled tasks.
PicoClaw can perform periodic tasks automatically. Create a HEARTBEAT.md file in your workspace:
# Periodic Tasks
- Check my email for important messages
- Review my calendar for upcoming events
- Check the weather forecast

The agent will read this file every 30 minutes (configurable) and execute any tasks using the available tools.
For long-running tasks (web search, API calls), use the spawn tool to create a subagent:
# Periodic Tasks
## Quick Tasks (respond directly)
- Report current time
## Long Tasks (use spawn for async)
- Search the web for AI news and summarize
- Check email and report important messages

Key behaviors:
| Feature | Description |
|---|---|
| spawn | Creates async subagent, doesn't block heartbeat |
| Independent context | Subagent has its own context, no session history |
| message tool | Subagent communicates with user directly via message tool |
| Non-blocking | After spawning, heartbeat continues to next task |
Heartbeat triggers
↓
Agent reads HEARTBEAT.md
↓
For long task: spawn subagent
↓ ↓
Continue to next task Subagent works independently
↓ ↓
All tasks done Subagent uses "message" tool
↓ ↓
Respond HEARTBEAT_OK User receives result directly
The subagent has access to tools (message, web_search, etc.) and can communicate with the user independently without going through the main agent.
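The non-blocking spawn flow above maps naturally onto a goroutine with a message channel. This is a sketch of the pattern, not PicoClaw's actual spawn tool; all names are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// spawn launches a subagent in its own goroutine with a fresh context;
// results go straight to the user channel via the message tool,
// never back through the heartbeat loop.
func spawn(task string, message chan<- string) {
	go func() {
		time.Sleep(20 * time.Millisecond) // simulate a long web search
		message <- "result for: " + task
	}()
}

func main() {
	userChannel := make(chan string, 1)

	spawn("search AI news", userChannel) // returns immediately, does not block
	fmt.Println("heartbeat continues to next task")
	fmt.Println("heartbeat done: HEARTBEAT_OK")

	fmt.Println(<-userChannel) // user receives the subagent's message directly
}
```

Because `spawn` only fires the goroutine, the heartbeat finishes all quick tasks and reports HEARTBEAT_OK while the long task is still running.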
Configuration:
{
"heartbeat": {
"enabled": true,
"interval": 30
}
| Option | Default | Description |
|---|---|---|
| `enabled` | `true` | Enable/disable heartbeat |
| `interval` | `30` | Check interval in minutes (min: 5) |
Environment variables:
- PICOCLAW_HEARTBEAT_ENABLED=false to disable
- PICOCLAW_HEARTBEAT_INTERVAL=60 to change the interval
> [!NOTE]
> Groq provides free voice transcription via Whisper. If configured, audio messages from any channel are automatically transcribed at the agent level.
| Provider | Purpose | Get API Key |
|---|---|---|
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `zhipu` | LLM (Zhipu direct) | bigmodel.cn |
| `volcengine` | LLM (Volcengine direct) | volcengine.com |
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `qwen` | LLM (Qwen direct) | dashscope.console.aliyun.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `cerebras` | LLM (Cerebras direct) | cerebras.ai |
| `vivgrid` | LLM (Vivgrid direct) | vivgrid.com |
What's New? PicoClaw now uses a model-centric configuration approach. Simply specify the vendor/model format (e.g., zhipu/glm-4.7) to add new providers; no code changes are required!
This design also enables multi-agent support with flexible provider selection:
- Different agents, different providers: Each agent can use its own LLM provider
- Model fallbacks: Configure primary and fallback models for resilience
- Load balancing: Distribute requests across multiple endpoints
- Centralized configuration: Manage all providers in one place
| Vendor | `model` Prefix | Default API Base | Protocol | API Key |
|---|---|---|---|---|
| OpenAI | `openai/` | `https://api.openai.com/v1` | OpenAI | Get Key |
| Anthropic | `anthropic/` | `https://api.anthropic.com/v1` | Anthropic | Get Key |
| Zhipu AI (GLM) | `zhipu/` | `https://open.bigmodel.cn/api/paas/v4` | OpenAI | Get Key |
| DeepSeek | `deepseek/` | `https://api.deepseek.com/v1` | OpenAI | Get Key |
| Google Gemini | `gemini/` | `https://generativelanguage.googleapis.com/v1beta` | OpenAI | Get Key |
| Groq | `groq/` | `https://api.groq.com/openai/v1` | OpenAI | Get Key |
| Moonshot | `moonshot/` | `https://api.moonshot.cn/v1` | OpenAI | Get Key |
| Qwen (Tongyi Qianwen) | `qwen/` | `https://dashscope.aliyuncs.com/compatible-mode/v1` | OpenAI | Get Key |
| NVIDIA | `nvidia/` | `https://integrate.api.nvidia.com/v1` | OpenAI | Get Key |
| Ollama | `ollama/` | `http://localhost:11434/v1` | OpenAI | Local (no key needed) |
| OpenRouter | `openrouter/` | `https://openrouter.ai/api/v1` | OpenAI | Get Key |
| LiteLLM Proxy | `litellm/` | `http://localhost:4000/v1` | OpenAI | Your LiteLLM proxy key |
| vLLM | `vllm/` | `http://localhost:8000/v1` | OpenAI | Local |
| Cerebras | `cerebras/` | `https://api.cerebras.ai/v1` | OpenAI | Get Key |
| VolcEngine (Doubao) | `volcengine/` | `https://ark.cn-beijing.volces.com/api/v3` | OpenAI | Get Key |
| Shengsuanyun (神算云) | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
| BytePlus | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | Get Key |
| Vivgrid | `vivgrid/` | `https://api.vivgrid.com/v1` | OpenAI | Get Key |
| LongCat | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | Get Key |
| ModelScope (魔搭) | `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | Get Token |
| Antigravity | `antigravity/` | Google Cloud | Custom | OAuth only |
| GitHub Copilot | `github-copilot/` | `localhost:4321` | gRPC | - |
{
"model_list": [
{
"model_name": "ark-code-latest",
"model": "volcengine/ark-code-latest",
"api_key": "sk-your-api-key"
},
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4",
"api_key": "sk-your-openai-key"
},
{
"model_name": "claude-sonnet-4.6",
"model": "anthropic/claude-sonnet-4.6",
"api_key": "sk-ant-your-key"
},
{
"model_name": "glm-4.7",
"model": "zhipu/glm-4.7",
"api_key": "your-zhipu-key"
}
],
"agents": {
"defaults": {
"model": "gpt-5.4"
}
}
}

OpenAI
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4",
"api_key": "sk-..."
}

VolcEngine (Doubao)
{
"model_name": "ark-code-latest",
"model": "volcengine/ark-code-latest",
"api_key": "sk-..."
}

Zhipu AI (GLM)
{
"model_name": "glm-4.7",
"model": "zhipu/glm-4.7",
"api_key": "your-key"
}

DeepSeek
{
"model_name": "deepseek-chat",
"model": "deepseek/deepseek-chat",
"api_key": "sk-..."
}

Anthropic (with API key)
{
"model_name": "claude-sonnet-4.6",
"model": "anthropic/claude-sonnet-4.6",
"api_key": "sk-ant-your-key"
}

Run picoclaw auth login --provider anthropic to paste your API token.
Anthropic Messages API (native format)
For direct Anthropic API access or custom endpoints that only support Anthropic's native message format:
{
"model_name": "claude-opus-4-6",
"model": "anthropic-messages/claude-opus-4-6",
"api_key": "sk-ant-your-key",
"api_base": "https://api.anthropic.com"
}

Use the anthropic-messages protocol when:

- Using third-party proxies that only support Anthropic's native /v1/messages endpoint (not the OpenAI-compatible /v1/chat/completions)
- Connecting to services like MiniMax or Synthetic that require Anthropic's native message format
- The existing anthropic protocol returns 404 errors (indicating the endpoint doesn't support the OpenAI-compatible format)

Note: The anthropic protocol uses the OpenAI-compatible format (/v1/chat/completions), while anthropic-messages uses Anthropic's native format (/v1/messages). Choose based on your endpoint's supported format.
Ollama (local)
{
"model_name": "llama3",
"model": "ollama/llama3"
}

Custom Proxy/API
{
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-...",
"request_timeout": 300
}

LiteLLM Proxy
{
"model_name": "lite-gpt4",
"model": "litellm/lite-gpt4",
"api_base": "http://localhost:4000/v1",
"api_key": "sk-..."
}PicoClaw strips only the outer litellm/ prefix before sending the request, so proxy aliases like litellm/lite-gpt4 send lite-gpt4, while litellm/openai/gpt-4o sends openai/gpt-4o.
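The prefix-stripping behavior described above can be sketched in one line of Go (the helper name is illustrative; the behavior matches the examples in the sentence above):

```go
package main

import (
	"fmt"
	"strings"
)

// stripLiteLLMPrefix removes only the outermost "litellm/" prefix,
// leaving any inner vendor prefix intact for the proxy to route on.
func stripLiteLLMPrefix(model string) string {
	return strings.TrimPrefix(model, "litellm/")
}

func main() {
	fmt.Println(stripLiteLLMPrefix("litellm/lite-gpt4"))     // lite-gpt4
	fmt.Println(stripLiteLLMPrefix("litellm/openai/gpt-4o")) // openai/gpt-4o
}
```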
Configure multiple endpoints under the same model name, and PicoClaw will automatically round-robin between them:
{
"model_list": [
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4",
"api_base": "https://api1.example.com/v1",
"api_key": "sk-key1"
},
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4",
"api_base": "https://api2.example.com/v1",
"api_key": "sk-key2"
}
]
}

The old providers configuration is deprecated but still supported for backward compatibility.
Old Config (deprecated):
{
"providers": {
"zhipu": {
"api_key": "your-key",
"api_base": "https://open.bigmodel.cn/api/paas/v4"
}
},
"agents": {
"defaults": {
"provider": "zhipu",
"model": "glm-4.7"
}
}
}

New Config (recommended):
{
"model_list": [
{
"model_name": "glm-4.7",
"model": "zhipu/glm-4.7",
"api_key": "your-key"
}
],
"agents": {
"defaults": {
"model": "glm-4.7"
}
}
}

For a detailed migration guide, see docs/migration/model-list-migration.md.
PicoClaw routes providers by protocol family:
- OpenAI-compatible protocol: OpenRouter, OpenAI-compatible gateways, Groq, Zhipu, and vLLM-style endpoints.
- Anthropic protocol: Claude-native API behavior.
- Codex/OAuth path: OpenAI OAuth/token authentication route.
This keeps the runtime lightweight while making new OpenAI-compatible backends mostly a config operation (api_base + api_key).
Zhipu
1. Get API key and base URL
- Get API key
2. Configure
{
"agents": {
"defaults": {
"workspace": "~/.picoclaw/workspace",
"model": "glm-4.7",
"max_tokens": 8192,
"temperature": 0.7,
"max_tool_iterations": 20
}
},
"providers": {
"zhipu": {
"api_key": "Your API Key",
"api_base": "https://open.bigmodel.cn/api/paas/v4"
}
}
}

3. Run
picoclaw agent -m "Hello"

Full config example
{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
}
},
"session": {
"dm_scope": "per-channel-peer",
"backlog_limit": 20
},
"providers": {
"openrouter": {
"api_key": "sk-or-v1-xxx"
},
"groq": {
"api_key": "gsk_xxx"
}
},
"channels": {
"telegram": {
"enabled": true,
"token": "123456:ABC...",
"allow_from": ["123456789"]
},
"discord": {
"enabled": true,
"token": "",
"allow_from": [""]
},
"whatsapp": {
"enabled": false,
"bridge_url": "ws://localhost:3001",
"use_native": false,
"session_store_path": "",
"allow_from": []
},
"feishu": {
"enabled": false,
"app_id": "cli_xxx",
"app_secret": "xxx",
"encrypt_key": "",
"verification_token": "",
"allow_from": []
},
"qq": {
"enabled": false,
"app_id": "",
"app_secret": "",
"allow_from": []
}
},
"tools": {
"web": {
"brave": {
"enabled": false,
"api_key": "BSA...",
"max_results": 5
},
"duckduckgo": {
"enabled": true,
"max_results": 5
},
"perplexity": {
"enabled": false,
"api_key": "",
"max_results": 5
},
"searxng": {
"enabled": false,
"base_url": "http://localhost:8888",
"max_results": 5
}
},
"cron": {
"exec_timeout_minutes": 5
}
},
"heartbeat": {
"enabled": true,
"interval": 30
}
}

| Command | Description |
|---|---|
| `picoclaw onboard` | Initialize config & workspace |
| `picoclaw onboard weixin` | Connect WeChat account via QR |
| `picoclaw agent -m "..."` | Chat with the agent |
| `picoclaw agent` | Interactive chat mode |
| `picoclaw gateway` | Start the gateway |
| `picoclaw status` | Show status |
| `picoclaw version` | Show version info |
| `picoclaw model` | Show or change the default model |
| `picoclaw cron list` | List all scheduled jobs |
| `picoclaw cron add ...` | Add a scheduled job |
| `picoclaw cron disable` | Disable a scheduled job |
| `picoclaw cron remove` | Remove a scheduled job |
| `picoclaw skills list` | List installed skills |
| `picoclaw skills install` | Install a skill |
| `picoclaw migrate` | Migrate data from older versions |
| `picoclaw auth login` | Authenticate with providers |
PicoClaw supports scheduled reminders and recurring tasks through the cron tool:
- One-time reminders: "Remind me in 10 minutes" → triggers once after 10min
- Recurring tasks: "Remind me every 2 hours" → triggers every 2 hours
- Cron expressions: "Remind me at 9am daily" → uses cron expression
PRs welcome! The codebase is intentionally small and readable. 🤗
See our full Community Roadmap.
A developer group is being set up; join after your first merged PR!
User Groups:
Discord: https://discord.gg/V4sAZ9XWpN







