A cross-platform personal assistant agent inspired by OpenClaw, designed for Windows PCs. It bridges mobile QQ and Windows capabilities so users can use natural language to plan and execute tasks end-to-end across devices.
WeClaw can also be accessed via a CLI, and can be customized to integrate with platforms such as WhatsApp, Discord, Telegram, and more.
- 2026-02-03 🎉 WeClaw is now open source.
- 2026-02-23 🖥️ Added a graphical Agent Console (`angel_console/`) for unified operations:
  - Chat (SSE streaming + ReAct trace)
  - Voice input (browser recording + local transcription)
  - Search tasks (session retrieval and context navigation)
  - Channels (Web / CLI / QQ / Discord)
  - Scheduled tasks (Cron) and heartbeat
  - Skills management
  - Model configuration and switching
  - Model billing and call audit (token statistics)
- 2026-03-07 🔌 Added an integrated MCP stack:
  - Local and remote MCP client runtime
  - Web Console MCP discovery, configuration, and runtime management
- 2026-03-09 🚪 Unified the project startup flow around the Web Console:
  - Added `entry_console.py` as the recommended root entrypoint
  - Added `python -m angel_console` package startup
  - Added `channels/` direct channel entrypoints for CLI / QQ / Discord
- Cross-platform personal agent: mobile QQ ↔ Windows PC collaboration
- Multi-agent architecture: parallel decomposition and execution for complex tasks
- Intelligent routing: choose `ReAct` or `ReCAP` based on task complexity
- Context engineering: compression, offloading, and filesystem support to avoid context overflow
- Secure sandbox and async execution: more stable long-running task pipelines
- Skills integration: unified mechanism supporting customization and extension
- Task state management: supports long-chain, multi-turn task automation
- Stable execution strategy: plan first, then act
- Experience learning (integration in progress): the agent improves with usage
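The routing idea above can be sketched as a simple heuristic. Everything in this snippet (the marker list, the threshold, the function names) is an illustrative assumption, not WeClaw's actual routing code:

```python
# Illustrative sketch of complexity-based routing (NOT WeClaw's real implementation).
# Assumption: short single-step requests go to ReAct, while requests that imply
# multi-step planning go to ReCAP.

def estimate_complexity(task: str) -> int:
    """Crude heuristic: count coordination words that suggest multi-step work."""
    markers = ["then", "after", "schedule", "every", "report", ";"]
    return sum(task.lower().count(m) for m in markers)

def route(task: str, threshold: int = 2) -> str:
    """Pick an execution strategy name for the task."""
    return "ReCAP" if estimate_complexity(task) >= threshold else "ReAct"
```

A real router would more likely score the task with the LLM itself; the point is only that the strategy choice is a function of estimated task complexity.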
| 🔎 Information Gathering & Report Generation | ⏰ Scheduled Tasks & Automation | 🧩 Automatic Skills Creation | 💻 Coding & Remote Execution |
|---|---|---|---|
- Information gathering and organization
- Report and document generation
- Scheduled tasks and automation
- Cross-device development and execution of coding tasks
The project now includes a local graphical control plane in `angel_console/` to manage core agent capabilities in one place.
By default, the console binds to 127.0.0.1 and is intended for local development and operations.
Main modules:
- Chat: session management, streaming responses, and tool trace visualization
- Voice Input: browser-side recording with local speech-to-text (Chinese and English)
- Search Tasks: cross-session retrieval with fast jump to relevant context
- Channels: unified channel configuration and status for Web / CLI / QQ / Discord
- Cron & Heartbeat: periodic jobs, manual triggers, and runtime status controls
- Skills: discover and manage available skills from the workspace
- Models: configure multiple providers/profiles and switch active runtime model
- Model Billing: inspect call volume, token usage, failure rate, and call-level details
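Since Chat streams responses over SSE, a client has to split the `text/event-stream` body into events. The sketch below is a generic, stdlib-only parser of SSE `data:` lines for illustration only; the console's actual endpoint paths and event schema are not documented in this README and are not assumed here:

```python
# Minimal SSE (text/event-stream) parser sketch, for illustration only.
# Per the SSE format, an event's data lines are prefixed with "data:" and the
# event is terminated by a blank line.

def parse_sse(stream_text: str) -> list[str]:
    """Collect the data payload of each SSE event in the given stream text."""
    events: list[str] = []
    data_lines: list[str] = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # stream ended without a trailing blank line
        events.append("\n".join(data_lines))
    return events
```

A real client would read the HTTP response incrementally rather than buffering the whole stream, but the event-splitting logic is the same.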
The Web Console is now the recommended primary entrypoint for the project.

```bash
python entry_console.py
```

Alternative package-style startup:

```bash
python -m angel_console
```

Then open http://127.0.0.1:7788 in your browser.
Recommended workflow: start from the Web Console first, then use the Channels page to manage CLI / QQ / Discord.
Direct channel scripts are still supported for advanced use:
```bash
python channels/cli.py
python channels/qq.py
python channels/discord.py
```

Legacy compatibility wrappers are also still available:

```bash
python entry_cli.py
python entry_qq.py
python entry_discord.py
```

You can direct the agent anytime, anywhere (on the subway, while traveling, or in bed):
- Fast collection and structured organization of work/study materials
- Automatic generation of reports, checklists, and summaries
- Multi-step tasks that require cross-device collaboration
- Personal development workflows and script-based automation
Environment variables:
- `LLM_API_KEY` (required, for model calls)
- `LLM_BASE_URL` (optional; default chosen by `LLM_PROVIDER`)
- `LLM_MODEL` (optional; default chosen by `LLM_PROVIDER`)
- `LLM_PROVIDER` (optional, `openai` | `anthropic` | `dashscope`; can be auto-detected)
- `BRAVE_API_KEY` (optional, for web search)
- `ZHIPU_API_KEY` (optional, for web search)
- `BOTPY_APPID` (required for the QQ entry)
- `BOTPY_SECRET` (required for the QQ entry)
- `LITTLE_ANGEL_AGENT_WORKSPACE` (optional, workspace path for the agent)
Create `local_secrets.yaml` in the project root and fill in your keys:

```yaml
LLM_API_KEY: ""
LLM_BASE_URL: ""
LLM_MODEL: ""
LLM_PROVIDER: ""
ZHIPU_API_KEY: ""
BOTPY_APPID: ""
BOTPY_SECRET: ""
```

Note: get the QQ bot APPID and SECRET by registering on the Tencent QQ Open Platform and creating a bot: https://q.qq.com/#/
```bash
python entry_console.py
```

Or:

```bash
python -m angel_console
```

Open http://127.0.0.1:7788 after startup.
CLI entry:

```bash
python entry_cli.py
```

Direct channel path:

```bash
python channels/cli.py
```

QQ entry:

```bash
python entry_qq.py
```

Direct channel path:

```bash
python channels/qq.py
```

Discord entry:

```bash
python entry_discord.py
```

Direct channel path:

```bash
python channels/discord.py
```

Project structure:

- `entry_qq.py`: QQ direct message entry
- `entry_cli.py`: CLI entry
- `little_angel_bot.py`: core bot logic
- `tools/`: tool capabilities
- `skills/`: Skills integration
Additional entrypoint files introduced for the Web Console workflow:
- `entry_console.py`: unified root entry for the browser console
- `channels/`: direct channel entrypoints for CLI / QQ / Discord
- Add or modify skills in `skills/`
- Add new tool capabilities in `tools/`
- Customize behavior through the unified Skills mechanism
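To make the extension points above concrete, here is a hypothetical shape for a file dropped into `skills/`. WeClaw's real skill interface is not documented in this README, so every name in this sketch (`SKILL_META`, `run`, the returned fields) is an assumption used only to illustrate the idea of a self-describing skill module:

```python
# Hypothetical skill module shape -- WeClaw's actual skill interface is not
# shown in the README, so all names here are illustrative assumptions.

SKILL_META = {
    "name": "word_count",
    "description": "Count total and unique words in a piece of text.",
}

def run(text: str) -> dict:
    """Entry point a hypothetical skill loader might call with user input."""
    words = text.split()
    return {"words": len(words), "unique": len(set(words))}
```

Check the existing modules under `skills/` for the actual metadata and entry-point conventions before writing your own.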
MIT





