A desktop pet that monitors your AI coding agents in real-time. Never miss a permission prompt again.
Native Swift + AppKit floating window (~15MB RAM), always on top. Changes mood based on agent activity. Double-click to open the dashboard panel.
AI coding agents like Claude Code, Cursor, and Codex often pause to ask for permission — and if you're browsing the web or watching a video, you'll miss it. Your agent sits idle, waiting. Minutes wasted.
Burro Watch fixes this. When your agent needs authorization, the pet changes to "Waiting" state and sends a macOS notification. Click the notification to jump straight to the terminal. No more context-switching to check if your agent is stuck.
Your pet reflects what the agent is doing — thinking, coding, waiting for permission, or done.
- macOS 12+ (Monterey or later)
- Node.js 20+
- Xcode Command Line Tools (provides the Swift compiler; the native binary is auto-built on first launch)
```shell
xcode-select --install   # if not already installed
```
npm (recommended):

```shell
npm install -g burro-watch
burro-watch
```

From source:

```shell
git clone https://github.com/ryan2brich/burro-watch.git
cd burro-watch
npm install
npm run desktop
```

That's it. The native binary is auto-built on first launch.
| Agent | Terminal |
|---|---|
| Claude Code | Bash terminal, VS Code integrated terminal |
More agents (Cursor, Codex CLI, etc.) are planned — see Roadmap.
- Real-time monitoring of AI coding agent sessions
- Token usage + cost tracking with live progress bars
- 10 pet states reflecting agent activity (Idle, Thinking, Coding, Running, Reading, Waiting, Confused, Error, Success, Overload)
- Anomaly detection — alerts when agent is looping, stuck, or burning tokens
- Permission detection — notifies you when the agent needs authorization, click to jump to terminal
- Zero config — auto-discovers active sessions across all supported agents
- CLI mode — run headless for server-only usage (`burro-watch --headless`)
- 7+ pet skins — Neko, Robo, Ghost, Fox, Panda, Octopus, Slime + Burro
- i18n — English / Chinese (switch via the menu bar icon or `~/.burro-watch/config.json`)
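As a hypothetical illustration, the language setting in `~/.burro-watch/config.json` might look like the fragment below; the exact key name is an assumption, not documented here:

```json
{
  "language": "en"
}
```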
Double-click the pet to open the dashboard panel. It displays 5 cards:
The current agent state, shown as an emoji + label. There are 10 possible states:
| State | Meaning |
|---|---|
| 💤 Idle | No activity |
| 🧠 Thinking | LLM generating response |
| ⌨️ Coding | Writing/editing files |
| 🏃 Running | Executing commands |
| 📖 Reading | Reading files |
| ✋ Waiting | Agent waiting for permission |
| ❓ Confused | Agent stuck or looping |
| ❌ Error | Something went wrong |
| ✅ Success | Task completed |
| 🔥 Overload | High token usage (>100k) |
Cumulative API cost for the current session, displayed in USD.
- Progress bar — token usage shown as a fraction of the 200k context window (green → gold → red)
- Input / Output / Cache Read — token breakdown
- Turns — number of completed conversation turns
- Model — which model is being used
- Directory — working directory of the agent
- Duration — total session runtime
- Avg Turn — average response time per turn
- Tok/Min — token throughput rate
- Cache Hit — cache effectiveness percentage
- Tools — top 5 most-called tools with counts
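The progress bar's color shift (green → gold → red) can be sketched as simple thresholding on the token ratio. This is a hypothetical sketch; the thresholds below are assumptions, not Burro Watch's actual values:

```typescript
// Sketch of how a token-usage bar could pick its color.
// CONTEXT_WINDOW matches the 200k window mentioned above;
// the 0.5 / 0.8 thresholds are illustrative assumptions.
const CONTEXT_WINDOW = 200_000;

function barColor(tokensUsed: number): "green" | "gold" | "red" {
  const ratio = tokensUsed / CONTEXT_WINDOW;
  if (ratio < 0.5) return "green"; // plenty of headroom
  if (ratio < 0.8) return "gold";  // context filling up
  return "red";                    // near the limit
}

console.log(barColor(40_000)); // "green"
```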
```shell
burro-watch              # Desktop pet (macOS)
burro-watch --headless   # Server only, no GUI
burro-watch --port 7778  # Custom port
burro-watch --help       # Show all options
```

```shell
git clone https://github.com/ryan2brich/burro-watch.git
cd burro-watch
npm install
npm run desktop    # Launch the app
npm test           # Run tests (153 tests)
npx tsc --noEmit   # Type check
```

| Command | Description |
|---|---|
| `npm run desktop` | Launch the app (auto-builds if needed) |
| `npm run desktop:build` | Manually rebuild the Swift binary |
| `npm run desktop:server` | Run the REST API server only |
| `npm test` | Run tests |
Multi-agent support
- Claude Code
- Codex CLI
- Cursor
- OpenCode / Crush
- Windsurf, Gemini CLI
- Parser plugin system — add new agents via AgentProvider interface
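The planned `AgentProvider` interface is not yet documented, but a plugin contract of this kind might look like the sketch below. All names and signatures here are hypothetical assumptions, not the project's actual API:

```typescript
// Hypothetical sketch of an AgentProvider plugin contract.
// The 10 pet states mirror the table earlier in this README.
type PetState =
  | "Idle" | "Thinking" | "Coding" | "Running" | "Reading"
  | "Waiting" | "Confused" | "Error" | "Success" | "Overload";

interface AgentProvider {
  name: string;                              // e.g. "codex-cli" (assumed)
  discoverSessions(): string[];              // IDs of active sessions
  currentState(sessionId: string): PetState; // state for one session
}

// Minimal stub provider, for illustration only
const stub: AgentProvider = {
  name: "stub-agent",
  discoverSessions: () => ["session-1"],
  currentState: () => "Idle",
};

console.log(stub.currentState("session-1")); // "Idle"
```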
Cross-platform
- Windows (native tray app)
- Linux (Electron or GTK)
Developer features
- REST API documentation
- WebSocket push (replace polling)
- CLI mode (`burro-watch --headless`)
- Custom alert rules & webhooks
Contributions are welcome! Please open an issue first to discuss what you would like to change.
MIT





