Live AI coding sessions, in your MacBook Pro's dynamic notch.
Always-on HUD over the macOS dynamic notch + menu-bar surface. Shows which sessions are awaiting your input, which are stuck on a tool call, and your top open todos — without switching windows.
When you hover over the notch, you get an expanded HUD with:
- Sessions sidebar (right peek) — every live AI coding agent session your machine is running. Each row shows (see the colour-mapping sketch after this list):
  - Outer ring colour = provider (Claude orange-amber, Aider green, Cursor blue, ChatGPT teal-green, custom = grey)
  - Inner dot colour = refined status (`awaiting_user_input` orange, `tool_pending` blue, `crashed` red, `working` green, `idle` grey)
  - Tooltip = provider display name (e.g. "Claude Code")
  - Click → opens the orchestrator dashboard tab focused on that PID
- Todo strip (thin row, top or bottom) — top-3 open todos from your Apple Reminders list. Tap = mark complete. Long-press = reassign to a different session.
- Aura + Orb visualisations — voice activity feedback (driven by SSE events).
- Live transcription bubble — local on-device Apple speech recognition during voice input.
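A minimal sketch of how that row colouring could map onto SwiftUI. The `Provider` and `SessionStatus` enums below are hypothetical stand-ins inferred from the list above, not the app's actual types:

```swift
import SwiftUI

// Hypothetical enums mirroring the provider / status lists above.
enum Provider { case claude, aider, cursor, chatGPT, custom }
enum SessionStatus { case awaitingUserInput, toolPending, crashed, working, idle }

extension Provider {
    /// Outer ring colour: one hue per provider.
    var ringColor: Color {
        switch self {
        case .claude:  return .orange
        case .aider:   return .green
        case .cursor:  return .blue
        case .chatGPT: return .teal
        case .custom:  return .gray
        }
    }
}

extension SessionStatus {
    /// Inner dot colour: one hue per refined status.
    var dotColor: Color {
        switch self {
        case .awaitingUserInput: return .orange
        case .toolPending:       return .blue
        case .crashed:           return .red
        case .working:           return .green
        case .idle:              return .gray
        }
    }
}
```

Keeping the two mappings separate mirrors the HUD's two-channel encoding: the ring answers "which agent?", the dot answers "what is it doing?".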
The notch widget is built on DynamicNotchKit, the de facto Swift API for driving the dynamic notch on notch-equipped MacBook Pros.
It started as Phase 2 of an internal multi-channel router (Jarvis) and was extracted because the HUD is genuinely reusable — anyone running multiple AI coding agents in parallel benefits from a unified notch surface, regardless of which backend orchestrates the sessions. Three consumers, one current and two planned:
- Jarvis Router — current default backend. HTTP at `localhost:3340`.
- Topics App — future consumer. Will run its own backend.
- Standalone with `agent-conductor` — planned for v0.2. The CLI in `agent-conductor` will gain a `watch` subcommand that streams JSON Lines on stdout; this app will spawn it as a subprocess (see the sketch below). Zero HTTP, zero ports, zero config.
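A rough sketch of that planned subprocess mode, assuming the v0.2 `watch --format json-lines` interface described above (the binary path and flag shape are assumptions, not shipped behaviour):

```swift
import Foundation

// Spawn the planned `agent-conductor watch` subcommand (v0.2, not shipped yet)
// and consume its JSON-Lines stdout. The install path is an assumption.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/local/bin/agent-conductor")
process.arguments = ["watch", "--format", "json-lines"]

let pipe = Pipe()
process.standardOutput = pipe
try process.run()

// Each stdout line is one JSON event: decode it and hand it onward.
for try await line in pipe.fileHandleForReading.bytes.lines {
    let event = try JSONSerialization.jsonObject(with: Data(line.utf8))
    print("event:", event)  // in the real app: publish onto NotchEventBus
}
```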
For now, build from source:

```sh
git clone https://github.com/zorahrel/agent-notch.git
cd agent-notch
swift build -c release
cp -r .build/release/agent-notch /Applications/agent-notch
```

A signed `.app` bundle + Homebrew Cask are on the v0.2 roadmap.
The app reads its backend URL from an environment variable.

```sh
# Default (matches Jarvis Router setup):
AGENT_NOTCH_BACKEND_URL=http://localhost:3340

# Topics App on a different port:
AGENT_NOTCH_BACKEND_URL=http://localhost:4200

# Remote backend (e.g. over an SSH tunnel):
AGENT_NOTCH_BACKEND_URL=http://127.0.0.1:9999
```

The backend must expose:
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/notch/stream` | GET (SSE) | Push channel for all real-time events (sessions, todos, voice) |
| `/api/notch/send` | POST | Submit a user-typed message to the chat backend |
| `/api/notch/prefs` | GET/POST | Read & persist user preferences (UI density, theme) |
| `/api/notch/voice` | POST (multipart) | Upload an audio blob; returns the transcription |
| `/api/notch/abort` | POST | Cancel the in-flight LLM call |
| `/api/local-sessions` | GET | List live AI coding sessions (drives the sidebar) |
| `/api/todos/<id>` | GET/PATCH | Read / reassign a todo |
| `/api/todos/<id>/complete` | POST | Mark a todo done |
Reference implementations live in `jarvis-claudecode` (router/dashboard) and the `examples/` folder of `agent-conductor`.
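To make the contract concrete, here is a minimal Swift sketch of a client for the stream endpoint, resolving the URL from `AGENT_NOTCH_BACKEND_URL` as described above. This is illustrative glue code, not the app's actual networking layer:

```swift
import Foundation

// Resolve the backend URL, falling back to the Jarvis Router default.
let base = ProcessInfo.processInfo.environment["AGENT_NOTCH_BACKEND_URL"]
    ?? "http://localhost:3340"

var request = URLRequest(url: URL(string: "\(base)/api/notch/stream")!)
request.setValue("text/event-stream", forHTTPHeaderField: "Accept")

// SSE is long-lived lines over HTTP; `data:` lines carry the payloads.
let (bytes, _) = try await URLSession.shared.bytes(for: request)
for try await line in bytes.lines where line.hasPrefix("data:") {
    let payload = line.dropFirst("data:".count).trimmingCharacters(in: .whitespaces)
    print("event payload:", payload)  // in the real app: decode and dispatch
}
```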
- `OrchestratorBackend` — a Swift class that spawns `agent-conductor watch --format json-lines` and pipes stdout into the event bus
- `NotchBackend` protocol so users can pick HTTP vs subprocess in the Preferences pane (sketched below)
- Stop treating the `AGENT_NOTCH_BACKEND_URL` env var as the only config — add a `~/Library/Application Support/agent-notch/config.json`
- Pre-built signed `.app` bundle via GitHub Releases
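A sketch of what that `NotchBackend` split could look like. The protocol shape, the `NotchEvent` type, and both conformer bodies are illustrative guesses at the roadmap item, not committed API:

```swift
import Foundation

// Hypothetical payload type carried on the event bus.
struct NotchEvent: Decodable { let kind: String }

// One protocol, two transports, selectable in the Preferences pane.
protocol NotchBackend {
    func events() -> AsyncThrowingStream<NotchEvent, Error>
}

// Wave-1 transport: SSE from the configured HTTP backend.
struct HTTPBackend: NotchBackend {
    let baseURL: URL
    func events() -> AsyncThrowingStream<NotchEvent, Error> {
        AsyncThrowingStream { continuation in
            // ...URLSession.bytes(for:) loop over `data:` lines,
            // as in the SSE sketch above...
            continuation.finish()
        }
    }
}

// Wave-2 transport: `agent-conductor watch` as a child process.
struct OrchestratorBackend: NotchBackend {
    func events() -> AsyncThrowingStream<NotchEvent, Error> {
        AsyncThrowingStream { continuation in
            // ...Process + Pipe JSON-Lines loop,
            // as in the subprocess sketch above...
            continuation.finish()
        }
    }
}
```

The UI would then depend only on `NotchBackend`, so switching transports in Preferences becomes a one-line swap of the concrete type.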
- `ChatProvider` protocol: `ClaudeChatProvider`, `OpenAIChatProvider` (sketched below)
- Subscription-based auth (Claude.ai web session, ChatGPT web session) — no per-token API keys required for users with a subscription
- Per-provider chat tab in the expanded notch view
- Tool exposure: `agent-conductor` primitives (`snapshot`, `inject`, `todos`) become callable tools that any chat provider can invoke through function calling / MCP
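One plausible shape for that provider abstraction and its tool hook. Every name below (`ChatProvider`, `AgentTool`, the method signature) is a hypothetical illustration of the roadmap item:

```swift
import Foundation

// Hypothetical: one callable tool, backed by an agent-conductor
// primitive (snapshot / inject / todos).
struct AgentTool {
    let name: String
    let run: ([String: String]) async throws -> String
}

// Hypothetical provider protocol: every provider streams reply chunks
// and receives the same tool set for function calling.
protocol ChatProvider {
    var displayName: String { get }
    func send(_ message: String, tools: [AgentTool]) -> AsyncThrowingStream<String, Error>
}

// Stub conformer; the real transport (Claude.ai web-session auth) is elided.
struct ClaudeChatProvider: ChatProvider {
    let displayName = "Claude"
    func send(_ message: String, tools: [AgentTool]) -> AsyncThrowingStream<String, Error> {
        AsyncThrowingStream { $0.finish() }
    }
}
```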
- Homebrew Cask: `brew install --cask agent-notch`
- Auto-update via Sparkle
- Localisation (it, fr, de, ja, zh)
- Accessibility: VoiceOver labels for every HUD element
- iOS companion (the iPhone notch shows the same view via Bonjour)
- System extensions for non-notch Macs (M1 / Intel / external displays) → menu-bar window equivalent
- Plug-in API: third-party views render as widgets inside the expanded notch
```
┌─────────────────────────────────────────────────┐
│ agent-notch.app (this repo) │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ NotchController │ ←→ │ NotchEventBus │ │
│ └────────┬─────────┘ └────────┬─────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ DynamicNotchKit │ │ HTTP / SSE │ │
│ │ panels + │ │ client │ │
│ │ SwiftUI views │ │ (Wave 2: │ │
│ └──────────────────┘ │ subprocess too) │ │
│ └──────────────────┘ │
└─────────────────────────────────────────────────┘
         │                        ▲
         │                        │
         ▼                        │
┌─────────────────┐  HTTP / SSE   │
│ Your backend    │ ──────────────┘
│                 │
│ jarvis-router   │ ← default consumer today
│ topics-app      │ ← future consumer
│ agent-conductor │ ← future, via subprocess (Wave 2)
└─────────────────┘
```
```sh
git clone https://github.com/zorahrel/agent-notch.git
cd agent-notch
swift build
swift test
swift run agent-notch   # foreground run with logs
```

The notch will appear as soon as you hover the top centre of your screen. Quit via the menu-bar host or `pkill agent-notch`.
- No telemetry. No analytics. No network calls except to the backend URL you configure.
- Voice transcription is on-device (Apple's `SFSpeechRecognizer` framework); see the sketch after this list.
- Audio data is only uploaded to your backend if you explicitly trigger a voice command via hover-record.
- No PII in any committed fixture or test.
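For reference, pinning Apple's recogniser to on-device mode looks roughly like this (a minimal sketch, not the app's actual transcription pipeline):

```swift
import Speech

// Ask for speech-recognition permission, then transcribe strictly on-device.
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized else { return }

    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    guard recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechAudioBufferRecognitionRequest()
    // Refuse any server fallback: audio never leaves the machine.
    request.requiresOnDeviceRecognition = true

    recognizer.recognitionTask(with: request) { result, _ in
        if let result {
            print(result.bestTranscription.formattedString)
        }
    }
    // In the real app, AVAudioEngine buffers would be appended to `request`
    // while the user holds the hover-record gesture.
}
```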
See CONTRIBUTING.md. Bug reports and feature requests via GitHub issues.