No Electron. No subscription. No cloud. Just Swift.
AI invisible to screen sharing — Zoom, Teams, Meet, OBS.
Every other Cluely alternative is built on Electron or Tauri — that means a bundled Chromium browser, 200–500MB of RAM just to open, and slow startup times.
Ghostbar is written in native Swift using AppKit. It starts instantly, uses minimal memory, and feels like a real macOS app — because it is one.
| | Ghostbar | Pluely | Natively | Vysper |
|---|---|---|---|---|
| Tech | ✅ Native Swift | Tauri/Rust | Electron | Electron |
| macOS only | ✅ Purpose-built | ❌ Cross-platform | ❌ Cross-platform | ❌ Cross-platform |
| Ollama / local LLMs | ✅ | ✅ | ❌ | |
| Free | ✅ Always | ✅ | ✅ Free tier | ✅ |
| No subscription | ✅ | ✅ | ✅ | |
| Open source | ✅ MIT | ✅ | ✅ | ✅ |
| No telemetry | ✅ Zero | ❓ | | |
| Voice input | ✅ On-device | ❌ | ✅ | ❌ |
| App size | ✅ ~5MB | ~10MB | ~150MB | ~200MB |
A macOS menu bar AI client that is completely invisible to screen capture.
Zoom, Google Meet, Microsoft Teams, OBS, QuickTime, Cmd+Shift+5 — none of them see it. It only exists on your physical display.
```swift
// The entire secret. One native AppKit API.
window.sharingType = .none
```

No hacks. No injection. A public, documented Apple API that removes the window from the display capture pipeline before any recording tool can touch it.
Technical interviews
LeetCode, HackerRank, take-home assessments. Ask for hints, complexity analysis, edge cases — all while sharing your screen. The interviewer sees your code. They don't see Ghostbar.
Live coding demos
Presenting to a client or team? Use AI to look up syntax, generate boilerplate, or sanity-check logic in real time. Nobody notices.
Work calls & meetings
Prepare answers on the fly. Summarize what was just said. Draft a response before you speak.
System design interviews
Ask for architecture patterns, trade-offs, scalability approaches instantly.
Local & private
Pair with Ollama or LM Studio for fully on-device inference. Nothing leaves your machine.
| Tool | Status |
|---|---|
| Zoom | ✅ Invisible |
| Google Meet (Chrome) | ✅ Invisible |
| Microsoft Teams | ✅ Invisible |
| OBS Studio | ✅ Invisible |
| QuickTime screen recording | ✅ Invisible |
| macOS Cmd+Shift+5 | ✅ Invisible |
| Feature | Description |
|---|---|
| 🫥 Invisible by default | Hidden from every screen capture tool on macOS |
| 🍎 Menu bar only | No Dock icon. No trace. Appears as ⬡ |
| 🤖 Multi-backend | Ollama, OpenAI, Claude, OpenRouter, NVIDIA NIM, LM Studio, llama.cpp |
| 🎙 Voice input | On-device transcription via whisper-cpp |
| 📸 Screenshot analysis | Capture your screen → attach to message. Model sees it, recorder doesn't. |
| 🔒 Zero telemetry | No analytics, no cloud, no tracking |
| ⚡ Instant startup | Native Swift — no Chromium, no Electron overhead |
Option A — DMG (fastest)
Download Ghostbar-v1.0.1.dmg, drag to Applications, open.
Option B — Build from source
```bash
# Requirements: macOS 13+, Xcode Command Line Tools
xcode-select --install
git clone https://github.com/rbc33/Ghostbar.git && cd Ghostbar
bash build.sh
open Ghostbar.app
```

The ⬡ icon appears in your menu bar. Click → Open chat.
macOS exposes a window-level API that controls whether a window participates in the display server's capture pipeline:
```swift
NSWindow.sharingType = .none      // excluded from all capture
NSWindow.sharingType = .readOnly  // default — visible to recorders
```

Setting `.none` tells the macOS compositor to exclude the window from all capture operations before any recording application, screenshot tool, or API (`CGWindowListCreateImage`, `SCStreamConfiguration`, etc.) can observe it.
The window renders normally on your physical display. It simply does not exist to capture pipelines.
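As a minimal sketch of how any app can use this API (illustrative only — the panel configuration below is an assumption, not Ghostbar's actual source):

```swift
import AppKit

// Sketch: a floating panel that opts out of the capture pipeline.
// All sizing and style choices here are illustrative.
let panel = NSPanel(
    contentRect: NSRect(x: 0, y: 0, width: 420, height: 560),
    styleMask: [.titled, .nonactivatingPanel],
    backing: .buffered,
    defer: false
)
panel.level = .floating     // stay above normal windows
panel.sharingType = .none   // excluded from screenshots and recordings
panel.makeKeyAndOrderFront(nil)
```

The property can be flipped at runtime, so a window could even toggle its own capture visibility.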
Open ⬡ → Settings… and pick your backend:
| Backend | URL | Notes |
|---|---|---|
| Ollama | `http://localhost:11434` | Local, no key |
| OpenAI | `https://api.openai.com/v1` | API key required |
| Anthropic | `https://api.anthropic.com/v1` | API key required |
| OpenRouter | `https://openrouter.ai/api/v1` | API key required |
| NVIDIA NIM | `https://integrate.api.nvidia.com/v1` | Free tier available |
| LM Studio | `http://localhost:1234/v1` | Local, no key |
| llama.cpp | `http://localhost:8080/v1` | Local, no key |
Any OpenAI-compatible server works — select OpenAI and set the URL.
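All of these backends accept the same OpenAI-style chat-completions request body. A minimal sketch of that shape (the `llama3` model name and the Ollama URL in the comment are examples — substitute your own):

```bash
# The request shape every listed backend accepts (OpenAI
# chat-completions format). Model name is an example.
BODY='{"model":"llama3","messages":[{"role":"user","content":"Say hello"}]}'
echo "$BODY"

# To actually send it (requires the backend to be running):
#   curl -s http://localhost:11434/v1/chat/completions \
#        -H "Content-Type: application/json" -d "$BODY"
```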
- Go to build.nvidia.com → sign in → any model → Get API Key
- Key starts with `nvapi-`
- In Ghostbar: select NVIDIA NIM, paste key
Hundreds of models (Llama, Mistral, Gemma, Qwen, DeepSeek) — no credit card required.
```bash
brew install whisper-cpp
curl -L -o /opt/homebrew/share/whisper-cpp/ggml-base.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin
```

Press 🎙 or ⌘⌥ → speak → press again → transcribed and sent. Fully on-device.
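To sanity-check the downloaded model outside Ghostbar, whisper-cpp ships a CLI (installed as `whisper-cli` in recent Homebrew builds; the binary name has changed across versions, so adjust to your install — the audio path below is a placeholder):

```bash
# Sketch — verify the model file works, independent of Ghostbar.
# Binary name and audio path are assumptions for your setup.
whisper-cli -m /opt/homebrew/share/whisper-cpp/ggml-base.bin \
  -f path/to/audio.wav
```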
| Model | Size | Quality |
|---|---|---|
| `tiny` | 75 MB | Fast |
| `base` | 150 MB | Balanced ✓ |
| `small` | 470 MB | Better |
| `medium` | 1.5 GB | Best |
⌘⇧ → captures screen → attaches to next message.
The model sees your screen. The screen recorder doesn't see Ghostbar.
Works with any vision model: llava, gpt-4o, gemma3, claude-opus-4-7…
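For a sense of the mechanism (an illustration only, not Ghostbar's actual implementation), macOS ships a screenshot CLI whose output could be encoded and attached to a vision-model request:

```bash
# Illustration only — macOS's built-in screenshot tool.
# -x suppresses the shutter sound. The file path is an example.
screencapture -x /tmp/shot.png
# Encode for embedding in an API request body (macOS base64 syntax):
base64 -i /tmp/shot.png > /tmp/shot.b64
```

Because Ghostbar's own window is excluded from the capture pipeline, it never appears in the screenshot it sends.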
| Action | Shortcut |
|---|---|
| Send message | Enter |
| New line | Shift+Enter |
| Voice input | ⌘⌥ |
| Screenshot | ⌘⇧ |
| Close | Cmd+W |
All shortcuts customizable in Settings.
- No telemetry. No analytics. No crash reporting to any server.
- No cloud. Network calls go only to your configured backend.
- Local backends (Ollama, llama.cpp, LM Studio) run entirely on your machine — nothing leaves it.
- Voice transcription runs on-device via Whisper.
- Screenshots are sent only to your configured backend.
PRs welcome. Open an issue first for large changes.
```bash
git clone https://github.com/rbc33/Ghostbar.git
cd Ghostbar
open Package.swift   # Xcode opens automatically
```

Made with ⬡ — because some things should stay invisible.