An always-on-top AI overlay for your desktop. The UI is React/Vite, the backend is Tauri (Rust), and all AI requests are routed through the backend to a local Ollama server.
Important: Ollama is not bundled. You must install and run it yourself.
Platform support:
- Windows (supported)
- macOS (dev)
- Linux (not yet)
Requirements:
- Node.js + npm
- Rust toolchain (Tauri)
- Ollama installed and running locally
- Install dependencies:

  ```sh
  npm install
  ```

- Run in dev mode:

  ```sh
  npm run tauri dev
  ```

- Build a release bundle:

  ```sh
  npm run tauri build
  ```

Default shortcuts:

| Action | Shortcut |
|---|---|
| Toggle overlay | Ctrl+Space |
| Focus overlay | Ctrl+Shift+Space |
| Stop generation | Ctrl+. |
| Regenerate last response | Ctrl+Shift+R |
| Input history | Up / Down (caret at start/end) |
Shortcuts are editable in Preferences: click a field and press the desired key combination to set it. Backspace clears the field (an empty field disables that shortcut). On macOS, some combos are reserved by the system (Cmd+Space for Spotlight, Ctrl+Space for input sources).
Slash commands:

| Command | Description |
|---|---|
| `/clear` | Clear chat history |
| `/corner <bottom-left\|bottom-middle\|bottom-right>` | Move the overlay |
This app does not call Ollama from the frontend. All requests are proxied through the Tauri backend. On startup, the backend checks `http://localhost:11434/api/tags`.
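For orientation, a frontend call looks roughly like the sketch below. The command name `chat_completion` is a hypothetical placeholder; the real commands are registered in `src-tauri/src/main.rs`.

```ts
// Sketch only: the frontend never calls Ollama directly; it invokes a
// Tauri command, and the Rust side forwards the request to Ollama.
import { invoke } from "@tauri-apps/api/core"; // "@tauri-apps/api/tauri" on Tauri v1

// "chat_completion" is a made-up command name for illustration.
async function askModel(prompt: string): Promise<string> {
  return invoke<string>("chat_completion", { prompt });
}
```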
Models required by default:
- `gpt-oss:20b-cloud` (chat)
- `devstral-small-2:24b-cloud` (vision)
If these are missing, pull them with:
```sh
ollama pull gpt-oss:20b-cloud
ollama pull devstral-small-2:24b-cloud
```

If Ollama is not reachable, you will see a modal with:
- Download Ollama
- Retry
Helper script (Windows) to start Ollama with permissive origins:
```powershell
scripts\start-ollama.ps1
```

Notes:
- The backend logs real connection errors (refused, timeout, non-200).
- If you use a different host/port, update `src-tauri/src/ollama.rs`.
For release builds, enter your API key in Preferences. The key is stored in the system keychain and never written to `config.json`.
For local dev, you can still set `OLLAMA_WEB_SEARCH_API_KEY` in `.env.local`.
The app reads and writes a JSON config file at the Tauri app config dir:
| OS | Path |
|---|---|
| Windows | %APPDATA%\ai-copilot\config.json |
| macOS | ~/Library/Application Support/com.monolabs.ai-copilot/config.json |
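If you need that path programmatically, Tauri's path API resolves the same directory; a minimal frontend-side sketch (the exact folder name follows the app/bundle identifier):

```ts
// Sketch: resolve the config file location shown in the table above.
import { appConfigDir, join } from "@tauri-apps/api/path";

export async function configFilePath(): Promise<string> {
  // appConfigDir() resolves under %APPDATA% on Windows and
  // ~/Library/Application Support on macOS.
  return join(await appConfigDir(), "config.json");
}
```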
Example:
```json
{
"corner": "bottom-middle",
"keybinds": {
"toggle_overlay": "Ctrl+Space",
"focus_overlay": "Ctrl+Shift+Space",
"stop_generation": "Ctrl+.",
"regenerate_last_response": "Ctrl+Shift+R"
},
"appearance": {
"panel_opacity": 0.85,
"show_thinking": true
},
"tools": {
"capture_screen_text_enabled": true,
"web_search_enabled": false
}
}
```

Notes:
- `corner` accepts `bottom-left`, `bottom-middle`, `bottom-right`.
- `keybinds` lets you customize global shortcuts. Restart the app after editing.
Use the `show_thinking` toggle (added in this repo) as a reference:
- Add the field in `src-tauri/src/config.rs` with `#[serde(default = "...")]`, plus a default helper and a `Default` impl update.
- Mirror the field in `src/shared/config.ts` (type + `DEFAULT_OVERLAY_CONFIG`); see the sketch after this list.
- Wire the UI in `src/preferences/Preferences.tsx` if it should be editable.
- Consume the setting where it matters (example: `src/overlay/Overlay.tsx` passes `showThinking` down to `src/overlay/components/MessageBubble.tsx`).
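For the frontend half of that checklist, the mirror in `src/shared/config.ts` would look roughly like this; the shapes below are inferred from the example config above, not copied from the repo:

```ts
// Sketch of the src/shared/config.ts side after adding the new field.
export interface AppearanceConfig {
  panel_opacity: number;
  show_thinking: boolean; // the new field, mirroring config.rs
}

export interface OverlayConfig {
  corner: "bottom-left" | "bottom-middle" | "bottom-right";
  appearance: AppearanceConfig;
  // ...keybinds, tools, etc. omitted for brevity
}

// Defaults must agree with the #[serde(default = "...")] helpers on the
// Rust side, or the two halves will disagree about missing fields.
export const DEFAULT_OVERLAY_CONFIG: OverlayConfig = {
  corner: "bottom-middle",
  appearance: {
    panel_opacity: 0.85,
    show_thinking: true,
  },
};
```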
Tools are registered in one place and are automatically available to the UI and tool routing.
Steps:
- Copy `src/overlay/tools/toolTemplate.ts` to a new file in `src/overlay/tools/`, rename the tool name/schema, and implement your parameters. This only defines the schema the model will call.
- Register it in `src/overlay/tools/registry.ts` with (see the sketch after this list):
  - `name` (tool call name)
  - `tool` (schema)
  - `handler` (exec logic + followup)
  - `displayName` / `activityLabel` (UI labels)
  - `isEnabled` (gate via config flags)
- Implement the actual tool functionality (you own the behavior):
  - If it needs native/system access, add a Tauri command in `src-tauri/src/` and register it in `src-tauri/src/main.rs`.
  - If it is frontend-only, implement it directly in the tool handler.
- Expose a config flag if you want a toggle:
  - `src-tauri/src/config.rs`
  - `src/shared/config.ts`
  - `src/preferences/Preferences.tsx`
  - `src/overlay/Overlay.tsx` (pass the flag into tool options)
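As a sketch of the registration step, a registry entry for a hypothetical `echo_text` tool might look like this. The field names come from the list above, but the surrounding types, schema shape, and flag name are assumptions:

```ts
// Hypothetical "echo_text" tool. Field names follow the steps above; the
// exact types in src/overlay/tools/registry.ts will differ.
export const echoTool = {
  name: "echo_text", // tool call name the model emits
  tool: {
    // Schema the model sees (assumed OpenAI-style function schema).
    type: "function",
    function: {
      name: "echo_text",
      description: "Echo the provided text back to the user.",
      parameters: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
      },
    },
  },
  displayName: "Echo", // UI label
  activityLabel: "Echoing…", // shown while the tool runs
  // Gate via a config flag; `echo_enabled` is a made-up flag name.
  isEnabled: (config: { tools: { echo_enabled?: boolean } }) =>
    config.tools.echo_enabled ?? false,
  // Frontend-only handler; a native tool would call a Tauri command
  // (invoke from "@tauri-apps/api") here instead.
  handler: async (args: { text: string }) => {
    return { followup: `You said: ${args.text}` };
  },
};
```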
Notes:
- The registry drives the tool list sent to the model and the local handlers.
- Tool activity uses your Disclosure UI automatically.
- If you need tools in the Agents SDK path, mirror the registration logic in `src/overlay/hooks/useAgentsSdkChat.ts`.
- Ollama unreachable:
  - Make sure `ollama serve` is running on `http://localhost:11434`.
  - Use the in-app Retry button to re-check availability.
- Model missing:
  - Pull the model you want in Ollama, or change `src/overlay/constants.ts`.
- No AI responses in release builds:
  - Check the app logs; the backend reports real connection errors.
- This app captures text from the active window when you approve it. Treat it like a screen recorder: only use it where you have permission.
- The AI runs locally via Ollama. Large models are slow and memory-heavy.
- Global hotkeys can conflict with other apps. Adjust them in the config.
- You can close the app by right-clicking the tray icon and selecting "Quit".
This project is intentionally minimal and leans on some lazy programming practices that will make your eyes water. Expect rough edges, shortcuts, and few guardrails. Use at your own risk.