dexterAI is a local-first, desktop-native AI workbench for developers. Connect your own API keys, stream responses from multiple providers, run evaluations, and let the agent autonomously work on your codebase — all from one app, with your data staying on your machine.
Connect and chat with all major AI providers in one place.
- 6 Adapters: OpenAI, Anthropic, Google Gemini, GitHub Models, NVIDIA NIM, and Deepgram
- Rich Media: Built-in Speech-to-Text and Text-to-Speech via Deepgram
- Model Catalogue: Browse and compare 195 models across all providers
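The six adapters above presumably share a common streaming interface. A minimal TypeScript sketch of what such an interface could look like (the names `ProviderAdapter`, `ChatMessage`, and `stream` are illustrative assumptions, not dexterAI's actual API):

```typescript
// Hypothetical adapter shape; names are illustrative, not dexterAI's real API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ProviderAdapter {
  id: string;
  // Streams completion tokens for a conversation.
  stream(messages: ChatMessage[]): AsyncIterable<string>;
}

// A stub adapter that echoes the last message word by word, standing in
// for a real OpenAI/Anthropic/... implementation.
const echoAdapter: ProviderAdapter = {
  id: "echo",
  async *stream(messages) {
    const last = messages[messages.length - 1];
    for (const word of last.content.split(" ")) {
      yield word + " ";
    }
  },
};

// Drain an adapter's token stream into a single string.
async function collect(
  adapter: ProviderAdapter,
  messages: ChatMessage[],
): Promise<string> {
  let out = "";
  for await (const token of adapter.stream(messages)) out += token;
  return out.trimEnd();
}
```

Keeping every provider behind one interface is what lets the chat UI swap models without caring which vendor is on the other end.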
Transform chat into a powerful autonomous workbench.
- Tool Use: AI-driven filesystem operations and terminal commands
- Security Gates: Explicit user approval required for every destructive OS-level action
- Long-running Loops: Dynamic context trimming for 128k+ token support
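Dynamic context trimming can be sketched as a pure function: drop the oldest non-system messages until an estimated token count fits the window. The 4-characters-per-token estimate and the budget parameter below are placeholder assumptions, not dexterAI's actual heuristics:

```typescript
interface Msg {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate (~4 chars/token); a real agent loop would use the
// provider's tokenizer instead.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Drop the oldest non-system messages until the history fits `budget` tokens.
function trimContext(history: Msg[], budget: number): Msg[] {
  const system = history.filter((m) => m.role === "system");
  const rest = history.filter((m) => m.role !== "system");
  const total = (msgs: Msg[]) =>
    msgs.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (rest.length > 1 && total(system) + total(rest) > budget) {
    rest.shift(); // oldest conversational turn goes first
  }
  return [...system, ...rest];
}
```

Preserving system messages while evicting old turns is a common way to keep long agent loops within a 128k-token window without losing the instructions that steer the agent.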
Your data, your machine — always.
- Local Persistence: Conversation history and memories stored in local SQLite only
- Native Security: API keys reside strictly in your OS Keychain (never on disk or in the DB)
- No Telemetry: Zero data collection or analytics
- Integrated Terminal: Real-time shell access within the app
- 40fps Streaming: Smooth, buffer-drained token rendering
- Persistent Memory: AI that learns your preferences and project context over time
- Chat Export: Export conversations as Markdown, PDF, or DOCX
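The Markdown export path can be sketched as a simple serializer, one heading per turn. The heading levels and "You"/"Assistant" labels here are illustrative, not necessarily the app's actual output format:

```typescript
interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Serialize a conversation to Markdown: a title heading, then one
// subheading per turn followed by that turn's content.
function exportMarkdown(title: string, turns: Turn[]): string {
  const lines = [`# ${title}`, ""];
  for (const turn of turns) {
    const label = turn.role === "user" ? "You" : "Assistant";
    lines.push(`## ${label}`, "", turn.content, "");
  }
  return lines.join("\n");
}
```

PDF and DOCX export would layer a renderer on top of the same turn structure.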
| Requirement | Version |
|---|---|
| Node.js | >= 20 LTS |
| pnpm | >= 9 |
| OS | macOS 12+, Windows 10+, Ubuntu 20.04+ |
Download the latest pre-built binary for your platform from the Releases page.
- macOS: `.dmg`
- Windows: `.exe` (NSIS installer)
- Linux: `.AppImage` or `.deb`
```bash
# 1. Clone the repository
git clone https://github.com/leetcoderman/dexterAI.git
cd dexterAI

# 2. Install all workspace dependencies (from repo root)
pnpm install --frozen-lockfile

# 3. Run in development mode (Electron + Hot Module Reload)
cd apps/desktop
npm run dev
```

To build a distributable for your platform:
```bash
# From apps/desktop/
npm run build:mac    # macOS (.dmg)
npm run build:win    # Windows (.exe)
npm run build:linux  # Linux (.AppImage)
```

- Launch the app and complete the 4-step onboarding
- Connect your provider API keys in the Providers screen
  - Supported: OpenAI, Anthropic, Google, GitHub Models, NVIDIA NIM, Deepgram
  - Keys are stored in your OS Keychain — never in the app's database
- Chat — select a model and start a conversation in the Chat screen
- Agent — open a project folder and use the Code Workspace to let the AI autonomously work on your code
- Evaluate — use the Test Workspace to benchmark models side-by-side
| Layer | Technology |
|---|---|
| Desktop shell | Electron 39 + electron-vite |
| Frontend | React 19 + TypeScript 5.9 + Tailwind CSS |
| State | Zustand (persisted) |
| Database | SQLite (better-sqlite3) — WAL mode + FTS5 |
| Credentials | OS Keychain via keytar |
| Terminal | node-pty + xterm.js |
| Editor | Monaco Editor |
| Build | pnpm workspaces monorepo |
```
apps/desktop/         # Main Electron application
  src/main/           # Node.js main process (adapters, IPC, DB)
  src/renderer/       # React frontend
  src/preload/        # contextBridge API surface
packages/
  registry-types/     # Shared TypeScript interfaces
  shared-utils/       # Utility functions
  i18n/               # i18next setup (English)
  antigravity/        # Internal UI component library
registry/
  registry.json       # 195-model static catalogue (CDN fallback)
```
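An entry in the static catalogue presumably records enough metadata to render and filter the model list. A hypothetical sketch of such an entry and a lookup helper (the field names and example values are guesses, not the real registry.json schema):

```typescript
// Hypothetical registry entry; the real registry.json schema may differ.
interface RegistryModel {
  id: string;           // provider-scoped model id
  provider: string;     // one of the six adapter ids
  contextWindow: number;
  supportsTools: boolean;
}

// Illustrative example entry only.
const example: RegistryModel = {
  id: "gpt-4o-mini",
  provider: "openai",
  contextWindow: 128_000,
  supportsTools: true,
};

// Simple lookup over the static catalogue.
function findModel(
  catalogue: RegistryModel[],
  id: string,
): RegistryModel | undefined {
  return catalogue.find((m) => m.id === id);
}
```

Shipping the catalogue as a static JSON file with a CDN fallback keeps the model browser usable offline.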
Contributions are welcome! Please read CONTRIBUTING.md before submitting a pull request.
For responsible disclosure of security vulnerabilities, see SECURITY.md. Do not open public Issues for security bugs.
See CHANGELOG.md for a full version history.
ISC © 2026 dexterAI