A Cursor-style local LLM-powered code editor built with Electron, React, and TypeScript. Run AI code assistance entirely on your own machine — no cloud, no API keys, no data leaving your device.
- Monaco Editor — the same editor that powers VS Code, with syntax highlighting for 50+ languages and multi-tab support
- Local LLM Chat — stream responses from any GGUF model using node-llama-cpp; no internet required
- AI File Creation — the AI can create and write files directly into your workspace from chat responses
- Model Manager — browse and download curated models, or search HuggingFace live for any GGUF file
- File Explorer — full workspace file tree with create, rename, delete support
- Source Control — built-in Git integration (stage, commit, push, pull, branch switching)
- GitHub Panel — view and manage repositories, issues, and pull requests via the GitHub API
- Integrated Terminal — real PTY terminal powered by node-pty and xterm.js
- Settings — configure model download folder, GPU layers, context size, threads, editor font/tab size, Git author, and GitHub PAT
- Command Palette — keyboard-driven command search (Ctrl+Shift+P)
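The Settings entry above exposes numeric llama.cpp options (GPU layers, context size, threads). As a hedged sketch, here is how such values might be sanitized before a model load — the ranges and defaults below are illustrative assumptions, not the app's actual values:

```typescript
// Hypothetical sanitizer for numeric LLM settings. The clamp ranges and
// defaults (0 GPU layers, 4096 context, 4 threads) are assumptions for
// illustration, not values taken from the app.
export interface LlmSettings {
  gpuLayers: number;
  contextSize: number;
  threads: number;
}

export function sanitizeLlmSettings(raw: Partial<LlmSettings>): LlmSettings {
  const clamp = (v: number | undefined, min: number, max: number, dflt: number) =>
    Number.isFinite(v as number)
      ? Math.min(max, Math.max(min, Math.trunc(v as number)))
      : dflt;
  return {
    gpuLayers: clamp(raw.gpuLayers, 0, 999, 0), // 0 = CPU-only inference
    contextSize: clamp(raw.contextSize, 512, 131072, 4096),
    threads: clamp(raw.threads, 1, 64, 4),
  };
}
```

Clamping up front keeps a bad settings value (for example a negative thread count) from reaching the native llama.cpp layer.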
- Node.js 18+
- Windows 10/11 x64 (primary target; macOS/Linux untested)
- A GGUF model file (download one from the Model Manager inside the app)
```shell
git clone https://github.com/singhhe/LocalLLMIDE.git
cd LocalLLMIDE
npm install
npm run dev
```

Note: `npm install` runs `electron-rebuild` automatically to compile native modules (node-pty) for your Electron version. This may take a few minutes on first run.
- Open the AI panel from the activity bar (right side)
- Click Load Model and select a `.gguf` file from your local drive, or use the Model Manager to download one
- Type your request — the AI streams its response in real time
- Code blocks with a `language:filepath` header are automatically written to your workspace
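The automatic file creation above depends on parsing that header from each code fence. As a hedged sketch — the exact header grammar is an assumption based on the description here, not the app's actual parser — extracting the target path could look like:

```typescript
// Hypothetical parser for a fence info string like "ts:src/utils/math.ts".
// The "language:filepath" shape is assumed from the README; the real
// implementation may differ.
export function parseFenceHeader(
  info: string
): { language: string; filepath: string } | null {
  // Language tag before the first colon, file path after it.
  const match = /^([A-Za-z0-9+#-]+):(.+)$/.exec(info.trim());
  if (!match) return null;
  return { language: match[1], filepath: match[2] };
}
```

A fence without a `:` (a plain `ts` or `python` tag) yields `null`, so ordinary code blocks in a chat reply are left alone.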
Click the Models icon in the activity bar to open the Model Manager:
- Curated tab — one-click download of popular models (Mistral, Llama 3, Phi-3, Gemma, DeepSeek, CodeGemma, and more)
- Search HuggingFace tab — live search across all GGUF models on HuggingFace, expand any result to pick a specific quantization file
Models are saved to the folder configured in Settings (defaults to your user data directory).
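That default can be sketched as a small resolver. Assume `userDataDir` stands in for what Electron's `app.getPath("userData")` returns at runtime; the `models` subfolder name is an assumption, not necessarily the app's actual layout:

```typescript
import path from "node:path";

// Hypothetical resolver for the model download folder: use the directory
// configured in Settings when present, otherwise fall back to a "models"
// subfolder of the user data directory (subfolder name is assumed).
export function resolveModelDir(
  userDataDir: string,
  configuredDir?: string
): string {
  return configuredDir && configuredDir.trim() !== ""
    ? configuredDir
    : path.join(userDataDir, "models");
}
```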
```shell
npm run build:win
```

Output: `release/LocaLLMIDE-<version>-win-x64.zip` — extract and run `LocaLLMIDE.exe`.
Enable Developer Mode in Windows Settings → System → For developers, then:
```shell
npm run build:win
```

Output: `release/LocaLLMIDE Setup <version>.exe`
Developer Mode is required because the electron-builder signing toolchain extracts macOS dylib symlinks, which need the "Create symbolic links" privilege on Windows.
```
src/
  main/             # Electron main process
    ipc/            # IPC handlers (fs, git, llm, settings, terminal)
    services/       # Business logic (FileService, LlmService, GitService, ...)
    index.ts        # Main entry point
    preload.ts      # Context bridge
  renderer/         # React frontend
    src/
      components/   # UI components (Editor, AiPanel, Sidebar, Terminal, ...)
      store/        # Zustand state stores
      styles/       # Global CSS and theme variables
  shared/
    types.ts        # Shared TypeScript types
```
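The `shared/types.ts` layer suggests that IPC channels are typed once and reused on both sides of the context bridge. A minimal sketch of that pattern — channel names and payload shapes below are assumptions, not the app's actual contract:

```typescript
// Hypothetical channel map shared by main and renderer; the real
// shared/types.ts may define different channels and payloads.
type IpcChannels = {
  "fs:readFile": { args: [path: string]; result: string };
  "git:commit": { args: [message: string]; result: void };
};

// A typed invoke: the channel name constrains both arguments and result.
type Invoke = <C extends keyof IpcChannels>(
  channel: C,
  ...args: IpcChannels[C]["args"]
) => Promise<IpcChannels[C]["result"]>;

// In the app this would wrap ipcRenderer.invoke behind the preload bridge;
// here it wraps any async implementation so the sketch runs standalone.
export const makeInvoke = (
  impl: (channel: string, ...args: unknown[]) => Promise<unknown>
): Invoke =>
  ((channel: string, ...args: unknown[]) => impl(channel, ...args)) as Invoke;
```

With this shape, a typo like `invoke("fs:readFiles", ...)` fails at compile time instead of silently hitting a missing handler.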
| Layer | Technology |
|---|---|
| Shell | Electron 35 |
| Frontend | React 18 + TypeScript |
| Build | electron-vite + Vite 6 |
| Editor | Monaco Editor |
| LLM runtime | node-llama-cpp v3 (llama.cpp) |
| Terminal | node-pty + xterm.js |
| Git | simple-git |
| GitHub API | @octokit/rest |
| State | Zustand |
| Packaging | electron-builder |
MIT