An open-source, agent-first desktop workspace — Agent, Editor, Git, Terminal, all in one place.
Own your AI workflow: local-first, BYOK, and fully hackable.
Async IDE is built around a simple idea: the agent should be the center of gravity, not a chat panel bolted onto the side of an editor. Everything — workspace access, tool execution, diff review, terminal operations — revolves around a transparent Think → Plan → Execute → Observe loop that you can see, steer, and interrupt at any time.
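That loop is easy to picture in code. The sketch below is purely illustrative (the names `Model`, `runAgentLoop`, and the tool signature are invented for this example and are not the actual `agentLoop.ts` API):

```typescript
// Minimal, illustrative sketch of a Think -> Plan -> Execute -> Observe loop.
// All names and shapes here are hypothetical; the real agentLoop.ts adds
// streaming, approval gates, aborts, and multi-round provider calls.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Observation = { tool: string; output: string };

interface Model {
  // Returns either the next tool call to run, or a final answer.
  step(history: Observation[]): ToolCall | { done: string };
}

function runAgentLoop(
  model: Model,
  tools: Record<string, (args: Record<string, unknown>) => string>,
  maxSteps = 10,
): string {
  const history: Observation[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const next = model.step(history);           // Think / Plan
    if ("done" in next) return next.done;       // final answer
    const output = tools[next.tool](next.args); // Execute
    history.push({ tool: next.tool, output });  // Observe
  }
  return "max steps reached";
}
```

Because every iteration is an explicit step, the UI can render each tool call as it happens and let you interrupt between steps.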
It is also built from scratch on Electron + React + Monaco — not a fork of VS Code. That means a smaller, fully hackable codebase, faster iteration, and no baggage from an editor you never asked for.
The project is Apache 2.0, BYOK for model access, and local-first by default: your threads, settings, and plans live on your machine, not in someone else's cloud.
Async IDE is an open-source AI-native desktop application designed as your command center for working with coding agents. It starts from the Agent Loop and brings multi-model conversations, autonomous tool execution, and review workflows into a single workspace.
- Agent-first — The agent can directly access your workspace, tools, and terminal through a clear Think → Plan → Execute → Observe loop.
- Transparent process — Streaming tool parameters (JSON rendered as it streams) plus tool trajectory cards (`Read`, `Write`, `Edit`, `Glob`, `Grep`, `Shell`, etc.), so every step is visible.
- Full control — Use your own API keys and keep conversation history and repo state entirely local, with no dependency on cloud services.
- Git-native — Status, diffs, and agent-driven changes stay in sync with your actual repository.
- Four Composer modes — Agent (autonomous execution), Plan (review first, then run), Ask (read-only Q&A), and Debug (systematic troubleshooting), covering various development scenarios.
- IM bot bridge — Wire Telegram, Slack, Discord, and Feishu (Lark) into the same Agent / Team toolchain as the desktop app, with per-integration model, workspace roots, allowlists, optional HTTP proxy, and streaming replies where the platform supports it.
- Lean shell, built from scratch — Electron + React + Monaco, Agent / Editor dual layout, embedded terminal. Not a VS Code fork — smaller surface area, fewer abstractions to fight, and every line is yours to change.
- Streaming tool parameters with trajectory cards for clear execution visibility.
- Plan and Agent dual modes: review the plan first, or let the agent run directly.
- Approval gates for shell commands and file writes.
- Editor context sync so agent edits can focus on the relevant file and line range.
- Support for nested sub-agents, background execution, and timeline-style activity rendering.
- Built-in adapters for Anthropic, OpenAI, and Gemini.
- Support for OpenAI-compatible endpoints like Ollama, vLLM, aggregators, or self-hosted services.
- Streaming thinking blocks on supported models.
- Auto mode to automatically pick the best available model.
- Monaco editor with multi-tab support, syntax highlighting, and diff review flows.
- Git integration: status, diff, staging, commit, and push all available from the UI.
- xterm.js terminal for both user commands and observing agent shell operations.
- Composer with `@file` mentions, rich segments, and persistent threads.
- Quick Open palette (`Ctrl/Cmd+P`) and keyboard-first navigation.
- Built-in i18n support for English and Simplified Chinese.
- Support for local disk skills, workspace config merge, and tool approval controls.
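The approval gates mentioned above can be pictured as a small classifier that runs before any shell command executes. This is a hedged sketch with invented names; the real logic lives in `toolApprovalGate.ts` and is certainly more nuanced:

```typescript
// Hypothetical sketch of a tool approval gate: auto-approve clearly safe,
// read-only commands and ask the user about anything that mutates state.
type Verdict = "auto-approve" | "ask-user";

// Illustrative allowlist of exact read-only commands.
const READ_ONLY = new Set(["ls", "cat", "pwd", "git status", "git diff"]);

function approveShellCommand(command: string): Verdict {
  const normalized = command.trim();
  // Exact match against the allowlist (covers "git status" etc.).
  if (READ_ONLY.has(normalized)) return "auto-approve";
  // First-token match for simple cases like "cat package.json".
  const first = normalized.split(/\s+/)[0];
  if (["ls", "cat", "pwd"].includes(first)) return "auto-approve";
  // Anything else (writes, deletes, network, ...) needs explicit approval.
  return "ask-user";
}
```

The same pattern extends to file writes: reads pass through, mutations pause the loop until you approve.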
Async can act as the host for coding agents on external chat surfaces, not only inside the Electron UI.
- Platforms — Telegram, Slack, Discord, and Feishu (Lark) via dedicated adapters under `main-src/bots/platforms/`.
- Same runtime — Inbound messages run through `botRuntime`: normal threads use `agentLoop`, while Team mode uses the same `teamOrchestrator` path as the desktop Composer, including worker streaming and tool status where applicable.
- Per integration — Enable/disable, display name, default model, default Composer mode (`agent`/`ask`/`plan`/`team`), workspace root(s), optional allowlists for chats and users, and an extra system prompt on top of project rules.
- Connectivity — Optional HTTP proxy URL per platform (shared pattern for token calls and webhooks) when vendor APIs must go through a corporate proxy.
- Feishu — App credentials, optional encryption, streaming interactive cards for long-running replies, and session hygiene when integration settings change.
- Configuration UI — Managed from Settings → Bots (`SettingsBotsPanel.tsx`).
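Taken together, one integration's settings might look roughly like the shape below. The field names are assumptions for illustration only; the concepts (per-integration model, mode, workspace roots, allowlists, proxy, extra prompt) come from the list above:

```typescript
// Hypothetical shape for one entry of bots.integrations.
// Field names are invented; only the concepts mirror the options above.
interface BotIntegration {
  enabled: boolean;
  displayName: string;
  defaultModel: string;
  defaultMode: "agent" | "ask" | "plan" | "team";
  workspaceRoots: string[];
  allowedChats?: string[];    // optional allowlist of chat IDs
  allowedUsers?: string[];    // optional allowlist of user IDs
  proxyUrl?: string;          // optional HTTP proxy for vendor API calls
  extraSystemPrompt?: string; // layered on top of project rules
}

// Example: a Telegram bot restricted to one user and one workspace.
const telegram: BotIntegration = {
  enabled: true,
  displayName: "Team bot",
  defaultModel: "claude-sonnet",
  defaultMode: "agent",
  workspaceRoots: ["/home/dev/projects/async"],
  allowedUsers: ["12345"],
  proxyUrl: "http://proxy.internal:8080",
};
```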
For a deeper module-level walkthrough, see the maintainer-oriented notes under `docs/llm-wiki/`.
```
┌─────────────────────────────────────────────────────────┐
│                    Renderer Process                     │
│  React + Vite  │  Monaco Editor  │  xterm.js Terminal   │
│            Composer / Chat / Plan / Agent UI            │
└──────────────────────────┬──────────────────────────────┘
                           │ contextBridge (IPC)
┌──────────────────────────▼──────────────────────────────┐
│                      Main Process                       │
│   agentLoop.ts  │  toolExecutor.ts  │  LLM Adapters     │
│   gitService    │  threadStore      │  settingsStore    │
│   workspace     │  LSP session      │  PTY terminal     │
└─────────────────────────────────────────────────────────┘
```
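The renderer never calls Node APIs directly; every request crosses the `contextBridge` as `window.asyncShell` (exposed by `electron/preload.cjs` and handled in `ipc/register.ts`). A type-level sketch of that boundary, with method names invented for illustration:

```typescript
// Hypothetical surface of the preload bridge. The real window.asyncShell
// API is defined in electron/preload.cjs; these method names are
// illustrative only.
interface AsyncShellApi {
  sendChat(threadId: string, text: string): Promise<void>;
  listThreads(): Promise<string[]>;
  gitStatus(): Promise<{ staged: string[]; unstaged: string[] }>;
  readFile(path: string): Promise<string>;
}

// Renderer code depends only on the bridge interface, never on require("fs"),
// so the same UI logic works regardless of which process fulfills the call.
async function openReadme(shell: AsyncShellApi): Promise<string> {
  return shell.readFile("README.md");
}
```

Keeping the bridge small and typed is what lets the main process enforce workspace-root and approval checks in one place.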
| Technology | Version | Purpose |
|---|---|---|
| React | ^19.2.4 | UI framework |
| Electron | 41.1.0 | Desktop app shell |
| Vite | ^6.0.3 | Build tool & dev server |
| TypeScript | ^5.9.3 | Type-safe development |
| Monaco Editor | ^0.52.0 | Code editor component |
| xterm.js | ^5.5.0 | Terminal emulator |
| OpenAI SDK | ^4.96.0 | OpenAI API client |
| Anthropic SDK | ^0.39.0 | Claude API client |
| Google Generative AI | ^0.21.0 | Gemini API client |
| MCP SDK | ^1.29.0 | Model Context Protocol |
| node-pty | ^1.1.0 | PTY terminal support |
- Built from scratch on Electron + React + Monaco — not a VS Code fork. The architecture is intentionally lean: two processes (main + renderer), clear IPC boundaries, and no inherited extension ecosystem to maintain.
- **agentLoop.ts** handles multi-round tool calls, partial JSON streaming, tool repair, and aborts.
- Structured assistant messages are persisted locally and expanded to provider-native tool formats when needed.
- Local persistence stores threads, settings, and plans as JSON / Markdown under user data.
- **gitService** provides the Git layer used by the UI for status, diff, staging, commit, and push.
- LSP integration uses the TypeScript Language Server for in-editor intelligence.
```
Async/
├── main-src/                      # Bundled -> electron/main.bundle.cjs (Node / Electron main)
│   ├── index.ts                   # App entry: windows, userData, IPC registration
│   ├── agent/                     # agentLoop.ts, toolExecutor.ts, agentTools.ts, toolApprovalGate.ts
│   ├── llm/                       # OpenAI / Anthropic / Gemini adapters & streaming
│   ├── lsp/                       # TypeScript LSP session
│   ├── mcp/                       # Model Context Protocol integration
│   ├── memdir/                    # Memory directory management
│   ├── bots/                      # IM bot controller, runtime, connectivity, platform adapters
│   ├── ipc/register.ts            # ipcMain handlers (chat, threads, git, fs, agent, ...)
│   ├── shell/                     # Shell command execution
│   ├── threadStore.ts             # Persistent threads + messages (JSON)
│   ├── settingsStore.ts           # settings.json
│   ├── gitService.ts              # Porcelain status, diff previews, commit/push
│   ├── workspace.ts               # Open-folder root & safe path resolution
│   ├── workspaceFileIndex.ts      # File indexing for workspace
│   ├── workspaceSemanticIndex.ts  # Semantic search indexing
│   ├── workspaceSymbolIndex.ts    # Symbol indexing
│   └── workspaceUsageStats.ts     # Workspace usage statistics
├── src/                           # Vite + React renderer
│   ├── App.tsx                    # Shell layout, chat, composer modes, Git / explorer
│   ├── AgentChatPanel.tsx         # Agent chat interface
│   ├── AgentLeftSidebar.tsx       # Agent activity sidebar
│   ├── AgentRightSidebar.tsx      # Agent tools and results
│   ├── ChatComposer.tsx           # Message composer component
│   ├── EditorMainPanel.tsx        # Monaco editor panel
│   ├── SettingsPage.tsx           # Settings UI
│   ├── SettingsBotsPanel.tsx      # IM bot integrations (Telegram / Slack / Discord / Feishu)
│   ├── WorkspaceExplorer.tsx      # File explorer
│   ├── hooks/                     # Custom React hooks (19 files)
│   ├── i18n/                      # Locale messages (en / zh-CN)
│   └── ...                        # Agent UI, Plan review, Monaco, terminal, ...
├── electron/
│   ├── main.bundle.cjs            # esbuild output (do not edit by hand)
│   └── preload.cjs                # contextBridge -> window.asyncShell
├── docs/assets/                   # Logo, screenshots
├── scripts/
│   └── export-app-icon.mjs        # Rasterize SVG -> resources/icons/icon.png
├── esbuild.main.mjs               # Builds main process
├── vite.config.ts                 # Renderer build
└── package.json
```
Default location under Electron's userData directory:
- `async/threads.json`: threads and chat messages.
- `async/settings.json`: model configuration, API keys, layout, agent options, and `bots.integrations` (Telegram / Slack / Discord / Feishu tokens, proxy URLs, allowlists, defaults).
- `.async/plans/`: Markdown plan documents generated in Plan mode.

The renderer may use `localStorage` for lightweight UI state, but the authoritative data source for conversations is `threads.json`.
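As a sketch of how those locations could be resolved in code (the real app would get the base directory from Electron's `app.getPath("userData")`; here it is passed as a parameter so the helper stays self-contained, and the function name is invented):

```typescript
import * as path from "node:path";

// Resolve the storage files described above relative to the userData
// directory that Electron's app.getPath("userData") would supply.
function storagePaths(userData: string) {
  return {
    threads: path.join(userData, "async", "threads.json"),
    settings: path.join(userData, "async", "settings.json"),
  };
}
```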
- Node.js >= 18
- npm >= 9
- Git (recommended)
- Clone the repository:

  ```bash
  git clone https://github.com/ZYKJShadow/Async.git
  cd Async
  ```

  If you prefer Gitee, you can also use:

  ```bash
  git clone https://gitee.com/shadowsocks_z/Async.git
  cd Async
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build and launch the desktop app:

  ```bash
  npm run desktop
  ```

  This will build both the main and renderer processes, then open the app with Electron.
To run in development mode:

```bash
npm run dev
```

To open DevTools during development:

```bash
npm run dev:debug
```

To regenerate the application icons:

```bash
npm run icons
```

This will rasterize `docs/assets/async-logo.svg` into `resources/icons/icon.png` and `public/favicon.png`.
We are grateful to the open-source community and projects like Claude Code that helped demonstrate the power of agent-driven development — Async IDE builds on that momentum with its own take on transparent, local-first AI workflows.
Have questions, ideas, or just want to chat with a community of developers?
- Forum: linux.do — Join the discussion, share your setup, report issues, and stick around.
This project is open-sourced under the Apache License 2.0.