```
    _   _                   ____          _
   / \ | |_ ___  _ __ ___  / ___|___   __| | ___
  / _ \| __/ _ \| '_ ` _ \| |   / _ \ / _` |/ _ \
 / ___ \ || (_) | | | | | | |__| (_) | (_| |  __/
/_/   \_\__\___/|_| |_| |_| \____\___/ \__,_|\___|
```
Open-source terminal AI coding agent written in Rust
English · 简体中文
Install · Quick Start · Features · Architecture · Development · Contributing
This project is 100% AI-generated. Every line of code, the implementation of every architectural decision, and every commit was written by AI. The human developer serves solely as the decision-maker and product manager — defining what to build, not how to build it.
AtomCode is an AI coding agent that lives in your terminal. Give it a task in natural language, and it will read your codebase, edit files, run commands, and verify its work — autonomously.
Think of it as an open-source alternative to Claude Code / Cursor Agent, but running entirely in your terminal and connecting to any OpenAI-compatible API.
- Autonomous multi-step execution — reads files, edits code, runs tests, fixes errors, all in a loop
- Verification loop — automatically verifies edits via syntax checks before declaring success
- Dynamic step budget — scales with the number of edited files, capped per turn to bound cost
- Loop detection — detects and breaks out of repetitive tool-call patterns
- 3-layer JSON repair — recovers malformed tool-call arguments
- Turn-level datalog — structured per-turn logs for replay, debugging, and eval harnesses
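The loop-detection idea can be sketched as a sliding window of hashed tool-call signatures: when the same signature repeats too often within the window, the agent breaks out. This is a minimal illustration with hypothetical names and thresholds, not AtomCode's actual internals:

```rust
use std::collections::VecDeque;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative loop detector: keeps a sliding window of hashed
/// (tool, args) signatures and flags a loop when one repeats.
struct LoopDetector {
    window: VecDeque<u64>,
    capacity: usize,  // how many recent calls to remember
    threshold: usize, // identical calls in window that count as a loop
}

impl LoopDetector {
    fn new(capacity: usize, threshold: usize) -> Self {
        Self { window: VecDeque::new(), capacity, threshold }
    }

    /// Record a tool call; returns true if the pattern looks like a loop.
    fn record(&mut self, tool: &str, args: &str) -> bool {
        let mut h = DefaultHasher::new();
        (tool, args).hash(&mut h);
        let sig = h.finish();
        self.window.push_back(sig);
        if self.window.len() > self.capacity {
            self.window.pop_front();
        }
        self.window.iter().filter(|&&s| s == sig).count() >= self.threshold
    }
}

fn main() {
    let mut det = LoopDetector::new(8, 3);
    assert!(!det.record("read_file", "src/main.rs"));
    assert!(!det.record("read_file", "src/main.rs"));
    // Third identical call within the window trips the detector.
    assert!(det.record("read_file", "src/main.rs"));
}
```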
- **File & shell:** `read_file`, `write_file`, `edit_file`, `search_replace`, `bash`, `grep`, `glob`, `list_directory`, `change_dir`, `web_search`, `web_fetch`
- **Code graph** (language-aware code intelligence): `list_symbols`, `read_symbol`, `find_references`, `trace_callers`, `trace_callees`, `trace_chain`, `file_deps`, `blast_radius`
- **Automation:** `auto_fix` — automatic lint/typecheck fix loop; `use_skill` — invoke a user-defined skill
Connect to any LLM that supports OpenAI's function-calling API:
| Provider | Function Calling | Tested Models |
|---|---|---|
| Claude (Anthropic) | Yes | Claude Sonnet 4.5/4.6, Opus 4.6 |
| OpenAI | Yes | GPT-4o, GPT-4.1 |
| DeepSeek | Yes | DeepSeek V3, DeepSeek R1 |
| Zhipu (GLM) | Yes | GLM-4, GLM-5 |
| Qwen (Alibaba) | Yes | Qwen-Plus, Qwen-Max |
| SiliconFlow | Yes | Various open models |
| Ollama (local) | Partial | Llama 3, Qwen2, etc. |
| Any OpenAI-compatible API | Yes | — |
- Persistent sessions — every conversation is saved; continue the last session with `atomcode --continue` / `-c`, or resume/switch inside the TUI with `/resume`
- AtomGit OAuth login — `/login` (or `atomcode login`) pairs your CLI with your AtomGit account
- SSO login — `/login-with-sso` for GitCode internal users
- Headless mode — `atomcode -p "..."` runs a single prompt non-interactively and streams the reply on stdout (Claude Code `-p` style); approval-required `bash` calls are auto-approved, while other approval-required tools are denied
- Daemon mode — `atomcode-daemon` exposes an HTTP API for session history and SSE streaming chat
- Real-time streaming with markdown rendering and syntax highlighting
- Code blocks with language labels, line numbers, and the `base16-ocean.dark` theme
- Multi-line input with Shift+Enter, auto-growing height, input history
- Text selection with mouse drag, auto-scroll, and clipboard copy
- Slash commands — `/model`, `/provider`, `/resume`, `/diff`, `/undo`, `/cost`, `/clear`, `/compact`, etc. (see table below)
- File attachment — paste file paths to attach content as context
- Bracketed paste — long pasted content is collapsed to a compact indicator
- Skills — user-defined commands loaded from your skill directory, invoked like any slash command
- Destructive command detection — `rm -rf`, `git push --force`, `DROP TABLE`, etc. require explicit approval
- Sensitive file protection — writes to `/etc`, `~/.ssh`, and shell configs require approval
- Per-session permission grants — approve once per tool pattern, or always-allow
- Source file deletion requires approval — `rm` on code files is never auto-approved
- Undo — `/undo` rolls back the last turn's file edits via file-history snapshots
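As a rough sketch, destructive-command detection can be done by matching the command line against a deny-list that forces an approval prompt. The patterns and function below are illustrative only; AtomCode's real rules are richer (argument parsing, path checks, and so on):

```rust
/// Hypothetical deny-list of substrings that mark a shell command
/// as destructive and therefore approval-required.
const DESTRUCTIVE_PATTERNS: &[&str] = &[
    "rm -rf",
    "git push --force",
    "drop table",
    "mkfs",
];

/// Returns true if the command should trigger an approval prompt.
fn requires_approval(cmd: &str) -> bool {
    let lower = cmd.to_lowercase();
    DESTRUCTIVE_PATTERNS.iter().any(|p| lower.contains(p))
}

fn main() {
    assert!(requires_approval("rm -rf build/"));
    // Case-insensitive: SQL keywords are normalized before matching.
    assert!(requires_approval("psql -c 'DROP TABLE users'"));
    assert!(!requires_approval("cargo test"));
}
```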
```sh
git clone https://atomgit.com/atomgit_atomcode/atomcode.git
cd atomcode
cargo install --path crates/atomcode-cli --locked
```

The binary is built in `target/release/` and installed to `~/.cargo/bin/atomcode` on macOS / Linux, or `$env:USERPROFILE\.cargo\bin\atomcode.exe` on Windows. Make sure `~/.cargo/bin` (or `%USERPROFILE%\.cargo\bin` on Windows) is in your `PATH`.

To compile without installing, run:

```sh
cargo build --release
```

The binary will be generated at `target/release/atomcode`.
- Rust 1.75+ (for building)
- An API key from any supported provider (or an AtomGit account for `/login`)
```sh
atomcode
```

On first run, a setup wizard will guide you through configuring your LLM provider:
```
Welcome to AtomCode! Let's set up your first provider.

Select provider:
  [1] Claude (Anthropic)
  [2] OpenAI
  [3] OpenAI Compatible (DeepSeek, Qwen, Zhipu, Moonshot...)
  [4] Ollama (local)
```
Config is stored at `~/.atomcode/config.toml`. A minimal single-provider setup looks like this:

```toml
default_provider = "deepseek"

[providers.deepseek]
type = "openai"
api_key = "sk-..."
model = "deepseek-chat"
base_url = "https://api.deepseek.com/v1"
context_window = 64000
```

You can declare multiple providers and switch between them with `/model` or `/provider`. A complete reference covering Claude / OpenAI / OpenAI-compatible endpoints (DeepSeek, GLM, SiliconFlow, OpenRouter...) / Ollama, plus the `[datalog]` section, lives at `docs/config.example.toml` — copy and edit the bits you need.
After editing `config.toml` by hand, run `/reload` inside atomcode to pick up the changes without restarting.
```sh
# Open in your project directory
cd your-project
atomcode

# Or specify directory
atomcode -C /path/to/project

# Or specify model
atomcode --model gpt-4o

# Headless (single prompt, reply on stdout)
atomcode -p "Explain the agent loop in this repo"

# Read prompt from file
atomcode --prompt-file task.md
```

In headless mode, approval-required `bash` calls are auto-approved and logged to stderr; other approval-required tools are denied.
Then just type what you want:
> Fix the login bug where users get redirected to 404 after OAuth callback
> Add a dark mode toggle to the settings page
> Refactor the database module to use connection pooling
> Write tests for the payment processing module
| Key | Action |
|---|---|
| `Enter` | Send message |
| `Shift+Enter` | New line |
| `Esc` | Clear input / Cancel stream |
| `Up`/`Down` | Browse input history |
| `Tab` | Accept suggestion |
| `Ctrl+U` | Clear line |
| `Ctrl+W` | Delete word |
| `Ctrl+K` | Delete to end of line |
| Key | Action |
|---|---|
| `Ctrl+Up`/`Down` | Scroll chat (3 lines) |
| `PageUp`/`PageDown` | Scroll chat (page) |
| `Ctrl+L` | Clear conversation |
| `Ctrl+Shift+C` | Copy selection |
| `Ctrl+C` | Cancel operation (double-tap to exit) |
| Command | Action |
|---|---|
| `/resume` | Resume or switch session |
| `/session` | Create a new session |
| `/provider` | Manage providers |
| `/model` | Switch model / provider |
| `/login` | Login with AtomGit OAuth |
| `/cd` | Change working directory |
| `/undo` | Undo last turn's edits |
| `/diff` | Show git diff of current changes |
| `/cost` | Show token usage for this session |
| `/copy` | Copy last AI response |
| `/clear` | Clear conversation |
| `/issue` | Create issue on AtomGit |
| `/config` | Edit config file |
| `/status` | Show login status and model info |
| `/logout` | Logout from AtomGit |
| `/help` | Show commands & shortcuts |
| `/quit` | Exit (or Ctrl+C ×2) |
AtomCode is a Rust workspace with four crates:
```
atomcode/
  crates/
    atomcode-core/      # Headless library — no TUI dependency
      agent/            # AgentLoop: autonomous tool-use loop
      turn/             # TurnRunner, datalog, permission decider
      config/           # Config loading, provider configs
      conversation/     # Message types, windowed context
      provider/         # LlmProvider trait + OpenAI/Claude/Ollama
      tool/             # Tool trait + built-in tool implementations
      session/          # Persistent sessions
      skill.rs          # User-defined skills
    atomcode-tui/       # Terminal UI — ratatui + crossterm
      app.rs            # App state machine
      ui/               # Render: chat, input, status bar, markdown
    atomcode-cli/       # Binary entry point (TUI + headless -p mode)
      main.rs           # CLI args, first-run wizard, launch
      auth/             # AtomGit OAuth client
    atomcode-daemon/    # HTTP/SSE API server over atomcode-core
```
- **Tech-stack agnostic** — never hardcodes language-specific logic. Detects project type dynamically from descriptor files (`package.json`, `Cargo.toml`, `pyproject.toml`, `pom.xml`, etc.).
- **Decoupled agent** — `AgentLoop` runs as an independent async task, communicating with the TUI via channels (`AgentCommand`/`AgentEvent`). The core library has zero TUI dependencies, which is also what makes the daemon possible.
- **Tool safety** — all destructive operations require explicit user approval. Tool failures become LLM observations, never panics.
- **Context-aware** — token-budget-aware conversation windowing, project file-tree injection, and per-turn system reminders keep the model focused without exceeding context limits.
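The command/event decoupling can be sketched with channels. This minimal version uses std threads and `mpsc` for brevity, whereas the real `AgentLoop` is an async task; the enum variants and function names here are illustrative, not AtomCode's actual API:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical command/event types mirroring the AgentCommand/AgentEvent
// split: the UI sends commands in, the agent streams events out.
enum AgentCommand {
    UserMessage(String),
    Cancel,
}

enum AgentEvent {
    AssistantText(String),
    Done,
}

/// Spawn the agent on its own thread; it owns no UI state and only
/// talks to the outside world through the two channels.
fn spawn_agent(
    cmd_rx: mpsc::Receiver<AgentCommand>,
    evt_tx: mpsc::Sender<AgentEvent>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while let Ok(cmd) = cmd_rx.recv() {
            match cmd {
                AgentCommand::UserMessage(text) => {
                    // A real loop would call the LLM and tools here.
                    evt_tx.send(AgentEvent::AssistantText(format!("echo: {text}"))).unwrap();
                    evt_tx.send(AgentEvent::Done).unwrap();
                }
                AgentCommand::Cancel => break,
            }
        }
    })
}

fn main() {
    let (cmd_tx, cmd_rx) = mpsc::channel();
    let (evt_tx, evt_rx) = mpsc::channel();
    let handle = spawn_agent(cmd_rx, evt_tx);

    cmd_tx.send(AgentCommand::UserMessage("hi".into())).unwrap();
    match evt_rx.recv().unwrap() {
        AgentEvent::AssistantText(t) => assert_eq!(t, "echo: hi"),
        _ => panic!("expected text event"),
    }

    cmd_tx.send(AgentCommand::Cancel).unwrap();
    handle.join().unwrap();
}
```

Because the agent side never touches the terminal, the same loop can sit behind a TUI or an HTTP daemon unchanged.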
Create a `.atomcode.md` file in your project root to give AtomCode persistent context:

```markdown
# Project Instructions

This is a Vue 3 + TypeScript project using Pinia for state management.

- Always use Composition API with `<script setup>`
- Use TailwindCSS for styling, no inline styles
- Run `npm run lint` after editing .vue/.ts files
```

AtomCode reads this file automatically and includes it in the system prompt.
- Rust 1.75+ — install via rustup
- Git
- A supported LLM provider API key (for runtime testing)
```sh
git clone https://atomgit.com/atomgit_atomcode/atomcode.git
cd atomcode

# Debug build (fast compilation, slower runtime)
cargo build

# Release build (slower compilation, optimized binary)
cargo build --release
```

```sh
# Run the TUI directly (debug mode)
cargo run -p atomcode-cli

# With arguments
cargo run -p atomcode-cli -- -C /path/to/project
cargo run -p atomcode-cli -- --model gpt-4o

# Headless mode
cargo run -p atomcode-cli -- -p "summarize this repo"

# Daemon (HTTP API)
cargo run -p atomcode-daemon
```

```sh
# Run all tests
cargo test

# Run tests for a specific crate
cargo test -p atomcode-core
cargo test -p atomcode-tui

# Run a specific test
cargo test -p atomcode-core test_name
```

```sh
# Check compilation without building
cargo check

# Format code
cargo fmt

# Run linter
cargo clippy

# Build and install to ~/.cargo/bin
cargo install --path crates/atomcode-cli
```

Contributions are welcome! AtomCode is in active development.
- Fork the repository on AtomGit
- Clone your fork locally:
  ```sh
  git clone https://atomgit.com/<your-username>/atomcode.git
  cd atomcode
  ```
- Create a branch for your change:
  ```sh
  git checkout -b feat/your-feature
  # or
  git checkout -b fix/your-bugfix
  ```
- Make your changes, and ensure the project builds and tests pass:
  ```sh
  cargo build && cargo test && cargo clippy
  ```
- Commit with a clear message:
  ```sh
  git commit -m "feat: add xxx support"
  ```
- Push and open a Pull Request against `main`
| Prefix | Purpose |
|---|---|
| `feat/` | New feature |
| `fix/` | Bug fix |
| `refactor/` | Code refactoring (no behavior change) |
| `docs/` | Documentation only |
| `chore/` | Build, CI, tooling changes |
- Follow the project's core principles — especially tech-stack neutrality (no language/framework-specific logic in the core engine; detect via probes like `package.json`/`Cargo.toml`/`pom.xml` and route through adapters)
- All tool failures must be graceful — return the error as an observation to the LLM, never panic
- Destructive operations must require user approval
- Keep the system prompt compact (~1.5K tokens)
- Run `cargo fmt` and `cargo clippy` before submitting
- **Add a new tool** — implement the `Tool` trait in `crates/atomcode-core/src/tool/`
- **Add a new provider** — implement `LlmProvider` in `crates/atomcode-core/src/provider/`
- **Improve the UI** — rendering lives in `crates/atomcode-tui/src/ui/`
- **Fix bugs** — check Issues for open bugs
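As a sketch, adding a new tool might look like the following. The trait signature here is hypothetical (the real `Tool` trait in `crates/atomcode-core/src/tool/` differs and is async); what it illustrates is the error-as-observation convention from the guidelines above — failures are returned as strings for the LLM, never panics:

```rust
/// Hypothetical, simplified stand-in for AtomCode's Tool trait.
trait Tool {
    fn name(&self) -> &'static str;
    /// Errors become observations the LLM can react to — never panics.
    fn execute(&self, args: &str) -> Result<String, String>;
}

/// Toy example tool: counts whitespace-separated words in its input.
struct WordCount;

impl Tool for WordCount {
    fn name(&self) -> &'static str {
        "word_count"
    }

    fn execute(&self, args: &str) -> Result<String, String> {
        if args.trim().is_empty() {
            // Graceful failure: a message, not a panic.
            return Err("word_count: empty input".to_string());
        }
        Ok(args.split_whitespace().count().to_string())
    }
}

fn main() {
    let tool = WordCount;
    assert_eq!(tool.execute("fix the login bug"), Ok("4".to_string()));
    // An error is data for the agent loop, not a crash.
    assert!(tool.execute("   ").is_err());
}
```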
MIT License. See LICENSE for details.
Built with Rust, ratatui, and a lot of late nights.