
      _   _                  ____          _
     / \ | |_ ___  _ __ ___ / ___|___   __| | ___
    / _ \| __/ _ \| '_ ` _ \ |   / _ \ / _` |/ _ \
   / ___ \ || (_) | | | | | | |__| (_) | (_| |  __/
  /_/   \_\__\___/|_| |_| |_|\____\___/ \__,_|\___|

Open-source terminal AI coding agent written in Rust

English · 简体中文

Install · Quick Start · Features · Architecture · Development · Contributing



This project is 100% AI-generated. Every line of code, the implementation of every architectural decision, and every commit was written by AI. The human developer serves solely as the decision-maker and product manager — defining what to build, not how to build it.


AtomCode is an AI coding agent that lives in your terminal. Give it a task in natural language, and it will read your codebase, edit files, run commands, and verify its work — autonomously.

Think of it as an open-source alternative to Claude Code / Cursor Agent, but running entirely in your terminal and connecting to any OpenAI-compatible API.

Features

Agent Loop

  • Autonomous multi-step execution — reads files, edits code, runs tests, fixes errors, all in a loop (see the sketch after this list)
  • Verification loop — automatically verifies edits via syntax checks before declaring success
  • Dynamic step budget — scales with the number of edited files, capped per turn to bound cost
  • Loop detection — detects and breaks out of repetitive tool-call patterns
  • 3-layer JSON repair — recovers malformed tool-call arguments
  • Turn-level datalog — structured per-turn logs for replay, debugging, and eval harnesses
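
As a rough mental model, the loop looks something like the sketch below. This is illustrative only: the names (`Step`, `next_step`, `run_tool`) are hypothetical stand-ins, not AtomCode's actual API, which lives in crates/atomcode-core/src/agent/.

```rust
enum Step {
    Tool(String), // the model requested a tool call (serialized as a string here)
    Done,         // the model believes the task is complete and verified
}

fn run_turn(step_budget: usize) {
    let mut transcript: Vec<String> = Vec::new();
    let mut last_call: Option<String> = None;
    let mut repeats = 0;

    for _ in 0..step_budget {                 // dynamic budget bounds per-turn cost
        match next_step(&transcript) {
            Step::Tool(call) => {
                // Loop detection: break out of repetitive tool-call patterns.
                if last_call.as_deref() == Some(call.as_str()) {
                    repeats += 1;
                    if repeats >= 3 {
                        break;
                    }
                } else {
                    repeats = 0;
                }
                // Tool failures become observations for the model, never panics.
                let observation =
                    run_tool(&call).unwrap_or_else(|e| format!("error: {e}"));
                transcript.push(call.clone());
                transcript.push(observation);
                last_call = Some(call);
            }
            Step::Done => break, // success is declared only after verification
        }
    }
}

// Stubs so the sketch compiles; the real versions call the LLM and tool registry.
fn next_step(_transcript: &[String]) -> Step {
    Step::Done
}
fn run_tool(call: &str) -> Result<String, String> {
    Ok(format!("ran {call}"))
}

fn main() {
    run_turn(12);
}
```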

Built-in Tools

File & shell:

  • read_file, write_file, edit_file, search_replace
  • bash, grep, glob, list_directory, change_dir
  • web_search, web_fetch

Code graph (language-aware code intelligence):

  • list_symbols, read_symbol, find_references
  • trace_callers, trace_callees, trace_chain
  • file_deps, blast_radius

Automation:

  • auto_fix — automatic lint/typecheck fix loop
  • use_skill — invoke a user-defined skill

Multi-Provider Support

Connect to any LLM that supports OpenAI's function-calling API:

| Provider | Function Calling | Tested Models |
|---|---|---|
| Claude (Anthropic) | Yes | Claude Sonnet 4.5/4.6, Opus 4.6 |
| OpenAI | Yes | GPT-4o, GPT-4.1 |
| DeepSeek | Yes | DeepSeek V3, DeepSeek R1 |
| Zhipu (GLM) | Yes | GLM-4, GLM-5 |
| Qwen (Alibaba) | Yes | Qwen-Plus, Qwen-Max |
| SiliconFlow | Yes | Various open models |
| Ollama (local) | Partial | Llama 3, Qwen2, etc. |
| Any OpenAI-compatible API | Yes | |

Sessions & Login

  • Persistent sessions — every conversation is saved; continue the last session with atomcode --continue / -c, or resume/switch inside the TUI with /resume
  • AtomGit OAuth login — /login (or atomcode login) pairs your CLI with your AtomGit account
  • SSO login — /login-with-sso for GitCode internal users
  • Headless mode — atomcode -p "..." runs a single prompt non-interactively and streams the reply on stdout (Claude Code -p style); approval-required bash calls are auto-approved, while other approval-required tools are denied
  • Daemon mode — atomcode-daemon exposes an HTTP API for session history and SSE streaming chat

Terminal UI

  • Real-time streaming with markdown rendering and syntax highlighting
  • Code blocks with language labels, line numbers, and base16-ocean.dark theme
  • Multi-line input with Shift+Enter, auto-growing height, input history
  • Text selection with mouse drag, auto-scroll, and clipboard copy
  • Slash commands — /model, /provider, /resume, /diff, /undo, /cost, /clear, /compact, etc. (see table below)
  • File attachment — paste file paths to attach content as context
  • Bracketed paste — long paste content collapsed to a compact indicator
  • Skills — user-defined commands loaded from your skill directory, invoked like any slash command

Safety

  • Destructive command detection — rm -rf, git push --force, DROP TABLE, etc. require explicit approval (a naive sketch of this check follows the list)
  • Sensitive file protection — writes to /etc, ~/.ssh, shell configs require approval
  • Per-session permission grants — approve once per tool pattern, or always-allow
  • Source file deletion requires approval — rm on code files is never auto-approved
  • Undo — /undo rolls back the last turn's file edits via file-history snapshots
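
The destructive-command check above can be pictured as a pattern scan over the outgoing command. This is a deliberately naive illustration, not the actual implementation:

```rust
// Naive illustration of destructive-command detection; AtomCode's actual
// checks are more thorough than a substring scan.
fn needs_approval(cmd: &str) -> bool {
    const DESTRUCTIVE: [&str; 3] = ["rm -rf", "git push --force", "DROP TABLE"];
    DESTRUCTIVE.iter().any(|pattern| cmd.contains(pattern))
}

fn main() {
    assert!(needs_approval("rm -rf /tmp/build"));
    assert!(!needs_approval("cargo test"));
}
```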

Installation

From Source (recommended)

git clone https://atomgit.com/atomgit_atomcode/atomcode.git
cd atomcode
cargo install --path crates/atomcode-cli --locked

Running cargo install builds the release binary and installs it to ~/.cargo/bin/atomcode on macOS / Linux, or %USERPROFILE%\.cargo\bin\atomcode.exe on Windows. Make sure that ~/.cargo/bin (or %USERPROFILE%\.cargo\bin on Windows) is in your PATH.

To compile without installing, run:

cargo build --release

and the binary will be generated at target/release/atomcode.

Requirements

  • Rust 1.75+ (for building)
  • An API key from any supported provider (or an AtomGit account for /login)

Quick Start

1. First Run

atomcode

On first run, a setup wizard will guide you through configuring your LLM provider:

Welcome to AtomCode! Let's set up your first provider.

Select provider:
  [1] Claude (Anthropic)
  [2] OpenAI
  [3] OpenAI Compatible (DeepSeek, Qwen, Zhipu, Moonshot...)
  [4] Ollama (local)

2. Configuration

Config is stored at ~/.atomcode/config.toml. A minimal single-provider setup looks like this:

default_provider = "deepseek"

[providers.deepseek]
type           = "openai"
api_key        = "sk-..."
model          = "deepseek-chat"
base_url       = "https://api.deepseek.com/v1"
context_window = 64000

You can declare multiple providers and switch between them with /model or /provider. A complete reference covering Claude / OpenAI / OpenAI-compatible endpoints (DeepSeek, GLM, SiliconFlow, OpenRouter...) / Ollama, plus the [datalog] section, lives at docs/config.example.toml — copy and edit the bits you need.
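
For example, a two-provider setup might look like the following. The [providers.claude] block is an assumption based on the built-in Claude provider; check docs/config.example.toml for the exact keys:

```toml
default_provider = "deepseek"

[providers.deepseek]
type           = "openai"
api_key        = "sk-..."
model          = "deepseek-chat"
base_url       = "https://api.deepseek.com/v1"
context_window = 64000

# Assumed shape for a second provider; see docs/config.example.toml.
[providers.claude]
type    = "claude"
api_key = "sk-ant-..."
model   = "claude-sonnet-4-5"
```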

After editing config.toml by hand, run /reload inside atomcode to pick up the changes without restarting.

3. Start Coding

# Open in your project directory
cd your-project
atomcode

# Or specify directory
atomcode -C /path/to/project

# Or specify model
atomcode --model gpt-4o

# Headless (single prompt, reply on stdout)
atomcode -p "Explain the agent loop in this repo"

# Read prompt from file
atomcode --prompt-file task.md

In headless mode, approval-required bash calls are auto-approved and logged to stderr; other approval-required tools are denied.
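
Because the reply goes to stdout and approval logs go to stderr, headless mode composes with ordinary shell redirection:

```bash
# Save the reply to a file; approval logs on stderr stay out of the output.
atomcode -p "Write a one-paragraph summary of this repo" > SUMMARY.md
```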

Then just type what you want:

> Fix the login bug where users get redirected to 404 after OAuth callback

> Add a dark mode toggle to the settings page

> Refactor the database module to use connection pooling

> Write tests for the payment processing module

Keybindings

Input

| Key | Action |
|---|---|
| Enter | Send message |
| Shift+Enter | New line |
| Esc | Clear input / Cancel stream |
| Up/Down | Browse input history |
| Tab | Accept suggestion |
| Ctrl+U | Clear line |
| Ctrl+W | Delete word |
| Ctrl+K | Delete to end of line |

Navigation

| Key | Action |
|---|---|
| Ctrl+Up/Down | Scroll chat (3 lines) |
| PageUp/PageDown | Scroll chat (page) |
| Ctrl+L | Clear conversation |
| Ctrl+Shift+C | Copy selection |
| Ctrl+C | Cancel operation (double-tap to exit) |

Slash Commands

| Command | Action |
|---|---|
| /resume | Resume or switch session |
| /session | Create a new session |
| /provider | Manage providers |
| /model | Switch model / provider |
| /login | Login with AtomGit OAuth |
| /cd | Change working directory |
| /undo | Undo last turn's edits |
| /diff | Show git diff of current changes |
| /cost | Show token usage for this session |
| /copy | Copy last AI response |
| /clear | Clear conversation |
| /issue | Create issue on AtomGit |
| /config | Edit config file |
| /status | Show login status and model info |
| /logout | Logout from AtomGit |
| /help | Show commands & shortcuts |
| /quit | Exit (or Ctrl+C ×2) |

Architecture

AtomCode is a Rust workspace with four crates:

atomcode/
  crates/
    atomcode-core/     # Headless library — no TUI dependency
      agent/           # AgentLoop: autonomous tool-use loop
      turn/            # TurnRunner, datalog, permission decider
      config/          # Config loading, provider configs
      conversation/    # Message types, windowed context
      provider/        # LlmProvider trait + OpenAI/Claude/Ollama
      tool/            # Tool trait + built-in tool implementations
      session/         # Persistent sessions
      skill.rs         # User-defined skills

    atomcode-tui/      # Terminal UI — ratatui + crossterm
      app.rs           # App state machine
      ui/              # Render: chat, input, status bar, markdown

    atomcode-cli/      # Binary entry point (TUI + headless -p mode)
      main.rs          # CLI args, first-run wizard, launch
      auth/            # AtomGit OAuth client

    atomcode-daemon/   # HTTP/SSE API server over atomcode-core

Design Principles

  1. Tech-stack agnostic — never hardcodes language-specific logic. Detects project type dynamically from descriptor files (package.json, Cargo.toml, pyproject.toml, pom.xml, etc.).

  2. Decoupled agent — AgentLoop runs as an independent async task, communicating with the TUI via channels (AgentCommand / AgentEvent). The core library has zero TUI dependencies, which is also what makes the daemon possible (see the sketch after this list).

  3. Tool safety — all destructive operations require explicit user approval. Tool failures become LLM observations, never panics.

  4. Context-aware — token-budget-aware conversation windowing, project file-tree injection, and per-turn system reminders keep the model focused without exceeding context limits.
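
A minimal sketch of principle 2, assuming tokio mpsc channels (only the AgentCommand / AgentEvent names come from the real crate; the variants and wiring here are hypothetical):

```rust
// Sketch only: requires the tokio crate; variants and wiring are hypothetical.
use tokio::sync::mpsc;

enum AgentCommand { UserMessage(String) }
enum AgentEvent { Token(String), TurnDone }

#[tokio::main]
async fn main() {
    let (cmd_tx, mut cmd_rx) = mpsc::channel::<AgentCommand>(32);
    let (evt_tx, mut evt_rx) = mpsc::channel::<AgentEvent>(32);

    // The agent loop runs as an independent task with zero TUI dependencies.
    tokio::spawn(async move {
        while let Some(AgentCommand::UserMessage(text)) = cmd_rx.recv().await {
            let _ = evt_tx.send(AgentEvent::Token(format!("echo: {text}"))).await;
            let _ = evt_tx.send(AgentEvent::TurnDone).await;
        }
    });

    // The TUI (or the daemon) only ever exchanges commands and events.
    cmd_tx.send(AgentCommand::UserMessage("hi".into())).await.unwrap();
    while let Some(evt) = evt_rx.recv().await {
        match evt {
            AgentEvent::Token(t) => println!("{t}"),
            AgentEvent::TurnDone => break,
        }
    }
}
```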

Project Instruction File

Create a .atomcode.md file in your project root to give AtomCode persistent context:

# Project Instructions

This is a Vue 3 + TypeScript project using Pinia for state management.

- Always use Composition API with `<script setup>`
- Use TailwindCSS for styling, no inline styles
- Run `npm run lint` after editing .vue/.ts files

AtomCode reads this file automatically and includes it in the system prompt.

Development

Prerequisites

  • Rust 1.75+ — install via rustup
  • Git
  • A supported LLM provider API key (for runtime testing)

Build from Source

git clone https://atomgit.com/atomgit_atomcode/atomcode.git
cd atomcode

# Debug build (fast compilation, slower runtime)
cargo build

# Release build (slower compilation, optimized binary)
cargo build --release

Run in Development

# Run the TUI directly (debug mode)
cargo run -p atomcode-cli

# With arguments
cargo run -p atomcode-cli -- -C /path/to/project
cargo run -p atomcode-cli -- --model gpt-4o

# Headless mode
cargo run -p atomcode-cli -- -p "summarize this repo"

# Daemon (HTTP API)
cargo run -p atomcode-daemon

Testing

# Run all tests
cargo test

# Run tests for a specific crate
cargo test -p atomcode-core
cargo test -p atomcode-tui

# Run a specific test
cargo test -p atomcode-core test_name

Useful Commands

# Check compilation without building
cargo check

# Format code
cargo fmt

# Run linter
cargo clippy

# Build and install to ~/.cargo/bin
cargo install --path crates/atomcode-cli

Contributing

Contributions are welcome! AtomCode is in active development.

How to Contribute

  1. Fork the repository on AtomGit
  2. Clone your fork locally:
    git clone https://atomgit.com/<your-username>/atomcode.git
    cd atomcode
  3. Create a branch for your change:
    git checkout -b feat/your-feature
    # or
    git checkout -b fix/your-bugfix
  4. Make your changes, ensure the project builds and tests pass:
    cargo build && cargo test && cargo clippy
  5. Commit with a clear message:
    git commit -m "feat: add xxx support"
  6. Push and open a Pull Request against main

Branch Naming

| Prefix | Purpose |
|---|---|
| feat/ | New feature |
| fix/ | Bug fix |
| refactor/ | Code refactoring (no behavior change) |
| docs/ | Documentation only |
| chore/ | Build, CI, tooling changes |

Guidelines

  • Follow the project's core principles — especially tech-stack neutrality (no language/framework-specific logic in the core engine; detect via probes like package.json / Cargo.toml / pom.xml and route through adapters)
  • All tool failures must be graceful — return the error as an observation to the LLM, never panic
  • Destructive operations must require user approval
  • Keep the system prompt compact (~1.5K tokens)
  • Run cargo fmt and cargo clippy before submitting

Where to Start

  • Add a new tool — implement the Tool trait in crates/atomcode-core/src/tool/ (a hypothetical sketch follows this list)
  • Add a new provider — implement LlmProvider in crates/atomcode-core/src/provider/
  • Improve the UI — rendering lives in crates/atomcode-tui/src/ui/
  • Fix bugs — check Issues for open bugs
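
As a starting point, a new tool is essentially a type that exposes a name and a fallible run method. The trait below is a hypothetical sketch, not the real signature; see crates/atomcode-core/src/tool/ for the actual Tool trait:

```rust
// Hypothetical sketch: the real Tool trait in crates/atomcode-core/src/tool/
// will differ (async, richer argument types, JSON schemas, etc.).
use std::collections::HashMap;

trait Tool {
    fn name(&self) -> &str;
    /// Failures are returned as Err strings so the agent can feed them back
    /// to the LLM as observations instead of panicking.
    fn run(&self, args: &HashMap<String, String>) -> Result<String, String>;
}

struct WordCount;

impl Tool for WordCount {
    fn name(&self) -> &str { "word_count" }
    fn run(&self, args: &HashMap<String, String>) -> Result<String, String> {
        let path = args.get("path").ok_or("missing required arg: path")?;
        let text = std::fs::read_to_string(path).map_err(|e| e.to_string())?;
        Ok(format!("{} words", text.split_whitespace().count()))
    }
}
```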

License

MIT License. See LICENSE for details.


Built with Rust, ratatui, and a lot of late nights.
