
# A0Rust — Agent Zero in Rust 🦀


A production-ready Rust port of Agent Zero — the autonomous AI agent framework. Built for performance, safety, and real-world deployment.

> "The same powerful autonomous agent loop — now with Rust's speed, safety, and zero-cost abstractions."


## ✨ Features

- 🔄 Full Agent Loop — Monologue-based reasoning with streaming events and intervention support
- 🤖 Multi-Provider LLM — OpenAI, Anthropic, OpenRouter, Groq, Ollama, Venice + streaming SSE
- 🧠 Real Embeddings — OpenAI & Ollama embedding APIs with Qdrant or in-memory vector stores
- 🛠️ 17+ Production Tools — Code execution, search, memory, browser, MCP, document query, vision, and more
- ⏰ Scheduler Engine — Cron-based task scheduling with AsyncCron (recurring, planned, adhoc tasks)
- 🌐 Full WebUI — Interactive chat with MetaMask login, markdown rendering, syntax highlighting
- 📡 WebSocket Streaming — Real-time agent events, tool calls, thoughts, and intervention
- 💾 Smart History — Topic-based compression with token budget management and LLM summarization
- 🔥 Prompt Hot-Reload — Mtime-based caching with automatic invalidation when templates change
- 🧩 Extension System — Built-in hooks: RecallMemories, LogToolCalls + custom extensions
- 📝 Framework Messages — 19 template-driven error/summary/nudge messages injected at agent loop points
- 🔗 MCP Support — Model Context Protocol server & client using official rmcp Rust SDK
- 🌏 Browser Agent — Web navigation and content extraction via reqwest + scraper
- 📦 Subordinate Delegation — Factory pattern for spawning child agents with session management
- 🔒 Thread-Safe — DashMap, Arc, RwLock for safe concurrent access
- 🖥️ Interactive CLI — REPL with slash commands, streaming output, and multi-line input
- 🧪 290+ Tests — Comprehensive test coverage across all crates
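
The prompt hot-reload feature above relies on mtime-based cache invalidation. A minimal self-contained sketch of that idea, using only the standard library (the `PromptCache` type and its methods are illustrative, not the actual a0-core implementation):

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::time::SystemTime;

/// Illustrative mtime-based template cache: a file is re-read only when its
/// modification time on disk has changed.
struct PromptCache {
    entries: HashMap<PathBuf, (SystemTime, String)>,
}

impl PromptCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Return the template body, re-reading the file only when its
    /// modification time differs from the cached one.
    fn get(&mut self, path: &Path) -> std::io::Result<&str> {
        let mtime = fs::metadata(path)?.modified()?;
        match self.entries.entry(path.to_path_buf()) {
            // Cache hit: mtime unchanged, keep the cached body.
            Entry::Occupied(e) if e.get().0 == mtime => {}
            // Stale entry: the file changed on disk, reload it.
            Entry::Occupied(mut e) => {
                e.insert((mtime, fs::read_to_string(path)?));
            }
            // First access: load and cache.
            Entry::Vacant(v) => {
                v.insert((mtime, fs::read_to_string(path)?));
            }
        }
        Ok(&self.entries[path].1)
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("a0_prompt_demo.md");
    fs::write(&path, "v1")?;
    let mut cache = PromptCache::new();
    assert_eq!(cache.get(&path)?, "v1");
    // Let the filesystem clock tick so the rewrite gets a new mtime.
    std::thread::sleep(std::time::Duration::from_millis(50));
    fs::write(&path, "v2")?;
    // The stale entry is invalidated automatically.
    assert_eq!(cache.get(&path)?, "v2");
    Ok(())
}
```

Keying invalidation on mtime keeps the hot path to a single `stat` call per lookup, which is why templates can be edited without restarting the agent.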

## 📚 Documentation

- CLI Tutorial — Complete guide to using A0Rust from the command line
- API Reference — HTTP/WebSocket API documentation
- Benchmarks — Criterion benchmarks + Python comparison

## 🏗️ Architecture

```text
┌─────────────────────────────────────────────────────────┐
│                     a0-cli (entry)                       │
│         Clap CLI + Interactive REPL + Tracing            │
├─────────────────────────────────────────────────────────┤
│                     a0-api (server)                      │
│      Axum HTTP/WS + SIWE Auth + JWT Sessions            │
├─────────────────────────────────────────────────────────┤
│                     a0-tools (17+ tools)                 │
│  response │ code_exec │ search │ memory │ browser │ MCP │ scheduler │
├─────────────────────────────────────────────────────────┤
│                     a0-core (agent loop)                 │
│  Agent │ History │ Prompt │ Extensions │ Framework Msgs │
├──────────────────────┬──────────────────────────────────┤
│    a0-models (LLM)   │       a0-helpers (foundation)    │
│  Chat │ Embed │ SSE  │  Extensions │ Cache │ Config │.. │
└──────────────────────┴──────────────────────────────────┘
```

### Crate Overview

| Crate | Description |
|-------|-------------|
| `a0-helpers` | Extension system, caching, config, crypto, dirty JSON parser, file ops, rate limiting |
| `a0-models` | LLM chat completions & embeddings (OpenAI/Ollama), streaming SSE, multi-provider support |
| `a0-core` | Agent monologue loop, history compression, prompt hot-reload, framework messages, context extras |
| `a0-tools` | 17+ tool implementations, VectorStore (in-memory + Qdrant), MCP server/client, browser agent, scheduler engine |
| `a0-api` | Axum HTTP/WebSocket server, SIWE auth, JWT sessions, memory CRUD endpoints |
| `a0-cli` | Interactive REPL, one-shot chat, serve mode, config file loading |
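
The monologue loop at the heart of a0-core can be pictured with a small self-contained sketch: the agent keeps calling the model and feeding tool output back into its history until the `response` tool ends the loop. Every type and the stub model below are illustrative, not the actual a0-core API:

```rust
/// One step of the monologue: either a tool call or the final response.
enum Step {
    Tool { name: String, output: String },
    Response(String),
}

/// Stub model for the sketch: calls `search_engine` once, then answers.
fn fake_llm(history: &[String]) -> Step {
    if history.iter().any(|m| m.starts_with("tool search_engine")) {
        Step::Response("Rust ownership moves values.".into())
    } else {
        Step::Tool {
            name: "search_engine".into(),
            output: "docs on ownership".into(),
        }
    }
}

/// The loop itself: tool results are appended to history so the next model
/// call sees them; a Response terminates the loop.
fn monologue(task: &str) -> String {
    let mut history = vec![format!("user: {task}")];
    loop {
        match fake_llm(&history) {
            Step::Tool { name, output } => {
                history.push(format!("tool {name}: {output}"));
            }
            Step::Response(answer) => return answer,
        }
    }
}

fn main() {
    assert_eq!(
        monologue("Explain Rust ownership"),
        "Rust ownership moves values."
    );
}
```

The real loop adds streaming events, intervention checks, and history compression at each iteration, but the terminate-on-`response` shape is the same.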

## 🚀 Quick Start

### Prerequisites

- Rust 1.88+
- An LLM API key (OpenAI, Anthropic, Venice, etc.) or local Ollama

### Build & Run

```bash
# Clone the repository
git clone https://github.com/KBryan/A0Rust.git
cd A0Rust

# Build
cargo build --release

# Interactive REPL (default mode)
cargo run --release

# With OpenAI
cargo run --release -- --env CHAT_MODEL=openai/gpt-4o OPENAI_API_KEY=sk-...

# With Ollama (local, no API key)
cargo run --release -- --env CHAT_MODEL=ollama/llama3.1 API_BASE=http://localhost:11434/v1

# Start API server with WebUI
cargo run --release -- serve --port 8080
# Then open http://localhost:8080 in your browser

# One-shot chat
cargo run --release -- chat "Explain Rust ownership"

# With Qdrant for persistent memory
QDRANT_URL=http://localhost:6334 cargo run --release
```
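
The `CHAT_MODEL` values above follow a `provider/model_name` format. A minimal sketch of how such a spec splits into its two parts (illustrative only, not the actual a0-models parser):

```rust
/// Split a "provider/model" spec like "openai/gpt-4o" into (provider, model).
/// Returns None if either half is missing. Hypothetical helper for
/// illustration; the real config loader may handle more cases.
fn parse_model_spec(spec: &str) -> Option<(&str, &str)> {
    let (provider, model) = spec.split_once('/')?;
    if provider.is_empty() || model.is_empty() {
        return None;
    }
    Some((provider, model))
}

fn main() {
    assert_eq!(parse_model_spec("openai/gpt-4o"), Some(("openai", "gpt-4o")));
    assert_eq!(parse_model_spec("ollama/llama3.1"), Some(("ollama", "llama3.1")));
    // A bare model name with no provider is rejected.
    assert_eq!(parse_model_spec("gpt-4o"), None);
}
```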

### Configuration

Create a `.env` file or copy from `.env.example`:

```bash
# LLM Provider (format: provider/model_name)
CHAT_MODEL=openai/gpt-4o

# API Keys
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# OPENROUTER_API_KEY=sk-or-...
# GROQ_API_KEY=gsk_...
# VENICE_API_KEY=vce_...

# ─── VeniceAI Configuration ─────────────────────────────────────────────
# Venice provides private, uncensored AI inference.
# CHAT_MODEL=venice/llama-3.1-70b
# VENICE_API_KEY=vce_...
# API_BASE=https://api.venice.ai/api/v1

# Embeddings
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
# Or use Ollama for local embeddings
# EMBEDDING_PROVIDER=ollama
# EMBEDDING_MODEL=nomic-embed-text
# Or use Venice for embeddings
# EMBEDDING_PROVIDER=venice
# EMBEDDING_MODEL=venice-embed-v1

# Vector Store
# QDRANT_URL=http://localhost:6334

# Agent
AGENT_PROFILE=default
RUST_LOG=warn
SERVER_PORT=8080
```
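
When no Qdrant URL is configured, memories are ranked in process by comparing embeddings with cosine similarity. A self-contained sketch of that ranking step (illustrative only; the real a0-tools store also handles metadata, IDs, and thresholds):

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Return the stored text whose embedding is most similar to the query.
fn top_match<'a>(query: &[f32], memories: &'a [(&'a str, Vec<f32>)]) -> &'a str {
    memories
        .iter()
        .max_by(|x, y| cosine(query, &x.1).total_cmp(&cosine(query, &y.1)))
        .map(|(text, _)| *text)
        .unwrap()
}

fn main() {
    // Toy 2-dimensional "embeddings"; real ones have hundreds of dimensions.
    let memories = vec![
        ("cats purr when content", vec![0.0, 1.0]),
        ("ownership moves values", vec![0.9, 0.1]),
    ];
    let query = vec![1.0, 0.0]; // pretend embedding of "what is ownership?"
    assert_eq!(top_match(&query, &memories), "ownership moves values");
}
```

Qdrant performs the same similarity search server-side with an index, which is why it is the better choice for large, persistent memory sets.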

### Connect with MetaMask

The WebUI requires Web3 wallet authentication:

  1. Install MetaMask browser extension
  2. Open http://localhost:8080 in your browser
  3. Click Connect Wallet and sign the SIWE message
  4. Start chatting with the agent

## 🛠️ Tools

| Tool | Description |
|------|-------------|
| `response` | Ends the agent loop with a final answer to the user |
| `code_execution_tool` | Runs terminal, Python, or Node.js code with async subprocess management |
| `search_engine` | Web search via DuckDuckGo with HTML parsing |
| `document_query` | Reads local files or remote URLs, supports multi-document comparison |
| `memory_load` | Searches memories by semantic similarity (Qdrant or in-memory) |
| `memory_save` | Saves text with metadata to the vector store |
| `memory_delete` | Removes memories by ID |
| `memory_forget` | Removes memories matching a query above a similarity threshold |
| `call_subordinate` | Delegates tasks to child agents with factory pattern and session management |
| `browser_agent` | Web navigation and content extraction (reqwest + scraper) |
| `notify_user` | Sends out-of-band notifications (info, success, warning, error, progress) |
| `wait` | Pauses execution for a duration or until a timestamp |
| `scheduler` | Manages scheduled (cron), planned, and adhoc tasks with full execution engine |
| `scheduler_engine` | Background execution engine using cron_tab for recurring and one-time tasks |
| `text_editor` | Read, write, and patch files with line-numbered operations |
| `a2a_chat` | Chat with remote FastA2A-compatible agents |
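
A tool's contract can be sketched as a trait: each tool has a name the model calls it by, and its result either feeds back into the loop or terminates it. The names below are hypothetical; the real a0-tools trait is async and carries agent context and streaming events:

```rust
/// Hypothetical tool contract for illustration.
trait Tool {
    fn name(&self) -> &'static str;
    /// Returns (output, ends_loop).
    fn execute(&self, args: &str) -> (String, bool);
}

/// `response` is the one tool that terminates the monologue loop.
struct ResponseTool;

impl Tool for ResponseTool {
    fn name(&self) -> &'static str {
        "response"
    }
    fn execute(&self, args: &str) -> (String, bool) {
        (args.to_string(), true)
    }
}

/// A non-terminating tool: its output is fed back into the agent's history.
struct WaitTool;

impl Tool for WaitTool {
    fn name(&self) -> &'static str {
        "wait"
    }
    fn execute(&self, _args: &str) -> (String, bool) {
        ("waited".to_string(), false)
    }
}

fn main() {
    // Dispatch by name, the way a registry might route a model's tool call.
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(WaitTool), Box::new(ResponseTool)];
    let tool = tools.iter().find(|t| t.name() == "response").unwrap();
    let (output, ends_loop) = tool.execute("All done.");
    assert!(ends_loop);
    assert_eq!(output, "All done.");
}
```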

### MCP Tools

External MCP servers are configured in `config/default.toml`:

```toml
[mcp]

[mcp.servers.filesystem]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]

[mcp.servers.fetch]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-fetch"]
```

## 🧪 Testing & Benchmarks

```bash
# Run all tests
cargo test

# Run tests for a specific crate
cargo test -p a0-tools

# Run benchmarks
cargo bench --bench core_benchmarks

# Python comparison benchmarks
python benches/python_comparison.py
```

## 🐳 Docker

Multi-stage build with a Kali Linux runtime for security testing:

```bash
# Build the image
docker build -t a0rust .

# Run with OpenAI
docker run -p 8080:8080 \
  -e CHAT_MODEL=openai/gpt-4o \
  -e OPENAI_API_KEY=sk-... \
  a0rust

# Run with VeniceAI
docker run -p 8080:8080 \
  -e CHAT_MODEL=venice/llama-3.1-70b \
  -e VENICE_API_KEY=vce_... \
  a0rust

# Run with persistent data
docker run -p 8080:8080 \
  -v a0rust-data:/app/data \
  -e CHAT_MODEL=openai/gpt-4o \
  -e OPENAI_API_KEY=sk-... \
  a0rust
```

Pre-installed security tools: nmap, sqlmap, hydra, john, hashcat, nikto, gobuster, ffuf, and more.

---

## 📊 Parity with Python Agent Zero

| Feature | Python v1.7 | A0Rust | Status |
|---------|:-----------:|:------:|:------:|
| Agent monologue loop | ✅ | ✅ | Complete |
| LLM integration | ✅ | ✅ | Complete |
| Code execution | ✅ | ✅ | Complete |
| Search engine | ✅ | ✅ | Complete |
| Memory (vector DB) | ✅ | ✅ | Qdrant + in-memory |
| Embeddings | ✅ | ✅ | OpenAI/Ollama APIs |
| Browser agent | ✅ Playwright | ✅ reqwest+scraper | Basic |
| WebUI | ✅ | ✅ | SPA + Web3 auth |
| WebSocket | ✅ | ✅ | Real-time events |
| Framework messages | ✅ | ✅ | 19 templates |
| History compression | ✅ | ✅ | Topic + Bulk |
| Prompt hot-reload | ✅ | ✅ | Mtime cache |
| Subordinate delegation | ✅ | ✅ | Factory + sessions |
| MCP support | ✅ | ✅ | rmcp SDK |
| Scheduler execution | ✅ | ✅ | AsyncCron engine + REST API |
| Browser screenshots | ✅ Playwright | ❌ | Needs headless Chrome |
| Knowledge indexing | ✅ | ⚠️ | Checksum tracking pending |

---

## 📜 License

Apache-2.0 — see [LICENSE](LICENSE) for details.

## About

Built from the ashes of WizAI. Inspired by Agent 0 and ERC8001.
