RustAIgent is a command‑line coding assistant written in Rust. It leverages large language models (LLMs) to provide interactive code assistance, file and system operations, and more. By exposing rich function‑calling tools, RustAIgent enables the LLM to:
- Inspect and modify local files
- Run shell commands
- Fetch remote resources
- Compile and evaluate code snippets
- Handle batch prompts concurrently
It supports multiple AI backends (OpenAI, Anthropic/Claude, Ollama, Google Generative API), with robust retry and batching logic.
## Table of Contents

- Architecture
- Features
- Installation
- Configuration
- Usage
- Examples
- Advanced Usage
- Environment Variables
- Contribution
- Roadmap
- License
## Architecture

RustAIgent follows a modular design:

- **Core Agent**
  - Manages conversation state, tool definitions, and dispatch logic.
  - Routes requests to the configured provider (OpenAI, Claude, Ollama, or Google).
- **Function Calling Layer**
  - Defines a set of JSON‑schema–based tools (`read_file`, `write_file`, `delete_file`, `list_dir`, `run_command`, `fetch_url`, `eval_code`).
  - Automatically detects and executes tool calls from LLM responses.
- **Provider Integrations**
  - OpenAI: Chat Completions API with function calling.
  - Anthropic (Claude): Text completion via the Anthropic API.
  - Ollama: Local LLM endpoint (e.g. `localhost:11434`).
  - Google: Generative Language API (Chat Bison).
- **Reliability & Scalability**
  - Retry Mechanism: Configurable exponential backoff for failed API calls.
  - Batching: Parallel prompt processing using Tokio tasks and `send_batch_requests`.
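To make the function-calling layer concrete, here is an illustrative sketch of how a tool such as `read_file` might be described to the model. The `read_file_schema` helper and the exact field layout are assumptions (they follow the common OpenAI function-calling shape), not RustAIgent's actual definitions:

```rust
/// Hypothetical sketch: a JSON-schema function definition for `read_file`,
/// of the kind the function-calling layer advertises to the LLM.
fn read_file_schema() -> String {
    r#"{
  "name": "read_file",
  "description": "Read a local file and return its contents",
  "parameters": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "Path of the file to read" }
    },
    "required": ["path"]
  }
}"#
    .to_string()
}

fn main() {
    // Print the schema as it would be sent alongside the chat request.
    println!("{}", read_file_schema());
}
```

When the model responds with a tool call naming `read_file` and a `path` argument, the agent executes it locally and feeds the result back into the conversation.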
## Features

- File I/O: `read_file(path)`, `write_file(path, content)`, `delete_file(path)`
- Filesystem Operations: `list_dir(path)`
- Shell Execution: `run_command(command)`
- HTTP Fetching: `fetch_url(url)`
- Code Evaluation: `eval_code(code)` (stub for secure compilation)
- Multi-Provider: Choose between `openai`, `claude`, `ollama`, and `google`
- Retries & Backoff: Controlled via `RETRY_COUNT` and `BACKOFF_BASE_MS`
- Batch Requests: Process multiple prompts concurrently
- Customizable: `MODEL_NAME`, `MAX_TOKENS`, and `TEMPERATURE` via env vars
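The file and shell tools above ultimately map onto standard-library calls. The following `dispatch` function is a simplified, hypothetical sketch of that routing (the real agent's dispatch logic, argument parsing, and error handling may differ):

```rust
use std::fs;
use std::process::Command;

/// Hypothetical sketch: route a parsed tool call to a local operation.
fn dispatch(tool: &str, arg: &str) -> std::io::Result<String> {
    match tool {
        // read_file(path): return the file's contents.
        "read_file" => fs::read_to_string(arg),
        // list_dir(path): return a comma-separated listing.
        "list_dir" => {
            let mut names = Vec::new();
            for entry in fs::read_dir(arg)? {
                names.push(entry?.file_name().to_string_lossy().into_owned());
            }
            Ok(names.join(", "))
        }
        // run_command(command): execute via the shell, capture stdout.
        "run_command" => {
            let out = Command::new("sh").arg("-c").arg(arg).output()?;
            Ok(String::from_utf8_lossy(&out.stdout).into_owned())
        }
        other => Ok(format!("unknown tool: {other}")),
    }
}

fn main() -> std::io::Result<()> {
    println!("{}", dispatch("run_command", "echo hello")?);
    Ok(())
}
```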
## Installation

- Ensure you have Rust (1.60+) installed. If not, install it via rustup.
- Clone the repository:

  ```sh
  git clone https://github.com/makalin/RustAIgent.git
  cd RustAIgent
  ```

- Build in release mode:

  ```sh
  cargo build --release
  ```

The binary will be at `./target/release/RustAIgent`.
## Configuration

RustAIgent reads configuration from environment variables (or a `.env` file). See Environment Variables.
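As a starting point, a minimal `.env` might look like the following; the values shown are the documented defaults, and `your-key-here` is a placeholder for your actual key:

```env
OPENAI_API_KEY=your-key-here
API_PROVIDER=openai
MODEL_NAME=gpt-4o-mini
MAX_TOKENS=1024
TEMPERATURE=0.7
RETRY_COUNT=3
BACKOFF_BASE_MS=500
```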
## Usage

Run the agent and interact via standard input:

```sh
./target/release/RustAIgent
```

Switch providers on the fly:

```sh
API_PROVIDER=claude ./target/release/RustAIgent
API_PROVIDER=google ./target/release/RustAIgent
```

During the session, prefix commands to invoke tools explicitly, or let the model choose automatically:

```
You: read_file("config.toml")
You: run_command("cargo fmt -- --check")
You: fetch_url("https://example.com/data.json")
```
## Examples

- Reading a file:

  ```
  You: read_file("src/main.rs")
  RustAIgent: [TOOL] -- Contents of src/main.rs...
  ```

- Listing a directory:

  ```
  You: list_dir(".")
  RustAIgent: [TOOL] -- src, Cargo.toml, README.md, ...
  ```

- Writing to a file:

  ```
  You: write_file("note.txt", "This is a test.")
  RustAIgent: File written successfully.
  ```

- Running a shell command:

  ```
  You: run_command("ls -la target/release")
  RustAIgent: target binary, debug symbols, etc.
  ```
## Advanced Usage

Use `send_batch_requests` to handle multiple prompts concurrently in code:

```rust
let prompts = vec!["List files in /etc".into(), "Fetch Rust docs".into()];
let responses = agent.send_batch_requests(prompts).await?;
for msg in responses {
    println!("Response: {}", msg.content);
}
```

Adjust retry parameters in `.env`:

```env
RETRY_COUNT=5
BACKOFF_BASE_MS=200
```

Tune model settings the same way:

```env
MODEL_NAME=gpt-4o-mini
TEMPERATURE=0.3
MAX_TOKENS=512
```

## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | API key for OpenAI | required |
| `GOOGLE_API_KEY` | API key for Google Generative API | optional |
| `API_PROVIDER` | `openai`, `claude`, `ollama`, or `google` | `openai` |
| `MODEL_NAME` | Model identifier for the provider | `gpt-4o-mini` |
| `MAX_TOKENS` | Maximum tokens per completion | `1024` |
| `TEMPERATURE` | Sampling temperature (0.0–1.0) | `0.7` |
| `RETRY_COUNT` | Number of retry attempts on failure | `3` |
| `BACKOFF_BASE_MS` | Base backoff duration in ms | `500` |
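The interaction between `RETRY_COUNT` and `BACKOFF_BASE_MS` can be sketched as below. This is an illustrative, synchronous stand-in (the real agent wraps async API calls), the `retry` helper is hypothetical, and the plain doubling schedule without jitter is an assumption consistent with jitter being listed on the roadmap:

```rust
use std::{env, thread, time::Duration};

/// Sketch of exponential backoff driven by RETRY_COUNT / BACKOFF_BASE_MS.
/// Retries `op` up to RETRY_COUNT extra times, doubling the delay each time.
fn retry<T, E>(mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let retries: u32 = env::var("RETRY_COUNT")
        .ok().and_then(|v| v.parse().ok()).unwrap_or(3);
    let base_ms: u64 = env::var("BACKOFF_BASE_MS")
        .ok().and_then(|v| v.parse().ok()).unwrap_or(500);
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of retries: surface the last error.
            Err(e) if attempt >= retries => return Err(e),
            Err(_) => {
                // Delay doubles each attempt: base, 2*base, 4*base, ...
                thread::sleep(Duration::from_millis(base_ms << attempt));
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice, then succeeds.
    let mut calls = 0;
    let result: Result<&str, &str> = retry(|| {
        calls += 1;
        if calls < 3 { Err("transient failure") } else { Ok("success") }
    });
    println!("{result:?} after {calls} calls");
}
```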
## Contribution

- Fork this repository
- Create a new branch: `git checkout -b feature/awesome`
- Commit your changes: `git commit -am 'Add awesome feature'`
- Push to the branch: `git push origin feature/awesome`
- Create a Pull Request
Please follow the Rust style guidelines and include tests where appropriate.
## Roadmap

- Secure sandbox for `eval_code`
- Enhanced function parameter validation
- Jittered exponential backoff
- Support for additional LLM providers
- Plugin architecture for custom tools
## License

This project is licensed under the MIT License. See the LICENSE file for details.