Rust agent orchestration library for tool-using, long-horizon, traceable AI systems.
Build agents directly in Rust with typed tools, streaming events, durable sessions, JSONL traces, and one runtime that can target multiple LLM providers.
Why Appam • Installation • Quickstart • Tools • Streaming & Sessions • Providers • Examples & Docs
> [!TIP]
> If Appam is useful, ⭐ star the repo. It materially helps the project reach more developers, and that means better, more reliable agents for all of us.
Appam is for agent systems that need more than a toy chat loop. It is designed for workloads where the hard parts are operational: multi-turn tool use, session continuation, event streaming, traceability, provider switching, and reliability under repeated runs.
The name
*appam* is derived from the Malayalam saying "അപ്പം പോലെ ചുടാം", which roughly means "as easy as baking an appam."
- Rust-first agent construction with `Agent::quick(...)`, `Agent::new(...)`, and `AgentBuilder`
- Typed tool system using the `#[tool]` macro, direct `Tool` implementations, or `ClosureTool`
- Streaming by default through `StreamBuilder`, `StreamConsumer`, and built-in consumers
- Durable sessions with SQLite-backed `SessionHistory` and `continue_session(...)`
- Traceable runs through built-in JSONL traces and structured stream events
- Provider portability across Anthropic, OpenAI, OpenAI Codex, OpenRouter, Vertex, Azure, and Bedrock
- Production controls for retries, continuation mechanics, reasoning, caching, rate limiting, and provider-specific tuning
Appam fits best when you want to build agents like:
- Coding agents that read files, write files, and run commands
- Research or operations agents that loop through tools over many turns
- Services that need streaming output plus session persistence
- Internal automation where runs must be inspectable after the fact
- Systems that may need to switch providers without rewriting the agent runtime
If your agent is mostly "prompt in, string out", Appam still works, but its real value shows up once orchestration, tools, and observability matter.
Add the crate and Tokio:
```sh
cargo add appam
cargo add tokio --features macros,rt-multi-thread
```

If you plan to define typed tool inputs, serde is useful too:

```sh
cargo add serde --features derive
```

Or add dependencies manually:

```toml
[dependencies]
appam = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
```

Set credentials for the provider you want to use:
| Provider | Minimum setup |
|---|---|
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| OpenAI Codex | `OPENAI_CODEX_ACCESS_TOKEN` or a cached login in `~/.appam/auth.json` |
| OpenRouter | `OPENROUTER_API_KEY` |
| Vertex | `GOOGLE_VERTEX_API_KEY`, `GOOGLE_API_KEY`, `GEMINI_API_KEY`, or `GOOGLE_VERTEX_ACCESS_TOKEN` |
| Azure OpenAI | `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_RESOURCE` |
| Azure Anthropic | `AZURE_API_KEY` or `AZURE_ANTHROPIC_API_KEY`, plus `AZURE_ANTHROPIC_BASE_URL` or `AZURE_ANTHROPIC_RESOURCE` |
| AWS Bedrock | `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or `AWS_BEARER_TOKEN_BEDROCK` |
Common model override variables:
- `ANTHROPIC_MODEL`
- `OPENAI_MODEL`
- `OPENAI_CODEX_MODEL`
- `OPENROUTER_MODEL`
- `GOOGLE_VERTEX_MODEL`
- `AZURE_OPENAI_MODEL`
- `AZURE_ANTHROPIC_MODEL`
- `AWS_BEDROCK_MODEL_ID`
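For example, a minimal Anthropic environment could look like the following config fragment. The key value is a placeholder; the variable names come from the tables above:

```sh
# Placeholder key — substitute your own.
export ANTHROPIC_API_KEY="sk-ant-..."
# Optional: override the provider's default model.
export ANTHROPIC_MODEL="claude-sonnet-4-5"
```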
The smallest useful Appam program is a Rust agent with streaming output:
```rust
use appam::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let agent = Agent::quick(
        "anthropic/claude-sonnet-4-5",
        "You are a concise Rust assistant.",
        vec![],
    )?;

    agent
        .stream("Explain ownership in Rust in three sentences.")
        .on_content(|text| print!("{text}"))
        .run()
        .await?;

    println!();
    Ok(())
}
```

`Agent::quick(...)` is the fast path:
- infers the provider from the model string
- creates a `RuntimeAgent`
- applies sensible defaults for temperature, max tokens, top-p, and retries
Examples of model strings Appam recognizes:
| Model string | Provider |
|---|---|
| `anthropic/claude-sonnet-4-5` | Anthropic |
| `openai/gpt-5.4` | OpenAI |
| `openai-codex/gpt-5.4` | OpenAI Codex |
| `openrouter/anthropic/claude-sonnet-4-5` | OpenRouter Responses |
| `vertex/gemini-2.5-flash` | Vertex |
| `gemini-2.5-pro` | Vertex |
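As a mental model for the table above, detection amounts to a prefix match on the model string, with unknown strings falling back to OpenRouter Responses. The sketch below is an illustrative re-implementation, not Appam's actual code:

```rust
/// Illustrative sketch of provider detection from a model string.
/// This is not Appam's real implementation — it only mirrors the
/// examples in the table above plus the documented fallback.
fn infer_provider(model: &str) -> &'static str {
    match model {
        m if m.starts_with("anthropic/") || m.starts_with("claude-") => "Anthropic",
        m if m.starts_with("openai-codex/") => "OpenAI Codex",
        m if m.starts_with("openai/") || m.starts_with("gpt-") => "OpenAI",
        m if m.starts_with("vertex/") || m.starts_with("gemini-") => "Vertex",
        // Explicit openrouter/... strings and anything unrecognized both
        // route to OpenRouter Responses.
        _ => "OpenRouter Responses",
    }
}

fn main() {
    assert_eq!(infer_provider("anthropic/claude-sonnet-4-5"), "Anthropic");
    assert_eq!(infer_provider("gemini-2.5-pro"), "Vertex");
    assert_eq!(infer_provider("some-unknown-model"), "OpenRouter Responses");
    println!("all provider checks passed");
}
```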
Appam's recommended tool path is native Rust.
The #[tool] macro turns normal Rust functions into runtime tools with generated JSON Schema and argument decoding:
```rust
use appam::prelude::*;

#[derive(Deserialize, Schema)]
struct AddInput {
    #[description = "First number"]
    a: i64,
    #[description = "Second number"]
    b: i64,
}

#[derive(Serialize)]
struct AddOutput {
    sum: i64,
}

#[tool(description = "Add two integers together")]
fn add(input: AddInput) -> Result<AddOutput> {
    Ok(AddOutput {
        sum: input.a + input.b,
    })
}

#[tokio::main]
async fn main() -> Result<()> {
    let agent = Agent::new("calculator", "anthropic/claude-sonnet-4-5")
        .prompt("You are a precise calculator. Always use the add tool for arithmetic.")
        .tool(add())
        .build()?;

    agent
        .stream("What is 42 + 58?")
        .on_tool_call(|name, args| println!("[tool] {name} {args}"))
        .on_tool_result(|name, result| println!("[result] {name} = {result}"))
        .on_content(|text| print!("{text}"))
        .run()
        .await?;

    println!();
    Ok(())
}
```

You can define tools in three main ways:
- `#[tool]` for the best Rust DX
- `Tool` trait implementations for full control
- `ClosureTool` for fast inline utilities
Key tooling types:
| Type | Purpose |
|---|---|
| `#[tool]` | Generate a `Tool` implementation from a function |
| `Schema` | Derive JSON Schema for typed input structs |
| `Tool` | Core trait for tool execution |
| `ToolRegistry` | Shared registry for tool lookup and execution |
| `ClosureTool` | Lightweight runtime tool defined from a closure |
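To make the `Schema` derive concrete: for the `AddInput` struct shown earlier, the generated JSON Schema would look roughly like this hand-written sketch (Appam's exact output may differ in ordering and detail):

```rust
/// A hand-written approximation of the JSON Schema that `#[derive(Schema)]`
/// would produce for AddInput. Field names and descriptions come from the
/// struct definition; the precise shape Appam emits may differ.
fn add_input_schema() -> &'static str {
    r#"{
  "type": "object",
  "properties": {
    "a": { "type": "integer", "description": "First number" },
    "b": { "type": "integer", "description": "Second number" }
  },
  "required": ["a", "b"]
}"#
}

fn main() {
    let schema = add_input_schema();
    assert!(schema.contains("\"required\""));
    println!("{schema}");
}
```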
Appam gives you three Rust-first ways to construct agents:
Use `Agent::quick(...)` when you want the smallest amount of setup.
Best for:
- scripts
- prototypes
- simple services
- smoke tests against a provider
`Agent::new(...)` is the ergonomic middle ground. It returns an `AgentBuilder` with provider detection already applied.
```rust
let agent = Agent::new("assistant", "openai/gpt-5.4")
    .prompt("You are a helpful assistant.")
    .tool(add()) // Reuse any Tool generated with #[tool]
    .build()?;
```

Use `AgentBuilder` directly when you need explicit provider configuration and runtime control:
- reasoning or thinking settings
- prompt caching
- retry behavior
- rate limiting
- traces and session history
- Azure or Bedrock-specific provider setup
Streaming is a first-class part of the runtime, not an afterthought. Every run can emit structured events for text, reasoning, tool calls, tool results, usage updates, and completion.
For most applications, use agent.stream(...) and attach handlers:
```rust
use appam::prelude::*;

let session = agent
    .stream("Analyze this repository layout")
    .on_session_started(|id| println!("session: {id}"))
    .on_content(|text| print!("{text}"))
    .on_reasoning(|text| eprint!("{text}"))
    .on_tool_call(|name, args| println!("\n[calling {name}] {args}"))
    .on_tool_result(|name, result| println!("[done {name}] {result}"))
    .on_tool_failed(|name, error| eprintln!("[failed {name}] {error}"))
    .on_error(|error| eprintln!("error: {error}"))
    .on_done(|| println!("\ncomplete"))
    .run()
    .await?;
```

If you need reusable pipelines, Appam also exposes `StreamConsumer` plus built-in consumers such as:

- `ConsoleConsumer`
- `ChannelConsumer`
- `CallbackConsumer`
- `TraceConsumer`
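If the callback-builder pattern is unfamiliar, the dependency-free toy below shows the idea behind `stream(...)`: handlers are registered on a builder and dispatched per event. All names here are invented for illustration; they are not Appam's actual types.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A toy event type standing in for Appam's richer stream events.
enum Event {
    Content(String),
    Done,
}

// A toy builder that collects optional handlers, mirroring the
// on_content / on_done style of the real stream API.
struct ToyStream {
    on_content: Option<Box<dyn Fn(&str)>>,
    on_done: Option<Box<dyn Fn()>>,
}

impl ToyStream {
    fn new() -> Self {
        Self { on_content: None, on_done: None }
    }

    fn on_content(mut self, f: impl Fn(&str) + 'static) -> Self {
        self.on_content = Some(Box::new(f));
        self
    }

    fn on_done(mut self, f: impl Fn() + 'static) -> Self {
        self.on_done = Some(Box::new(f));
        self
    }

    // Dispatch each event to the matching handler, if one was registered.
    fn run(self, events: Vec<Event>) {
        for event in events {
            match event {
                Event::Content(text) => {
                    if let Some(f) = &self.on_content {
                        f(&text);
                    }
                }
                Event::Done => {
                    if let Some(f) = &self.on_done {
                        f();
                    }
                }
            }
        }
    }
}

fn main() {
    let out = Rc::new(RefCell::new(String::new()));
    let sink = Rc::clone(&out);
    ToyStream::new()
        .on_content(move |text| sink.borrow_mut().push_str(text))
        .on_done(|| println!("done"))
        .run(vec![
            Event::Content("Hello, ".into()),
            Event::Content("world".into()),
            Event::Done,
        ]);
    assert_eq!(out.borrow().as_str(), "Hello, world");
}
```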
If you want continuation across runs, enable history on the agent:
```rust
use appam::prelude::*;

let agent = Agent::new("researcher", "anthropic/claude-sonnet-4-5")
    .prompt("You are a research assistant.")
    .enable_history()
    .history_db_path("data/sessions.db")
    .auto_save_sessions(true)
    .build()?;

let first = agent.run("What is Rust?").await?;
let second = agent
    .continue_session(&first.session_id, "How does ownership work?")
    .await?;

println!("continued session: {}", second.session_id);
```

For direct history operations, use `SessionHistory`:
```rust
use appam::prelude::*;

let mut config = HistoryConfig::default();
config.enabled = true;
config.db_path = "data/sessions.db".into();

let history = SessionHistory::new(config).await?;
let sessions = history.list_sessions().await?;
println!("stored sessions: {}", sessions.len());
```

Enable built-in traces on the agent when you want replayable, inspectable runs:
```rust
use appam::prelude::*;

let agent = Agent::new("audited-agent", "openai/gpt-5.4")
    .prompt("You are a careful assistant.")
    .enable_traces()
    .trace_format(TraceFormat::Detailed)
    .build()?;
```

This gives you structured event output that is much easier to inspect than plain console logs.
Appam exposes one orchestration surface across multiple LLM providers:
| Provider | Runtime path |
|---|---|
| Anthropic Messages API | `LlmProvider::Anthropic` or `anthropic/...` / `claude-*` model strings |
| OpenAI Responses API | `LlmProvider::OpenAI` or `openai/...` / `gpt-*` / `o1-*` / `o3-*` model strings |
| OpenAI Codex Responses API | `LlmProvider::OpenAICodex` or `openai-codex/...` model strings |
| OpenRouter Responses API | `LlmProvider::OpenRouterResponses` or `openrouter/...` model strings |
| OpenRouter Completions API | `LlmProvider::OpenRouterCompletions` |
| Google Vertex AI | `LlmProvider::Vertex` or `vertex/...` / `gemini-*` / `google/gemini-*` model strings |
| Azure OpenAI | `LlmProvider::AzureOpenAI { .. }` |
| Azure Anthropic | `LlmProvider::AzureAnthropic { .. }` |
| AWS Bedrock | `LlmProvider::Bedrock { .. }` |
Notes:
- `Agent::quick(...)` and `Agent::new(...)` auto-detect common providers from the model string.
- Unknown model strings fall back to OpenRouter Responses.
- Azure and Bedrock are best configured explicitly through `AgentBuilder`.
Appam includes the runtime controls that usually get bolted on later:
- retries with exponential backoff
- reasoning configuration for provider families that support it
- Anthropic thinking, caching, tool choice, and rate limiting
- OpenRouter provider preferences and transform controls
- OpenAI service tier and text verbosity settings
- maximum continuations and required completion tools for long-running flows
That lets you keep the orchestration layer inside Rust instead of scattering runtime rules across wrapper scripts.
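For intuition, the core of retry-with-exponential-backoff looks like the dependency-free sketch below. Appam's real retry policy is configured on the agent and will differ in details such as jitter and delay caps:

```rust
use std::thread;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay after each
/// failure (100ms, 200ms, 400ms, ...). A production policy would add
/// jitter and cap the delay; this sketch shows only the core loop.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error to the caller.
            Err(err) if attempt >= max_attempts => return Err(err),
            Err(_) => {
                thread::sleep(delay);
                delay *= 2;
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that succeeds on the third attempt.
    let mut calls = 0;
    let result = retry_with_backoff(5, || {
        calls += 1;
        if calls < 3 { Err("transient error") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
}
```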
- Getting started: installation
- Getting started: quickstart
- Getting started: first agent with tools
- Core concepts: agents
- Core concepts: tools
- Core concepts: streaming
- Core concepts: sessions
- Core concepts: providers
- Anthropic coding agent
- OpenAI coding agent
- OpenAI Codex coding agent
- OpenRouter Responses coding agent
- OpenRouter Completions coding agent
- Vertex coding agent
- Azure OpenAI coding agent
- Azure Anthropic coding agent
- Bedrock coding agent
```sh
cargo fmt
cargo clippy --all-targets --all-features
cargo test
```