# Agentic

A composable AI agent runtime for Elixir. Provides a complete agent loop with skills, working memory, knowledge persistence, and tool use. Drop it into any Elixir project to get a fully functional AI agent.
## Features

- Composable Pipeline — Middleware-style stage pipeline lets you mix and match agent behaviors
- Multiple Profiles — Eight built-in profiles: `agentic`, `agentic_planned`, `turn_by_turn`, `conversational`, `claude_code`, `opencode`, `codex`, `acp`
- Tool Execution — Built-in file operations, bash, subagent delegation, and extensibility for custom tools
- Tool Activation — Lazy tool discovery and activation with budget-limited promotion to first-class tools
- Skills System — YAML-defined skills that extend agent capabilities at runtime
- Working Memory — Context keeper with fact extraction and commitment detection
- Persistence — Transcript, plan, and knowledge persistence with pluggable backends
- Model Router — Manual tier-based or automatic analysis-based model selection
- Strategy Layer — Pluggable orchestration strategies that control run preparation and rerun decisions
- Subagent Delegation — Bounded subagent spawning for parallelizable tasks
- Protocol System — Pluggable agent protocols (LLM API, Claude Code CLI, OpenCode CLI, ACP)
- Cost Controls — Per-session cost limits, token usage tracking, and circuit breakers
- Context Compression — Two-tier compression (truncation + LLM summarization) for long conversations
- Telemetry — Full event instrumentation via Telemetry
## Installation

Add Agentic to your dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:agentic, "~> 0.2.0"}
  ]
end
```

Agentic uses Recollect for knowledge persistence. Recollect supports two database backends:
### Option A: libSQL (recommended for new projects)

Single-file SQLite with native vector support. Zero configuration.
```elixir
def deps do
  [
    {:agentic, "~> 0.2.0"},
    {:ecto_libsql, "~> 0.9"}
  ]
end
```

Configure Recollect:
```elixir
config :recollect,
  database_adapter: Recollect.DatabaseAdapter.LibSQL,
  repo: MyApp.Repo,
  embedding: [
    provider: Recollect.Embedding.OpenRouter,
    dimensions: 768
  ]
```

### Option B: PostgreSQL (for existing installations)

Traditional server-based database with the pgvector extension.
```elixir
def deps do
  [
    {:agentic, "~> 0.2.0"},
    {:postgrex, "~> 0.19"},
    {:pgvector, "~> 0.3"}
  ]
end
```

Configure Recollect:
```elixir
config :recollect,
  database_adapter: Recollect.DatabaseAdapter.Postgres,
  repo: MyApp.Repo,
  embedding: [
    provider: Recollect.Embedding.OpenRouter,
    dimensions: 1536
  ]
```

## Quick Start

```elixir
result = Agentic.run(
  prompt: "Create a README.md file for my project",
  workspace: "/path/to/your/project",
  callbacks: %{
    llm_chat: fn params -> MyLLM.chat(params) end
  }
)
# => {:ok, %{text: response, cost: 0.05, tokens: 150, steps: 3}}
```

Resume a previous session:
```elixir
{:ok, result} = Agentic.resume(
  session_id: "agx-...",
  workspace: "/path/to/your/project",
  callbacks: %{llm_chat: &my_llm/1}
)
```

Scaffold a new workspace:

```elixir
:ok = Agentic.new_workspace("/path/to/new/project")
```

## Architecture

Agentic uses a stage pipeline architecture. Each stage wraps the next, receiving the context and a `next` function to call downstream.
The loop does not use a step counter. `ModeRouter` decides loop/terminate/compact based on the `(mode, phase, stop_reason)` triple; `max_turns` is a safety rail only.
- `Engine` builds the pipeline from stage modules
- `Profile` maps named profiles to stage lists and configuration
- `Phase` is a pure-function state machine with validated transitions
- `Context` is the loop state passed through every stage
```
ContextGuard → ProgressInjector → LLMCall → ModeRouter → TranscriptRecorder → ToolExecutor → CommitmentGate
```
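Agentic's stage behaviour is not spelled out above, so the callback name and context shape in this sketch are assumptions; it only illustrates the middleware pattern just described (each stage receives the context and a `next` function to call downstream):

```elixir
defmodule MyApp.TimingStage do
  # Hypothetical stage sketch. The call/2 name and the opaque `context`
  # value are assumptions; the shape follows the middleware pattern:
  # do work, call `next` to run the downstream stages, then return
  # the (possibly updated) context.
  def call(context, next) do
    started = System.monotonic_time(:millisecond)
    context = next.(context)
    elapsed = System.monotonic_time(:millisecond) - started
    IO.puts("downstream stages took #{elapsed}ms")
    context
  end
end
```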
## Profiles

| Profile | Behavior |
|---|---|
| `:agentic` | Full pipeline with tool use, progress tracking, context management (default) |
| `:agentic_planned` | Two-phase: plan → execute with tracking and verification |
| `:turn_by_turn` | LLM proposes changes, human approves before execution |
| `:conversational` | Simple call-respond, no tools |
| `:claude_code` | Claude Code CLI agent via local agent protocol |
| `:opencode` | OpenCode CLI agent via local agent protocol |
| `:codex` | Codex CLI agent via local agent protocol |
| `:acp` | Agent Client Protocol (JSON-RPC 2.0 over stdio) |
## Callbacks

The callbacks map connects Agentic to your LLM provider and external systems:

- `:llm_chat` — `(params) -> {:ok, response} | {:error, term}`
- `:execute_tool` — custom tool handler (defaults to built-in tools)
- `:on_event` — `(event, ctx) -> :ok` for UI streaming
- `:on_response_facts` — `(ctx, text) -> :ok` for custom fact processing
- `:on_tool_facts` — `(ws_id, name, result, turn) -> :ok`
- `:on_persist_turn` — `(ctx, text) -> :ok`
- `:get_tool_schema` — `(name) -> {:ok, schema} | {:error, reason}`
- `:get_secret` — `(service, key) -> {:ok, value} | {:error, reason}`
- `:knowledge_search` — `(query, opts) -> {:ok, entries} | {:error, term}`
- `:knowledge_create` — `(params) -> {:ok, entry} | {:error, term}`
- `:knowledge_recent` — `(scope_id) -> {:ok, entries} | {:error, term}`
- `:search_tools` — `(query, opts) -> [result]`
- `:execute_external_tool` — `(name, args, ctx) -> {:ok, result} | {:error, reason}`
## Built-in Tools

Agentic ships with built-in tools:

File and shell:
- `read_file` — Read file contents with optional line range
- `write_file` — Create or overwrite files
- `edit_file` — Apply targeted edits by exact text match
- `list_files` — Find files by glob pattern
- `bash` — Execute shell commands in the workspace

Delegation:
- `delegate_task` — Delegate to bounded subagents for parallelizable work

Skills:
- `skill_list` — List installed skills
- `skill_read` — Read skill instructions
- `skill_search` — Search for skills from public registries
- `skill_info` — Fetch detailed info about a skill before installing
- `skill_install` — Install a skill from GitHub
- `skill_remove` — Remove an installed skill
- `skill_analyze` — Analyze a skill's model tier requirements

Memory:
- `memory_query` — Search the knowledge store
- `memory_write` — Persist content to the knowledge store
- `memory_note` — Store key-value pairs in fast in-process working memory
- `memory_recall` — Search in-process working memory

External tools:
- `search_tools` — Discover available external tools
- `use_tool` — Execute an external tool (MCP, OpenAPI, integration)
- `get_tool_schema` — Get the full input schema for an external tool
- `activate_tool` — Promote an external tool to first-class status
- `deactivate_tool` — Remove an activated tool

Extend via the skills system, tool activation, or custom callbacks.
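Custom tools can be wired in through the `:execute_tool` callback. Its exact signature is not documented above, so the `(name, args, ctx)` shape below is an assumption modeled on the `:execute_external_tool` callback:

```elixir
# Hypothetical custom tool handler; the (name, args, ctx) signature is
# an assumption. Unknown tools fall through with an error tuple so the
# caller can decide how to handle them.
execute_tool = fn
  "word_count", %{"path" => path}, _ctx ->
    case File.read(path) do
      {:ok, text} -> {:ok, %{words: length(String.split(text))}}
      {:error, reason} -> {:error, reason}
    end

  name, _args, _ctx ->
    {:error, {:unknown_tool, name}}
end
```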
## Persistence

- Transcript — Session history with event streaming
- Plan — Structured task plans (for `:agentic_planned` mode)
- Knowledge — Persistent fact storage with search (`:local` file-based or `:recollect` graph)
- Context — Workspace context with pluggable backends

All backends have a `:local` file-based implementation.
## Options

```elixir
Agentic.run(
  prompt: "...",
  workspace: "/path",
  callbacks: %{llm_chat: &my_llm/1},
  profile: :agentic,                 # which profile to use
  mode: :agentic,                    # shorthand for profile
  system_prompt: "...",              # custom system prompt
  history: [...],                    # prior messages
  model_tier: :primary,              # which model tier to use (manual mode)
  model_selection_mode: :manual,     # :manual (tier-based) or :auto (analysis-based)
  model_preference: :optimize_price, # :optimize_price or :optimize_speed (auto mode)
  model_filter: nil,                 # :free_only or nil (auto mode)
  strategy: :default,                # orchestration strategy
  strategy_opts: [],                 # extra opts for strategy init
  cost_limit: 5.0,                   # per-session cost limit in USD
  session_id: "agx-...",             # custom session ID
  user_id: "user-123",               # for API key resolution
  plan: %{...}                       # pre-built plan (for agentic_planned)
)
```

## Development

```shell
mix deps.get # Install dependencies
mix setup    # Setup database
mix test     # Run tests
mix format   # Format code
mix dialyzer # Type check
```

## License

BSD-3-Clause — See LICENSE for details.
Contributions welcome. Please ensure tests pass and dialyzer is clean before submitting PRs.