A high-performance, zero-trust AI agent built in Rust. Single binary, dual mode: interactive CLI assistant and Concourse CI resource type.
- Zero-Trust Sandbox — ALL tool executions run through 5 isolation layers (best-effort; the runtime applies these protections when available):
  - cgroups v2 resource limits (`systemd-run --scope`) — memory/PID limits
  - Network isolation (`unshare --user --net` or `rune-net-guard`) — namespace-based isolation or domain-allowlist filtering
  - Seccomp BPF syscall filter (`rune-seccomp`) — syscall filtering
  - Landlock filesystem restriction (`rune-landlock`) — file access limits
  - DNS / Domain allowlist — selective outbound network access (configured via `allowed_domains`)
- Tool Calling — 6 built-in tools: `read_file`, `write_file`, `list_dir`, `execute_cmd`, `fetch_url`, `inspect_process`
- Command Policy — three modes: `confirm` (interactive Y/n/A), `allowlist` (whitelist only), `unrestricted`
- Skills System — load contextual abilities via `@skill_name` in prompts
- Provider Registry — GitHub Copilot (auto token refresh), OpenRouter, Google Gemini, any OpenAI-compatible endpoint
- MCP Client — stdio-based JSON-RPC client for Model Context Protocol servers
- Streaming Output — interactive mode displays tokens incrementally as they arrive
- Parallel Tool Calls — multiple independent tool calls execute concurrently
- Context Window Management — auto-compact when context exceeds 85% of the model limit
- Vision / Image Input — multi-modal messages with text + images (base64 or URL)
- Native Gemini Provider — Google Gemini API with automatic message format conversion
- Wildcard Domains — `*.github.com` in `allowed_domains` matches all subdomains
- Concourse CI — the same binary acts as a resource type (`check`, `in`, `out`) via symlink
- Trace Recording — JSON trace files with sensitive info redaction
- JSON Output — `--json` flag for machine-readable output
- Non-Interactive Pipe Mode — piped stdin runs once and exits; no interactive prompt loop
```bash
# Build (produces 4 binaries: rune, rune-seccomp, rune-landlock, rune-net-guard)
cargo build --release
# Interactive setup
./target/release/rune init
# Or configure manually
mkdir -p ~/.rune
cat > ~/.rune/rune.toml << 'EOF'
model = "gpt-4o"
api_key = "ghu_your_github_copilot_pat"
skills_dir = "./skills"
[policy]
mode = "confirm"
allowed_domains = ["wttr.in"]
allowed_commands = ["ls", "cat", "head", "ps", "echo", "uname", "free", "df", "date", "hostname"]
EOF
# Run
./target/release/rune
```

Rune is available as a container image at `ghcr.io/fourdollars/rune`:
```bash
# First-time setup — creates ~/.rune/rune.toml interactively
docker run --rm -it -v ~/.rune:/home/rune/.rune ghcr.io/fourdollars/rune init
# Interactive mode (mount config)
docker run --rm -it -v ~/.rune:/home/rune/.rune ghcr.io/fourdollars/rune
# With skills directory
docker run --rm -it \
-v ~/.rune:/home/rune/.rune \
-v ./skills:/home/rune/skills \
ghcr.io/fourdollars/rune
# Mount a project directory as working directory
docker run --rm -it \
-v ~/.rune:/home/rune/.rune \
-v $(pwd):/workspace -w /workspace \
ghcr.io/fourdollars/rune
# Pipe mode (one-shot, non-interactive)
echo "Summarize the README.md in this project" | \
docker run --rm -i \
-v ~/.rune:/home/rune/.rune \
-v $(pwd):/workspace -w /workspace \
ghcr.io/fourdollars/rune --json --yes
```

Available tags: `latest` (Debian-based, built from main branch), `<sha>` (specific commit).
```
ᛟ ᚺ ᛊ ᛏ ᛒ ᛖ ᚹ ᛗ ᛚ ᛝ ᛟ
┌───────────────────────────────────┐
│ ᚱ ᚢ ᚾ ᛖ │
│ Zero-Trust AI Agent │
│ v0.1.0 ⚡ sandboxed │
└───────────────────────────────────┘
ᛟ ᚺ ᛊ ᛏ ᛒ ᛖ ᚹ ᛗ ᛚ ᛝ ᛟ
ᚱ› Show me hostname and disk usage
⚙ execute_cmd({"cmd": "hostname"})
✓ execute_cmd...ok
⚙ execute_cmd({"cmd": "df -h /"})
⚠ Execute? [Y/n/A(lways)] A
permanently allowed → saved to ~/.rune/rune.toml
+ command 'df' → allowed_commands
✓ execute_cmd...ok
────────────────────────────────────────────────────────────
- Hostname: rune-dev
- Disk: 42G used / 100G total
────────────────────────────────────────────────────────────
📋 commands executed: 2
▸ hostname
▸ df -h /
⚡ [2 steps | 650 tokens | 2 tool calls]
```
| Command | Description |
|---|---|
| `<text>` | Send a prompt to the agent |
| `/help` | Show help |
| `/info` | Current session status (model, context, skills) |
| `/info context` | Detailed context breakdown |
| `/policy` | Show policy summary |
| `/policy full` | Full sandbox status |
| `/config` | Show configuration |
| `/tools` | List available tools |
| `/skills` | List available skills |
| `/trace` | Trace recording status |
| `/compact` | Compress conversation context |
| `/reset` | Clear conversation history |
| `/multi` | Multi-line input (end with `;;`) |
| `/version` | Show version |
| `/clear` | Clear screen |
| `/exit` | Quit |
In interactive mode, use ↑/↓ to browse previous prompts. History is persisted across sessions in ~/.rune/history.
```toml
# ~/.rune/rune.toml
model = "gpt-4o"
api_key = "ghu_..." # GitHub Copilot (auto-detected)
# provider = "github-copilot" # explicit (auto-detected if omitted)
# api_key = "AIza..." # Google Gemini (provider = "gemini")
# api_key = "sk-or-..." # OpenRouter (provider = "openrouter")
# base_url = "https://..." # Custom endpoint (not needed for Copilot/Gemini)
skills_dir = "./skills"
log_level = "warn"
# max_steps = 20 # optional (unlimited if not set)
# token_budget = 16384 # optional (unlimited if not set)
# timeout_secs = 30 # optional (unlimited if not set)
trace = false
context_window = 128000 # model context window in tokens
# compact_threshold = 0.85 # auto-compact at this % of context_window
# compact_keep_last = 6 # keep last N messages when compacting
[policy]
mode = "confirm" # confirm | allowlist | unrestricted
allowed_commands = ["ls", "cat", "head", "ps", "echo"]
allowed_domains = ["wttr.in", "api.github.com"]
denied_syscalls = ["ptrace", "mount", "kexec_load", "bpf", "setns"]
allowed_paths_rw = ["/tmp"]
allowed_paths_ro = ["/bin", "/usr", "/lib"]
denied_paths = ["/root", "/etc/shadow"]
max_memory_mb = 512
max_pids = 64
# MCP servers (optional)
# [[mcp_servers]]
# name = "example"
# command = "node"
# args = ["server.js"]
# required = false
```

| Variable | Description |
|---|---|
| `RUNE_API_KEY` | LLM provider API key |
| `RUNE_PROVIDER` | Provider name (`github-copilot`, `gemini`, `openai`, `openrouter`, `ollama`, `anthropic`) |
| `RUNE_MODEL` | Model name |
| `RUNE_BASE_URL` | Provider base URL |
| `RUNE_POLICY_MODE` | Policy mode override |
| `RUNE_LOG_LEVEL` | Log level |
| `RUNE_TRACE` | Enable trace (`true`/`false`) |
| `RUNE_CONTEXT_WINDOW` | Model context window in tokens (default: 128000) |
| `RUNE_COMPACT_THRESHOLD` | Auto-compact trigger fraction (default: 0.85) |
| `RUNE_JSON_OUTPUT` | JSON output mode (`true`/`false`, also accepts `1`/`0`) |
| `RUNE_YES` | Auto-approve dangerous tool execution (`true`/`false`, also accepts `1`/`0`) |
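These variables make one-off runs easy to script without editing `rune.toml`; a minimal sketch, assuming the usual layering where environment variables override file config (model name and prompt are placeholders):

```bash
# One-shot run configured entirely via environment variables (illustrative values)
echo "How much free disk space is left?" | \
  RUNE_MODEL=gpt-4o-mini RUNE_POLICY_MODE=allowlist RUNE_JSON_OUTPUT=1 rune --yes
```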
Every tool invocation passes through up to 5 isolation layers:
```
┌─────────────────────────────────────────────┐
│ Layer 1: cgroups (memory + pids limits) │
│ Layer 2: rune-net-guard (seccomp notif) │
│ Layer 3: Seccomp BPF (syscall filter) │
│ Layer 4: Landlock (filesystem restriction) │
│ Layer 5: DNS allowlist (domain control) │
└─────────────────────────────────────────────┘
```
```
ᚱ› (read /etc/hostname)
⚙ read_file({"path": "/etc/hostname"})
✓ read_file...ok → "u"
ᚱ› (write to /tmp)
⚙ write_file({"path": "/tmp/test.txt", "content": "hello"})
✓ write_file...ok → "Written 5 bytes"
ᚱ› (run allowed command)
⚙ execute_cmd({"cmd": "echo hello"})
✓ execute_cmd...ok → "hello"
ᚱ› (fetch non-allowed URL)
⚙ fetch_url({"url": "https://example.com"})
✗ BLOCKED: domain 'example.com' is not in allowed_domains
ᚱ› (run non-allowed command in allowlist mode)
⚙ execute_cmd({"cmd": "rm -rf /"})
✗ BLOCKED by policy: command 'rm' is not in allowed_commands
ᚱ› (read sensitive file)
⚙ read_file({"path": "/etc/shadow"})
✗ Permission denied (Landlock + user namespace)
ᚱ› (ptrace attempt inside sandbox)
→ Seccomp BPF: Operation not permitted
```
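Whether each layer can actually engage depends on the host kernel (see the requirements under Development); a quick way to check the prerequisites uses only standard Linux interfaces, nothing Rune-specific:

```bash
# Host-side checks for the kernel features the sandbox layers rely on
stat -fc %T /sys/fs/cgroup       # "cgroup2fs" => cgroups v2 is mounted
cat /sys/kernel/security/lsm     # should include "landlock"
uname -r                         # Landlock needs >= 5.13, seccomp user notification >= 5.0
```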
| Mode | Behavior | Default for |
|---|---|---|
| `confirm` | Ask user Y/n/A(lways) before dangerous tools | Interactive CLI |
| `allowlist` | Auto-execute within allowlist, block everything else | Pipe mode, Concourse CI |
| `unrestricted` | All policy checks skipped | Opt-in via `--policy-mode unrestricted` |
Defaults by context:
- Interactive CLI (`rune`): `confirm` — prompts before each dangerous tool call
- Pipe mode (`echo "..." | rune`): `allowlist` — runs within configured allowlists
- Concourse CI (check/get/put): `allowlist` — enforces sandbox policy from pipeline YAML
Override with `--policy-mode <mode>`, `RUNE_POLICY_MODE=<mode>`, or in `rune.toml`:
```toml
[policy]
mode = "unrestricted"
```

In Concourse CI pipelines, set it via `source.sandbox.policy_mode`:
```yaml
resources:
- name: my-agent
  type: rune-agent
  source:
    api_key: ((key))
    sandbox:
      policy_mode: unrestricted
```

With `--json`, Rune prints a single machine-readable object:

```bash
echo "What is 2+2?" | rune --json
```

```json
{"answer":"4","steps":1,"tokens":348,"tools_used":[]}
```

```bash
# Machine-readable output
rune --json
# Skip confirm prompts for dangerous tools
rune --yes
# or
rune -y
```

When stdin is piped into Rune, it runs in one-shot, non-interactive mode:
echo "Get weather for Taoyuan from wttr.in" | rune --json --yesBehavior in pipe mode:
- reads all stdin as a single prompt
- does not enter the interactive prompt loop
- exits immediately after one run
- if confirm mode would require approval, Rune stops with an error unless `--yes` is provided
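Because `--json` emits a single object (like the `{"answer": ...}` example above), pipe-mode output is easy to post-process; a small sketch, assuming `jq` is installed (the prompt is illustrative):

```bash
# Extract just the answer field from a one-shot JSON run (requires jq)
echo "What is the capital of France?" | rune --json --yes | jq -r '.answer'
```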
```
skills/
├── sysadmin/
│   └── SKILL.md
└── launchpad/
    ├── SKILL.md
    └── references/
```
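The SKILL.md schema isn't spelled out here; as a rough sketch, assume each file is free-form Markdown guidance that gets injected into context when the skill is referenced (the content below is purely illustrative):

```markdown
<!-- skills/sysadmin/SKILL.md — illustrative content only -->
# sysadmin

Prefer read-only diagnostics: df -h, free -m, uname -a, ps aux.
Summarize findings as short bullet points and avoid destructive commands.
```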
Use `@skill_name` in prompts:
```
ᚱ› Use @sysadmin skill. Check disk usage.
📚 Loaded skill: sysadmin
```
For scripting, combine skills with pipe mode:
echo "Use @sysadmin skill. Check disk usage." | rune --json --yesThe simplest possible pipeline using Rune as a Concourse CI resource type:
```yaml
resource_types:
- name: rune-agent
  type: registry-image
  source:
    repository: ghcr.io/fourdollars/rune
    tag: latest

resources:
- name: weather
  type: rune-agent
  check_every: 1h
  source:
    api_key: ((copilot-pat))
    model: gpt-4o-mini
    prompt: "Fetch the weather for Taoyuan from wttr.in using curl."
    policy:
      allowed_commands: ["curl"]
      allowed_domains: ["wttr.in"]

jobs:
- name: weather-check
  plan:
  - get: weather
    trigger: true
  - task: show
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: { repository: ghcr.io/fourdollars/rune, tag: latest }
      inputs: [{name: weather}]
      run:
        path: sh
        args: [-c, "cat weather/response.txt"]
```

That's it! Rune handles:
- AI prompt → tool selection → sandboxed execution → response
- Network filtering (only `wttr.in` allowed)
- Automatic version tracking (content hash)
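To try it on an existing Concourse installation, the pipeline above can be registered with the standard `fly` CLI (the target name and file name are placeholders):

```bash
# Register and unpause the example pipeline (assumes you are logged in to the target)
fly -t my-target set-pipeline -p weather -c weather-pipeline.yml
fly -t my-target unpause-pipeline -p weather
```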
Rune acts as a content-aware Concourse CI resource type. All three resource steps (`check` / `in` / `out`) run through the same sandboxed Rune agent pipeline as pipe mode:

- `check` executes the prompt, hashes the final answer, and returns `{"ref":"sha256:..."}`
- `in` re-executes the prompt and writes `payload.json` + `response.txt`
- `out` executes `params.prompt` and returns a new version

When tool usage is needed, configure the sandbox allowlists (domains, paths, commands) in the resource source via the Rune policy.
```yaml
resource_types:
- name: rune-agent
  type: registry-image
  source:
    repository: ghcr.io/fourdollars/rune
    tag: latest

resources:
- name: ai-news
  type: rune-agent
  source:
    api_key: ((copilot_key))   # ghu_/ghp_ auto-refreshed
    model: gpt-4o-mini
    prompt: "List top 3 trending AI topics today. One line each."
    policy:
      allowed_commands: ["curl", "ls", "cat"]
      allowed_domains: ["news.google.com", "api.github.com"]

jobs:
- name: news-digest
  plan:
  - get: ai-news               # triggers when content changes
    trigger: true
  - task: translate
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: { repository: ghcr.io/fourdollars/rune, tag: latest }
      inputs: [{name: ai-news}]
      run:
        path: sh
        args: [-c, "cat ai-news/response.txt"]

- name: ask-ai
  plan:
  - put: ai-news
    params:
      prompt: "Translate to zh-TW: AI is transforming healthcare."
```

| Mode | Behavior |
|---|---|
| `check` | Run sandboxed agent on `source.prompt` → sha256(final answer) → version `{"ref":"sha256:..."}` |
| `in` (get) | Run sandboxed agent again → write `payload.json` + `response.txt` to dest dir |
| `out` (put) | Run sandboxed agent on `params.prompt` → return version + print response to build log |
GitHub Copilot tokens (`ghu_`/`ghp_`) are auto-detected and refreshed. Google Gemini (`AIza*` keys) uses the native Gemini API format. OpenAI, OpenRouter, Ollama, Anthropic, and any OpenAI-compatible endpoint work via `base_url`. Use `--provider <name>` or `provider = "..."` in config to override auto-detection.
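As an example, pointing Rune at a self-hosted OpenAI-compatible gateway only requires setting the provider and `base_url` explicitly; a minimal sketch (URL and model name are placeholders, not values shipped with Rune):

```toml
# ~/.rune/rune.toml — illustrative values for a custom OpenAI-compatible endpoint
provider = "openai"
model = "gpt-4o-mini"
base_url = "https://llm-gateway.example.com/v1"   # hypothetical gateway URL
api_key = "sk-..."
```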
The `get` step writes two files to the destination directory:

| File | Content |
|---|---|
| `payload.json` | `{prompt, response, ref, model, timestamp}` |
| `response.txt` | Raw LLM response text |
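Downstream tasks can parse `payload.json` instead of the plain-text file; a small sketch, assuming `jq` is available in the task image and the resource is named `ai-news` as above:

```bash
# Pull individual fields out of the resource output in a downstream task (requires jq)
jq -r '.response' ai-news/payload.json
jq -r '.ref' ai-news/payload.json      # content hash used as the resource version
```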
## Architecture
```
src/
├── main.rs           — Entry point, routing
├── agent/mod.rs      — Agent loop, tool orchestration, confirm flow
├── cli/mod.rs        — Interactive CLI, commands, JSON mode
├── concourse/mod.rs  — Concourse CI check/in/out (sandboxed agent pipeline)
├── config/mod.rs     — Layered config + PolicyConfig
├── mcp/mod.rs        — MCP client (stdio JSON-RPC)
├── precommands.rs    — Pre-command execution
├── provider/mod.rs   — LLM providers + retry backoff
├── sandbox/mod.rs    — 5-layer sandbox orchestration
├── setup.rs          — rune init wizard
├── skills/mod.rs     — SKILL.md loader
├── tools/mod.rs      — 6 built-in tools (all sandboxed)
├── embedding/mod.rs  — Embedding engine + vector store
├── trace/mod.rs      — JSON trace + redaction
└── bin/
    ├── rune-seccomp.rs   — Seccomp BPF helper
    ├── rune-landlock.rs  — Landlock filesystem helper
    └── rune-net-guard.rs — Seccomp user notification network filter
```
## Development
```bash
cargo build --release # Build all 4 binaries
cargo test # Unit tests (124)
./tests/e2e.sh # E2E tests (18)
make check-all # Both
```

- Rust 1.78+ (tested on 1.94-nightly)
- Linux kernel 5.13+ (Landlock ABI), 5.0+ (seccomp user notification)
- `curl` on PATH (only needed inside the sandbox for the `fetch_url` tool)
MIT