AOS is a research prototype for an agent operating system built around deterministic runtime semantics, artifact-based collaboration, and replaceable inference backends.
The codebase is intentionally exploratory, but the core runtime is implemented and covered by Go tests. The current repo includes a file-backed kernel, IPC bus, artifact store, lease manager, capability model, tool runtime, workflow engine, configurable local LLM integration, and an interactive terminal UI.
- Research prototype, not a polished product release
- Core Go packages test cleanly with the repo-local cache path used by `make test`
- Active areas include workflow tooling, skills, local TUI ergonomics, and evaluation/demo workflows
- Generated live-run artifacts are treated as disposable outputs, not durable source assets
See STATUS.md for the current project snapshot and RELEASE_CHECKLIST.md for repo hygiene before sharing milestones.
- `cmd/`: entrypoints for the CLI, TUI, smoke tests, demos, and mini-project runners
- `internal/`: runtime, kernel, workflow, inference, tool, and TUI implementation packages
- `skills/`: local skill definitions used by workflow or interactive sessions
- `workflows/`: JSON workflow specs for multi-agent experiments and demos
- `reports/`: curated milestone reports and a small set of durable reference outputs
- `make test`
- `make fmt`
- `go run ./cmd/aos`
- `go run ./cmd/aos-tui --workdir .`
`make test` uses repo-local GOCACHE and GOTMPDIR paths so tests also run reliably in sandboxed environments.
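The repo-local cache setup can be sketched as follows. This is an illustrative sketch, not the project's actual Makefile: the `.gocache` and `.gotmp` directory names are assumptions, and the real target may differ.

```shell
# Sketch (assumption): pin Go's build cache and temp dir to paths inside
# the repository so sandboxed runs never touch the user's home directory.
export GOCACHE="$PWD/.gocache"
export GOTMPDIR="$PWD/.gotmp"
mkdir -p "$GOCACHE" "$GOTMPDIR"
# A Makefile target would then invoke the tests with these vars in scope,
# e.g.: go test ./...
```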
AOS now uses backend-agnostic env vars for live model selection:
- `AOS_LLM_BACKEND`, with values such as `vllm`, `ollama`, or `openai-compatible`
- `AOS_LLM_BASE_URL`
- `AOS_LLM_MODEL`
- `AOS_LLM_API_KEY`
- `AOS_LLM_TIMEOUT_SECONDS`
Legacy `AOS_VLLM_*` vars are still supported as a fallback for existing scripts.
Example using Ollama:
AOS_LLM_BACKEND=ollama \
AOS_LLM_BASE_URL=http://lenovo-px.local:11434 \
AOS_LLM_MODEL=qwen3.6:35b \
go run ./cmd/aos run --workflow ./workflows/rasterizer-optimized-tail.json --output ./reports/rasterizer-team-demo/final-summary.txt