SAGE

Self-improving Autonomous Generation Engine

prompt → production

Turn a plain-English goal into a plan, code, tests, and verification.


SAGE — autonomous run overview

SAGE is a Python CLI built around a LangGraph-style orchestrator: specialized agents (planner, coder, reviewer, test engineer, …), a model router (models.yaml), multi-layer memory, and optional Ollama for local LLMs. Drive it from a TTY shell (slash commands, chat threads, intent routing) or headless with sage run.


⚠️ NOTE: SAGE is an experimental project and currently a work in progress.
The architecture is evolving rapidly and many components will undergo significant refinements over time.


Screenshots

• Interactive shell: slash commands and the /commands menu
• Model routing & profiles
• Skills panel: /skill discovery
• sage memory: memory-layers listing

Five static assets live in images/ (Commands, Memory, Model, SAGE, Skills).


What you can do

• Run the full pipeline: sage run "Add JWT auth to the API" — planner → DAG → code → review → tests → verify → memory.
• Human checkpoints: the default --research mode stops so you can review the plan (a / r / e for approve / reject / edit the plan file), then continues. --auto has fewer interactive gates; --silent runs autonomously and skips failed tasks; --no-clarify skips planner Q&A.
• Bootstrap a project folder: sage init creates .sage/, memory/, default rules, and pytest.ini hints. Run it in the repo you want SAGE to edit (not necessarily the SAGE source tree).
• Use the interactive shell: run sage with no args — / opens a command menu, /chat starts a local LLM thread (saved under .sage/chat_sessions/).
• Configure models: per-role primary/fallback in ~/.config/sage/models.yaml (or $SAGE_MODELS_YAML), with bundled defaults otherwise — see docs/models.md.
• Rules & memory: sage rules / sage rules validate / sage rules add "…"; sage memory / sage memory digest — see docs/CLI.md.
• Benchmark & RL: sage bench; sage rl export, train-bc, train-cql; scripts/train_routing_policy.py — see docs/getting_started.md.
• Hardware & models: sage prep or sage setup scan.
• Ops & trust: sage session (reset/handoff), sage cron weekly-memory-optimizer, sage eval golden.

Requirements

  • Python 3.10+
  • Optional: Ollama for local models (tags must match ollama list and your models.yaml)

Install (SAGE repository clone)

startup.sh and startup.ps1 live only in the SAGE repository root — not inside arbitrary project directories. Clone or unpack SAGE, then:

Option A — bootstrap script (easiest)

• Linux / macOS: from the repo root, run bash startup.sh, then source .venv/bin/activate
• Windows: run .\startup.ps1, then .\.venv\Scripts\Activate.ps1

Either script creates .venv and installs the package in editable mode with dev dependencies (pip install -e ".[dev,tui]").

Option B — manual

python3 -m venv .venv && source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -U pip wheel setuptools && pip install -e ".[dev,tui]"

([tui] adds Textual for sage tui; omit for minimal install: pip install -e .)

Pull models (Ollama)

ollama pull qwen2.5-coder:1.5b
ollama pull nomic-embed-text

Tiers, VRAM, and 404 model not found fixes → docs/models.md.
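
The pulled tags must line up with the per-role entries in your models.yaml. As a minimal sketch of what such an entry might look like (the real schema is documented in docs/models.md; the role and field names below are assumptions, not the documented format):

```shell
# Write an illustrative routing file; the real config lives at
# ~/.config/sage/models.yaml (or $SAGE_MODELS_YAML). Field names are guesses.
cat <<'EOF' > models.yaml.example
# Tags must match `ollama list` exactly, including the size suffix.
coder:
  primary: qwen2.5-coder:1.5b
  fallback: qwen2.5-coder:1.5b
embeddings:
  primary: nomic-embed-text
EOF
echo "wrote models.yaml.example"
```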


Your project directory (typical flow)

SAGE edits the current working directory. For a new sandbox:

mkdir -p ~/myproject && cd ~/myproject
sage init          # .sage/, memory/, rules scaffold
export SAGE_MODEL_PROFILE=test   # optional: one small Ollama model for every role (laptop/CI)
sage doctor        # Python, venv hint, Ollama, models.yaml
sage run "Create src/hello.py with greet() and tests/test_hello.py" --auto
  • SAGE_MODEL_PROFILE=test — forces the test profile in bundled/user models.yaml (good when you want a single small local model).
  • After each run, metrics are written to .sage/last_run_metrics.json (session id, task counts, model histogram, etc.). Set SAGE_RUN_OUTPUT=full for a more detailed end-of-run report, or SAGE_RUN_OUTPUT=debug for verbose verify lines.
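
As a sketch of how you might consume those metrics in a script or CI step (the JSON keys below are assumptions inferred from the description above, not the exact schema):

```shell
# Fabricated sample of .sage/last_run_metrics.json for illustration only;
# the real file is written by `sage run`, and exact key names may differ.
mkdir -p .sage
cat <<'EOF' > .sage/last_run_metrics.json
{"session_id": "demo-001", "tasks_total": 3, "tasks_failed": 0,
 "model_histogram": {"qwen2.5-coder:1.5b": 3}}
EOF

# Summarize the run from shell via a small inline Python reader.
python3 - <<'EOF'
import json
m = json.load(open(".sage/last_run_metrics.json"))
print(f"session {m['session_id']}: {m['tasks_total']} tasks, {m['tasks_failed']} failed")
for model, count in m["model_histogram"].items():
    print(f"  {model}: {count} calls")
EOF
```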

Full env reference → docs/CLI.md. Install details → docs/INSTALL.md.


First steps (after install)

cd /path/to/SAGE          # your clone
source .venv/bin/activate
sage doctor               # environment + optional Ollama checks
sage                      # interactive shell — try /commands
sage run "Scaffold a minimal FastAPI app with /health" --auto
sage status               # session snapshot (memory/system_state.json)

Doc links in the shell: set export SAGE_REPO_URL=https://github.com/your-org/your-fork so /commands footer URLs point at your fork.


Interactive shell (high level)

  • Built on prompt_toolkit: type / for completions; Enter submits the line.
  • /chat — multi-turn local LLM thread; can attach to the next run via SAGE_CHAT_ATTACH_TO_RUN (see docs/CLI.md).
  • agent / agent clear — show build-mode reminders; clear any attached chat context.

Architecture (at a glance)

  • Orchestrator: src/sage/orchestrator/workflow.py
  • Routing: src/sage/orchestrator/model_router.py
  • Agents: src/sage/agents/
  • Execution: src/sage/execution/
  • Memory & RAG: src/sage/memory/
  • CLI: src/sage/cli/

Spec vs shipped features: docs/ARCHITECTURE_STATUS.md — the source of truth for what is actually implemented, as opposed to the long-form architecture docs.

Diagrams → docs/architecture.md, docs/architecture_diagram.md. Events → docs/event_bus.md.


Documentation map

• docs/README.md — full index of every file in docs/ (grouped by topic)
• docs/INSTALL.md — bootstrap scripts, Windows vs Linux, pip
• docs/CLI.md — shell, env vars, rules, memory digest, run output
• docs/models.md — models.yaml, Ollama tags, VRAM, bench timeouts
• docs/getting_started.md — bench, rl, sim, training script
• docs/ARCHITECTURE_STATUS.md — spec parity / feature status
• docs/architecture.md — design entrypoints
• docs/architecture_diagram.md — diagrams
• docs/event_bus.md — event bus semantics
• docs/TRUST_AND_SCALE.md — policy, trust
• docs/LIVE_TESTING.md — live Ollama, scripts/live_verify.sh
• docs/release_checklist.md — release candidate checklist
• CONTRIBUTING.md — tests, Ruff, Mypy (CI), CI workflows

Architecture spec (design contract): sage plan/SAGE_ARCHITECTURE_V1_FINAL.md.


Repository layout

src/sage/          # Application package (CLI, orchestrator, agents, memory, rl, …)
docs/              # Guides
images/            # README screenshots (this folder)
tests/             # pytest
scripts/           # Helpers (e.g. train_routing_policy.py)
pyproject.toml     # Packaging
startup.sh / .ps1  # Run from repo root only

Full tree → project_structure.md.


If SAGE fits your workflow, show support with a star on the repository — it helps others discover the project.



Contributing

See CONTRIBUTING.md — unit tests, ruff check / ruff format, Mypy allowlist, benchmarks, live Ollama bar.

About

SAGE — Self-improving Autonomous Generation Engine. Prompt → production software using coordinated AI agents. Local-first multi-agent coding pipeline with planning, coding, execution, and self-healing debugging.
