The AI industry built intelligence. It forgot to build judgment.
AWE is a Rust engine that transmutes raw LLM intelligence into calibrated wisdom -- consequence-aware, confidence-tracked, and compounding with every decision. It is model-agnostic, privacy-first, and designed to get wiser the more you use it.
Intelligence In --> Wisdom Out

Receive --> Interrogate --> Calibrate --> Consequent --> Synthesize --> Judge --> Articulate

- Receive -- parse the decision context
- Interrogate -- surface hidden assumptions
- Calibrate -- assign calibrated confidence
- Consequent -- model 1st/2nd/3rd-order consequences
- Synthesize -- connect cross-domain patterns
- Judge -- weigh trade-offs and timing
- Articulate -- deliver articulated judgment
AWE requires an LLM to deliver accurate wisdom. The 7-stage pipeline uses an LLM for contextual analysis, consequence modeling, precedent synthesis, and calibrated judgment. Without one, AWE runs in limited algorithmic mode with generic heuristics.
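The seven stages above can be sketched as an ordered enumeration. These types are illustrative only, not AWE's actual API:

```rust
/// Illustrative sketch of the 7-stage pipeline order; not AWE's real types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Stage {
    Receive,
    Interrogate,
    Calibrate,
    Consequent,
    Synthesize,
    Judge,
    Articulate,
}

impl Stage {
    /// The next stage in the pipeline, or None after Articulate.
    fn next(self) -> Option<Stage> {
        use Stage::*;
        Some(match self {
            Receive => Interrogate,
            Interrogate => Calibrate,
            Calibrate => Consequent,
            Consequent => Synthesize,
            Synthesize => Judge,
            Judge => Articulate,
            Articulate => return None,
        })
    }
}

fn main() {
    // Walk the pipeline from the first stage to the last.
    let mut stage = Stage::Receive;
    while let Some(next) = stage.next() {
        println!("{:?} -> {:?}", stage, next);
        stage = next;
    }
}
```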
| Provider | Setup | Quality |
|---|---|---|
| Anthropic Claude | `export ANTHROPIC_API_KEY=sk-ant-...` | Best — full contextual wisdom |
| Ollama (local) | `export AWE_OLLAMA_MODEL=llama3.2` | Good — free, private, offline |
| None | — | Limited — generic heuristics only |
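The fallback order implied by the table can be sketched as follows. The environment-variable names come from the table; the detection function itself is hypothetical, not AWE's real startup logic:

```rust
use std::env;

/// Which LLM backend would be used; a sketch of the fallback order
/// implied by the provider table, not the engine's actual logic.
#[derive(Debug, PartialEq)]
enum Provider {
    Anthropic,
    Ollama(String), // local model name
    None,           // limited algorithmic mode
}

/// Prefer Anthropic if a key is present, then Ollama, then no LLM.
fn detect_provider(api_key: Option<&str>, ollama_model: Option<&str>) -> Provider {
    match (api_key, ollama_model) {
        (Some(_), _) => Provider::Anthropic,
        (None, Some(model)) => Provider::Ollama(model.to_string()),
        (None, None) => Provider::None,
    }
}

fn main() {
    let key = env::var("ANTHROPIC_API_KEY").ok();
    let model = env::var("AWE_OLLAMA_MODEL").ok();
    println!("{:?}", detect_provider(key.as_deref(), model.as_deref()));
}
```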
```sh
cargo install --path crates/awe-cli

# Configure your LLM (required for accurate wisdom)
export ANTHROPIC_API_KEY=sk-ant-...

# Transmute a decision
awe transmute "Should we rewrite our monolith as microservices?"

# Transmute with domain context
awe transmute "Should we scale to Kubernetes?" --domain infrastructure

# Feed back outcomes (closes the learning loop)
awe feedback dc_01JQ... --outcome confirmed --details "Strangler approach worked"

# View calibration accuracy for a domain
awe calibration --domain software-engineering
```

AWE is a Rust workspace with 12 crates -- 11 libraries composing the wisdom pipeline, plus the CLI.
```text
crates/
  awe-core/         Foundation types, traits, error handling
                    The periodic table of wisdom elements (Dc, Cq, Pr, As, Cf, Tf, Pv, Cx, Fb, Ap)
  awe-epistemic/    Layer 1: Confidence calibration, knowledge boundaries, contradiction detection
  awe-consequence/  Layer 2: Consequence chain modeling, reversibility scoring
  awe-memory/       Layer 3: Wisdom Graph persistence (SQLite), pattern extraction
  awe-judgment/     Layer 4: Trade-off mapping, timing intelligence, values reasoning
  awe-synthesis/    Layer 5: Cross-domain analogy, meta-pattern recognition
  awe-socratic/     Layer 6: Assumption surfacing, question reformulation, bias detection
  awe-engine/       The complete 7-stage transmutation pipeline orchestrator
  awe-protocol/     Wire format, message types, versioning
  awe-cli/          CLI binary (`awe`)
```
```mermaid
flowchart TB
    subgraph "Context Assembly"
        A["4DA developer_dna<br/>(stack, concerns, gaps)"] --> C[identity.json]
        B["System Probe<br/>OS / CPU / RAM / Models"] --> C
        D["Wisdom Graph Stats<br/>decisions, feedback %"] --> C
        C --> E[context.json]
    end
    subgraph "MCP Server"
        F[awe_transmute] --> G{"buildContextFile()"}
        G --> E
        F --> H["AWE CLI Binary<br/>(awe transmute --context_file)"]
        I[awe_set_identity] --> C
        J[awe_scan_all] --> H
        K[awe_feedback] --> H
    end
    subgraph "7-Stage Pipeline"
        H --> L["1. Receive<br/>Parse query + detect domain"]
        L --> M["2. Interrogate<br/>Bias detection + assumptions"]
        M --> N["3. Calibrate<br/>Brier score + confidence"]
        N --> O["4. Consequent<br/>DAG + reversibility scoring"]
        O --> P["5. Synthesize<br/>Precedents + principles"]
        P --> Q["6. Judge<br/>Trade-offs + timing"]
        Q --> R["7. Articulate<br/>Final wisdom output"]
    end
    subgraph "Compounding Loop"
        R --> S[("Wisdom Graph<br/>(SQLite)")]
        S --> T["Principle Extraction<br/>3+ evidence threshold"]
        T --> S
        K --> S
        S --> N
        S --> P
    end
    subgraph "Data Harvest"
        U[Git Repos] --> V["awe scan<br/>Decision Detection"]
        V --> S
        W["awe scan --infer<br/>Outcome Inference"] --> S
    end
```
1. Context injection: 4DA surfaces developer DNA (primary stack, domain concerns,
knowledge gaps) which is written to identity.json. The MCP server's buildContextFile()
merges this with system probe data (OS, CPU, RAM, local models) and Wisdom Graph statistics
(decision count, feedback coverage) into context.json. This file is passed to the AWE CLI
via --context_file, where it deserializes into DeveloperContext and personalizes every
pipeline stage.
2. Compounding loop: Each transmutation produces a Decision stored in the Wisdom Graph.
When the user provides feedback (awe feedback), outcomes close the learning loop. The
principle extraction system (awe-memory) continuously analyzes decision-outcome pairs,
promoting patterns with 3+ evidence pairs to validated Principles. These principles
feed back into the Calibrate and Synthesize stages, making future transmutations wiser.
3. Auto-detection: awe scan walks git repositories, detecting architectural decisions
from commit messages and dependency changes. awe scan --infer checks whether those
decisions persisted (dependency still in manifest? branch merged or abandoned?) and
auto-generates feedback. This closes the compounding loop without requiring manual input.
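The promotion rule in the compounding loop amounts to a threshold over confirming decision-outcome pairs. A minimal sketch, with hypothetical types and the evidence threshold passed as a parameter rather than hard-coded:

```rust
use std::collections::HashMap;

/// A recorded decision paired with its observed outcome; illustrative
/// types, not awe-memory's actual schema.
struct Evidence {
    pattern: String, // e.g. "strangler-fig migration"
    confirmed: bool, // did the outcome confirm the decision?
}

/// Promote any pattern with at least `min_pairs` confirming
/// decision-outcome pairs to a named principle.
fn extract_principles(evidence: &[Evidence], min_pairs: usize) -> Vec<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for e in evidence.iter().filter(|e| e.confirmed) {
        *counts.entry(e.pattern.as_str()).or_insert(0) += 1;
    }
    let mut principles: Vec<String> = counts
        .into_iter()
        .filter(|&(_, n)| n >= min_pairs)
        .map(|(pattern, _)| pattern.to_string())
        .collect();
    principles.sort(); // deterministic output order
    principles
}

fn main() {
    let graph = vec![
        Evidence { pattern: "strangler-fig migration".into(), confirmed: true },
        Evidence { pattern: "strangler-fig migration".into(), confirmed: true },
        Evidence { pattern: "strangler-fig migration".into(), confirmed: true },
        Evidence { pattern: "big-bang rewrite".into(), confirmed: false },
    ];
    println!("{:?}", extract_principles(&graph, 3));
}
```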
Every transmutation produces structured elements, not prose:
| Symbol | Element | Description |
|---|---|---|
| Dc | Decision | A choice point -- action taken, deferred, or rejected |
| Cq | Consequence | An effect chain (1st/2nd/3rd order) |
| Pr | Principle | A pattern extracted from 3+ decision-outcome pairs |
| As | Assumption | Something treated as true without verification |
| Cf | Confidence | Calibrated probability tracked against reality |
| Tf | Trade-off | Two or more values in tension |
| Pv | Perspective | A stakeholder viewpoint changing what "good" means |
| Cx | Context | Temporal, situational, cultural factors |
| Fb | Feedback | Outcome data that closes the learning loop |
| Ap | Anti-pattern | A decision pattern that reliably produces bad outcomes |
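The ten elements could be modeled as a simple enum mapping names to symbols. This is an illustrative sketch, not awe-core's actual types:

```rust
/// The ten wisdom elements and their symbols; illustrative only,
/// not awe-core's real API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Element {
    Decision,
    Consequence,
    Principle,
    Assumption,
    Confidence,
    TradeOff,
    Perspective,
    Context,
    Feedback,
    AntiPattern,
}

impl Element {
    /// The two-letter symbol from the periodic table of elements.
    fn symbol(self) -> &'static str {
        use Element::*;
        match self {
            Decision => "Dc",
            Consequence => "Cq",
            Principle => "Pr",
            Assumption => "As",
            Confidence => "Cf",
            TradeOff => "Tf",
            Perspective => "Pv",
            Context => "Cx",
            Feedback => "Fb",
            AntiPattern => "Ap",
        }
    }
}

fn main() {
    println!("{} = {:?}", Element::Decision.symbol(), Element::Decision);
}
```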
Model-agnostic -- works with Claude, GPT, Ollama, or no LLM at all. Without an LLM, AWE runs in pure algorithmic mode using pattern matching and heuristics against the Wisdom Graph.
Privacy-first -- BYOK (Bring Your Own Key). All data stays local. The Wisdom Graph lives on your machine. API keys are never stored remotely.
Compounding -- AWE gets wiser with use. Every decision and its outcome feed the Wisdom Graph. Principles require 3+ evidence pairs. Anti-patterns require 3+ failure cases. Confidence is tracked against reality, not declared.
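Confidence "tracked against reality" is conventionally measured with a Brier score, which the Calibrate stage references. A minimal sketch of the metric (assumes a non-empty set of predictions):

```rust
/// Mean squared error between stated confidence and observed outcome
/// (1.0 = happened, 0.0 = did not). Lower is better calibrated;
/// assumes `predictions` is non-empty.
fn brier_score(predictions: &[(f64, bool)]) -> f64 {
    let n = predictions.len() as f64;
    predictions
        .iter()
        .map(|&(confidence, outcome)| {
            let actual = if outcome { 1.0 } else { 0.0 };
            (confidence - actual).powi(2)
        })
        .sum::<f64>()
        / n
}

fn main() {
    // Overconfident-and-wrong scores worse than hedged-and-wrong.
    println!("{}", brier_score(&[(0.9, false)]));
    println!("{}", brier_score(&[(0.5, false)]));
}
```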
Set your API key via environment variable:

```sh
export ANTHROPIC_API_KEY=sk-ant-...
```

Or configure via `~/.awe/config.toml`:
```toml
[llm]
provider = "Anthropic"
model = "claude-sonnet-4-20250514"

[engine]
max_consequence_order = 3
socratic_by_default = true
min_calibration_samples = 10
```

Supported providers: `Anthropic`, `OpenAi`, `Ollama`, `Custom`.
For Ollama (fully local, no API key needed):
```toml
[llm]
provider = "Ollama"
model = "llama3"
base_url = "http://localhost:11434"
```

```sh
cargo build                      # Build all crates
cargo test                       # Run all tests
cargo test -p awe-core           # Test a specific crate
cargo clippy --workspace         # Lint
cargo doc --workspace --no-deps  # Generate docs
```

Minimum supported Rust version: 1.85 (edition 2024).
These are non-negotiable properties of AWE:
- Every Confidence value is tracked against outcomes -- no decorative confidence
- Principles require 3+ evidence pairs -- no premature generalization
- Anti-patterns require 3+ failure cases -- no single-incident overreaction
- The Wisdom Graph is append-mostly -- decisions and feedback are never deleted
- AWE is model-agnostic -- works with any LLM or no LLM
- All data stays local -- BYOK for any API keys
- `spec/ontology.md` -- Canonical element definitions
- `spec/protocol.md` -- AWE Protocol specification
FSL-1.1-Apache-2.0
Copyright 2026 4DA Systems Pty Ltd. Source-available under the Functional Source License. Converts to Apache 2.0 after two years.