A pattern for codifying domain knowledge as version-controlled, machine-readable artifacts alongside code — enabling AI agents and human developers to acquire the business context necessary to work correctly within a codebase.
For decades, we've optimized repository structure for human developers — clean modules, good naming, design patterns — all to reduce friction for a new hire or for yourself six months later. With AI agents becoming primary contributors to codebases, the intended audience has shifted. An AI agent has no institutional memory, cannot ask a colleague, and starts every session from zero. The assumptions behind human-first repo design break down entirely.
AI coding agents write syntactically correct but semantically wrong code because they lack business context. They don't know your domain model, your business rules, your regulatory constraints, or why your architecture looks the way it does. This knowledge traditionally lives in the distributed minds of contributing developers, outdated wikis, and buried Slack threads. When a senior dev leaves, it walks out the door. When an AI agent starts a session, it simply doesn't exist.
Existing solutions address part of this:
- AGENTS.md / CLAUDE.md tell agents how to work in the codebase (build commands, code style, workflow)
- SDD frameworks (GSD, Spec Kit, BMAD) tell agents what to build right now (specs, plans, tasks)
- Nothing tells agents why the system works the way it does (domain model, business rules, constraints, decisions)
Domain Context fills that gap — not by documenting domain knowledge (wikis already fail at that) but by codifying it: committing it to version control alongside the code, reviewing it in PRs, tracking it for freshness, and structuring it for AI consumption. The pattern is AI-first — designed for machine consumption, with human readability as a constraint, not the other way around.
Every project has three categories of knowledge that AI agents need:
| Concern | Content | Lifespan | Existing Solutions |
|---|---|---|---|
| The How | Build commands, code style, workflow | Lifetime of project | AGENTS.md, CLAUDE.md, .cursorrules |
| The What | Feature specs, task plans, roadmaps | Per-feature | GSD, Spec Kit, BMAD, Kiro |
| The Why | Domain model, business rules, ADRs, constraints | Lifetime of project | Domain Context |
The What is ephemeral — a spec for "add Stripe integration" matters during that feature's development. The Why is durable — "subscriptions follow a Trial → Active → Canceled lifecycle" is true regardless of which feature you're building.
```
project/
├── AGENTS.md                  # The How (always loaded)
├── ARCHITECTURE.md            # Navigation map (loaded per session)
├── .context/                  # The Why (loaded on demand)
│   ├── MANIFEST.md            # Discovery index — AI reads this first
│   ├── domain/                # Business domain concepts
│   │   ├── subscriptions.md
│   │   └── invoicing.md
│   ├── decisions/             # Architecture Decision Records
│   │   └── 001-event-sourcing.md
│   └── constraints/           # External forces
│       └── payment-regulations.md
├── .context.local/            # Confidential context (gitignored)
│   └── domain/
│       └── pricing-model.md
├── src/
│   ├── billing/
│   │   ├── CONTEXT.md         # Module-scoped business context
│   │   └── ...
│   └── ...
└── scripts/
    └── sync-context.sh        # Sync confidential context from private store
```
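The manifest is the discovery index an agent reads before loading anything else. A minimal sketch of what one might contain, generated here into a temporary directory so it runs standalone (the "Covers" descriptions and token estimates are illustrative, not prescribed by the spec):

```shell
# Sketch: a minimal .context/MANIFEST.md. File names match the tree above;
# coverage notes and token estimates are invented for illustration.
dir=$(mktemp -d)
mkdir -p "$dir/.context"
cat > "$dir/.context/MANIFEST.md" <<'EOF'
Context Manifest
Read this index first; load only the files relevant to your task.

| File | Covers | ~Tokens |
|---|---|---|
| domain/subscriptions.md | Subscription lifecycle, trial rules | 800 |
| domain/invoicing.md | Invoice generation, proration | 600 |
| decisions/001-event-sourcing.md | Why billing is event-sourced | 400 |
| constraints/payment-regulations.md | Payment regulatory constraints | 500 |
EOF
```

The token column is what makes progressive disclosure work: an agent can weigh the cost of loading a file before reading it.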
An AI agent working on a billing bug:
- Reads AGENTS.md (always loaded) → knows build/test commands
- Reads ARCHITECTURE.md → sees billing depends on subscriptions and payments
- Scans MANIFEST.md → finds `domain/invoicing.md` and `constraints/payment-regulations.md` are relevant
- Reads `src/billing/CONTEXT.md` → learns business rules, module boundaries, non-obvious decisions
- Loads the specific domain file(s) needed → gets the full business context
Total cost: ~2,000-4,000 tokens (~1-2% of a 200k context window). Compare that to an agent spending 10,000+ tokens on exploratory file reads, trying to reverse-engineer business rules that may not even be visible in the code.
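To make the walkthrough concrete, here is a sketch of a module-scoped `CONTEXT.md` for `src/billing`, written to a temporary directory so it runs standalone. Every rule below is invented for illustration; substitute your module's actual rules:

```shell
# Sketch: a module-scoped CONTEXT.md for src/billing.
# All rules and decisions shown are hypothetical examples.
dir=$(mktemp -d)
mkdir -p "$dir/src/billing"
cat > "$dir/src/billing/CONTEXT.md" <<'EOF'
billing — Module Context

Business rules:
- Invoices are immutable once issued; corrections are separate credit notes.
- Proration is computed in the customer's billing timezone, not UTC.

Boundaries:
- Depends on subscriptions and payments; must never import from reporting.

Non-obvious decisions:
- Amounts are integer minor units (cents) to avoid floating-point rounding.
EOF
```

Short, discrete statements like these are what an agent can actually apply while editing the module, unlike a prose wiki page.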
```shell
# Copy the template directory into your project
cp -r template/.context/ your-project/.context/
cp template/ARCHITECTURE.md your-project/
cp template/scripts/sync-context.sh your-project/scripts/

# Edit the files for your project
```

- Create `ARCHITECTURE.md` with a module map and key boundaries.
- Create `.context/MANIFEST.md` with the skeleton structure.
- Add `CONTEXT.md` to your 2-3 most business-critical source directories.
- Add `.context/domain/` files for your 2-3 most important domain concepts.
- Add the "Project Context" pointer to your AGENTS.md / CLAUDE.md.
Don't try to document everything at once. Start with the knowledge that causes the most bugs when AI (or new developers) don't have it.
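As part of setup you will also want the confidential overlay wired up. A minimal sketch of what `scripts/sync-context.sh` might do; the template's real script may differ. A demo stand-in for the private store is created here so the sketch runs standalone, and `CONTEXT_STORE` is a hypothetical variable you would point at your actual secure location:

```shell
# Sketch of scripts/sync-context.sh: pull confidential context from a
# private store into the gitignored .context.local/ overlay.
# The demo store below exists only so this runs standalone.
set -eu
demo=$(mktemp -d)
cd "$demo"
mkdir -p store/domain
echo 'Pricing model (confidential)' > store/domain/pricing-model.md
CONTEXT_STORE="$demo/store"   # hypothetical: point at your secure store

# The sync itself: mirror the private store into the overlay
mkdir -p .context.local
cp -R "$CONTEXT_STORE/." .context.local/

# Ensure the overlay never gets committed
grep -qx '\.context\.local/' .gitignore 2>/dev/null || \
  printf '.context.local/\n' >> .gitignore
```

Keeping the sync idempotent (safe to re-run) means agents or developers can refresh confidential context at the start of any session.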
- AI-first, not human-first. Every structural decision — the manifest index, token budgets, discrete business rules, explicit module boundaries — is designed for an AI agent operating within a context window. Humans can read everything; the structure serves AI.
- Codification, not documentation. Context files are version-controlled, code-reviewed, freshness-tracked, and structured for machine consumption. This is not a wiki — it is load-bearing infrastructure that agents depend on to produce correct output.
- Progressive disclosure. Agents scan a manifest to decide what to load, not everything at once.
- Graceful degradation. Missing context means "ask the developer," not "hallucinate."
- AI-managed, human-governed. AI maintains the files; humans review and approve changes.
- Framework agnostic. Plain markdown files. Works with any AI agent, IDE, or SDD framework.
- Confidential context support. A gitignored `.context.local/` overlay for sensitive business logic.
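Freshness tracking can be as simple as a script run in CI. A sketch assuming each context file carries a `last-reviewed: YYYY-MM-DD` line, which is an illustrative convention, not one this README mandates (a deliberately old demo file is created so the check fires):

```shell
# Sketch: flag context files not reviewed in the last 90 days.
# Assumes a 'last-reviewed: YYYY-MM-DD' line per file (hypothetical convention).
dir=$(mktemp -d)
printf 'last-reviewed: 2020-01-01\nSubscriptions\n' > "$dir/subscriptions.md"

# Cutoff date 90 days ago (GNU date, with a BSD/macOS fallback)
cutoff=$(date -d '90 days ago' +%Y-%m-%d 2>/dev/null || date -v-90d +%Y-%m-%d)

stale=""
for f in "$dir"/*.md; do
  reviewed=$(sed -n 's/^last-reviewed: //p' "$f")
  # Compare ISO dates numerically by stripping the dashes
  if [ -n "$reviewed" ] && \
     [ "$(echo "$reviewed" | tr -d -)" -lt "$(echo "$cutoff" | tr -d -)" ]; then
    stale="$stale $f"
    echo "STALE: $f (last reviewed $reviewed)"
  fi
done
```

A check like this keeps "freshness-tracked" enforceable rather than aspirational: a stale file fails the build instead of quietly rotting.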
Domain Context is complementary to spec-driven development, not a replacement. SDD frameworks manage The What (current development execution). Domain Context manages The Why (durable domain knowledge).
The integration is bidirectional:
- SDD consumes .context/: Planning phases read domain files to ground specs in existing knowledge.
- SDD contributes to .context/: After features complete, new domain knowledge is extracted from specs into the durable layer.
See SPEC.md § 10.1 for detailed integration patterns.
The full specification is in SPEC.md. It covers:
- The three concerns model and design principles
- Complete directory structure and file format specifications
- Confidentiality model with access tiers and the overlay pattern
- Freshness tracking and maintenance workflows
- Language-specific guidance for Python, UI, and AI tooling projects
- Integration points with SDD frameworks, CI/CD, and agent hooks
- Anti-patterns and adoption paths
- Python SaaS Platform — A subscription management platform demonstrating the full pattern with domain models, ADRs, constraints, and module context files.
- React Dashboard — A frontend application showing how the pattern adapts for UI projects with user flows, design system context, and component documentation.
This specification builds on work from several communities:
- Codified Context Infrastructure (Vasileiadis, 2026) — Tiered architecture for machine-consumable project knowledge
- Codebase Context Specification (Agentic Insights, 2024) — `.context` directory convention for AI-readable project documentation
- AGENTS.md (OpenAI, 2025) — Vendor-neutral agent instruction files
- llms.txt (Howard / Answer.AI, 2024) — Machine-readable documentation standard
- Architecture Decision Records (Nygard, 2011) — Lightweight decision capture
- Advanced Context Engineering (HumanLayer, 2025) — Context window management for coding agents
- SDD frameworks: GSD, Spec Kit, BMAD, Kiro
The specification (SPEC.md) is licensed under CC-BY-4.0. Template files, examples, and tooling code are licensed under MIT.
This is an early-stage specification. Feedback, questions, and contributions are welcome via issues and pull requests. See the open questions in SPEC.md § 13 for areas actively seeking community input.