Give your AI agent a real map of your codebase, not a polite fiction.
Reads Understand-Anything knowledge graphs and emits AGENTS.md, CLAUDE.md, and docs/agents/. Every line traces back to a graph node or edge. Nothing is invented.
Most AI context files are one of two things: empty scaffolds stuffed with [to fill] placeholders, or
confident-sounding LLM boilerplate with no connection to the actual code. Either way, they actively hurt. A 2025 ETH Zurich
study found that auto-generated context files reduced task success in 5 of 8 test settings and added up to 4 extra
steps per task.
The problem is not the format. It is the evidence.
| Typical context tooling | agent-context |
|---|---|
| Generates plausible-sounding filler | Emits only what the graph supports |
| Requires manual editing of dozens of `[to fill]` slots | One `[to fill]` remains (Mock stance in `testing.md`), because that genuinely needs you |
| Safety rules live as polite markdown suggestions | Safety rules are wired as deny-list hooks in `.claude/settings.json` |
| One giant file loaded every session | Three-tier loading: always-on kernel, path-scoped rules, on-demand depth |
| Re-run to get the same scaffold again | Freshness check: warns when the graph is stale vs HEAD |
```
┌──────────────────────┐      ┌───────────────────────────────┐      ┌─────────────────────────────┐
│  Your repo           │      │  Understand-Anything          │      │  agent-context              │
│                      │      │                               │      │                             │
│  src/                │      │  /understand                  │      │  /agent-context             │
│  package.json        │─────▶│    knowledge-graph.json       │─────▶│    AGENTS.md                │
│  go.mod              │      │                               │      │    docs/agents/             │
│  ...                 │      │  /understand-domain           │      │    .claude/settings.json    │
│                      │      │    domain-graph.json          │─────▶│    CLAUDE.md                │
└──────────────────────┘      └───────────────────────────────┘      └─────────────────────────────┘
       your code                   real analysis graphs                    context files
```
Understand-Anything does the heavy lifting: it walks your source, identifies architectural layers, maps import relationships, and builds a dependency-ordered guided tour of the codebase. agent-context is a translator: it takes those graphs and renders them into the specific files AI agents read. The two tools are loosely coupled; you can regenerate either independently.
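To make the hand-off concrete, here is a purely illustrative sketch of the graph JSON that crosses that boundary. Only `project.gitCommitHash` is attested by this document (the freshness check reads it); every other key and value below is invented for illustration, not the real schema (the authoritative schemas live in the implementation spec):

```json
{
  "project": {
    "name": "cadence-admin",
    "gitCommitHash": "0c97930"
  },
  "layers": [
    { "name": "routes", "files": ["src/routes/users.ts"] }
  ],
  "tourSteps": [
    { "order": 1, "file": "src/index.ts", "note": "entry point" }
  ]
}
```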
```
your-repo/
│
├── AGENTS.md                ← the kernel (<100 lines, loaded every session)
├── CLAUDE.md                ← one-line shim: @AGENTS.md
├── CLAUDE.local.md          ← gitignored personal prefs
├── CONVENTIONS.md           ← starter stub | legacy AGENTS.md copy | skipped (user choice on first run)
│
├── .claude/
│   └── settings.json        ← deny-list hooks (rm -rf, force-push, .env writes...)
│
├── .cursor/rules/
│   └── agents.mdc           ← Cursor rules (synced from AGENTS.md)
│
├── .github/
│   └── copilot-instructions.md   ← GitHub Copilot (synced from AGENTS.md)
│
├── .codex/
│   └── instructions.md      ← OpenAI Codex (synced from AGENTS.md)
│
├── .aider.conf.yml          ← Aider config (points to CONVENTIONS.md)
│
└── docs/agents/             ← on-demand depth (loaded only when referenced)
    ├── architecture.md      project overview · stack · quick start · layer map · guided tour
    ├── flow.md              domain flows · entry points · triggers (single file when ≤8 flows)
    ├── flows/               folder mode (>8 flows): index.md + one file per domain
    │   ├── index.md         domains, flow counts, cross-domain edges
    │   └── <domain>.md      per-domain flows with entry points and triggers
    ├── patterns.md          complexity hotspots · function exemplars · hub imports
    ├── glossary.md          domain vocabulary from the domain graph (or stub)
    ├── conventions.md       team coding standards (if CONVENTIONS.md exists)
    ├── testing.md           runner · file layout · single-test command
    └── tech-debt.md         known gotchas format (you fill this over time)
```
With `--with-ci`, two additional files are generated:
```
├── .github/workflows/
│   └── agent-context-freshness.yml   ← CI freshness check
└── hooks/
    └── check-freshness.sh            ← pre-commit hook
```
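For orientation, a minimal sketch of what the generated workflow might contain; the trigger, job, and step names here are assumptions, not the plugin's actual output:

```yaml
name: agent-context freshness
on: [push, pull_request]
jobs:
  freshness:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check graph staleness vs HEAD
        run: bash hooks/check-freshness.sh
```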
agent-context does not analyse code. Understand-Anything does.
```
/plugin marketplace add Lum1104/Understand-Anything
/plugin install understand-anything
```

Then, inside the repo you want to generate context for:

```
/understand
```

This produces `./understand-anything/knowledge-graph.json`. For a populated glossary and business flow map, also run:

```
/understand-domain
```

Then install and run agent-context:

```
/plugin marketplace add jonaskahn/agent-context
/plugin install agent-context
/agent-context
```

All output files are written in one pass.
Every AI agent loads AGENTS.md on every turn. It is deliberately kept under 100 lines, because every irrelevant token
in a shared attention window degrades output quality across all frontier models. When the kernel is tight and accurate,
agents stop asking "where does X live?" and start making correct edits on the first try.
Each section is derived directly from graph data, never invented:
| # | Section | Source |
|---|---|---|
| 1 | Architectural Altitude | Tour steps: the dependency-ordered walkthrough of the codebase |
| 2 | Module Map | Layers: pre-computed by Understand-Anything, one bullet per layer |
| 3 | Commands | `package.json` / `pyproject.toml` / `Cargo.toml` / `go.mod` |
| 4 | Non-Obvious Conventions | Cross-layer import anomalies, naming deviators, path/layer disagreements |
| 5 | Safety | Static rules, enforced by `.claude/settings.json` hooks |
| 6 | Deeper Context | Pointers to `docs/agents/`, loaded on demand, not always |
If a section has no graph evidence behind it, it is omitted entirely. No padding. No filler.
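To make the shape concrete, here is a hypothetical excerpt of a generated kernel. The section names come from the table above, but every line of content below is invented for illustration:

```markdown
## 1. Architectural Altitude
Routes call services; services call the store. Nothing imports upward.

## 3. Commands
npm run dev · npm test · npm run build

## 6. Deeper Context
Flows: docs/agents/flow.md · Hotspots: docs/agents/patterns.md
```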
These files are never loaded automatically. They exist to be referenced: `@docs/agents/architecture.md` in a task
prompt, or linked from AGENTS.md §6. This keeps the always-on token budget low while still making deep context
available when a task actually needs it.
- `architecture.md`: project name, one-line summary, detected framework and language, quick-start commands, then every layer mapped with files sorted by inbound-import count. Entry points (zero incoming imports) are called out separately. Cross-layer dependency counts show where the architecture has coupling.
- `flow.md` or `flows/`: strictly derived from `domain-graph.json`. Each business flow is listed with its trigger type and entry-point file. Single-file mode (`docs/agents/flow.md`) when the domain graph has ≤8 flows; folder mode (`docs/agents/flows/index.md` + one file per domain) when it has more. The plugin auto-migrates between modes on subsequent runs; old artefacts are deleted before the new mode is written. When the domain graph is missing or empty, `flow.md` becomes a one-line stub pointing at `/understand-domain` (no import-graph fallback; that lives in `architecture.md`).
- `patterns.md`: files the analyser flagged as `complex`, with their function counts; representative functions per layer with `file:line` references; the ten most-imported "hub" files whose changes ripple everywhere.
- `glossary.md`: populated from the domain graph's non-heuristic entries. If the domain analysis is purely structural, this file is an honest stub that tells you what to do next rather than emitting noise.
- `testing.md`: derives the runner, file-layout convention, and single-test command from the build manifest and node path patterns. The one remaining `[to fill]` lives here: Mock stance, because no graph can tell you whether your team mocks the database.
- `tech-debt.md`: a stub with entry-format instructions. Fills over time as real work surfaces real gotchas.
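The single-file vs folder-mode decision described above can be sketched in a few lines. This is an illustration of the documented behavior (the ≤8 threshold, the stub fallback, and artefact deletion on migration); the function names are hypothetical, not the plugin's internals:

```python
FLOW_MODE_THRESHOLD = 8  # documented cutoff: <=8 flows stays in one file

def choose_flow_mode(flows):
    """Pick the output mode for the business-flow docs.

    no flows  -> one-line stub pointing at /understand-domain
    <=8 flows -> single docs/agents/flow.md
    >8 flows  -> docs/agents/flows/ folder (index.md + per-domain files)
    """
    if not flows:
        return "stub"
    return "single" if len(flows) <= FLOW_MODE_THRESHOLD else "folder"

def files_to_delete(old_mode, new_mode):
    """On mode migration, old artefacts are removed before the new mode is written."""
    if old_mode == new_mode:
        return []
    if old_mode == "folder":
        return ["docs/agents/flows/"]
    return ["docs/agents/flow.md"]
```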
```
/agent-context [path] [--force] [--dry-run] [--with-ci]
```
| Flag | What it does |
|---|---|
| `[path]` | Target repo directory. Defaults to the current working directory. |
| `--force` | Overwrite existing output files. `.claude/settings.json` and `.gitignore` always merge regardless of this flag. |
| `--dry-run` | Print every file that would be written to stdout. Touch nothing on disk. |
| `--with-ci` | Also generate `.github/workflows/agent-context-freshness.yml` and `hooks/check-freshness.sh` for automated graph staleness checks. |
Sample output
```
agent-context – summary

Gates:
  ✓ knowledge-graph.json present (v1.0.0, analysed 2026-04-23, commit 0c97930)
  ✓ domain-graph.json present (quality: mixed)

Files:
  ✓ AGENTS.md (84 lines)
  ✓ CLAUDE.md (1 line)
  ✓ CLAUDE.local.md (3 lines)
  ✓ .claude/settings.json (created)
  ✓ docs/agents/architecture.md (301 lines)
  ✓ docs/agents/flow.md (38 lines)
  ✓ docs/agents/patterns.md (156 lines)
  ✓ docs/agents/glossary.md (48 lines; 6 entries)
  ✓ docs/agents/testing.md (22 lines)
  ✓ docs/agents/tech-debt.md (stub, 14 lines)
  ✓ .gitignore (appended CLAUDE.local.md)

Cross-vendor:
  ✓ .cursor/rules/agents.mdc (synced from AGENTS.md)
  ✓ .github/copilot-instructions.md (synced from AGENTS.md)
  ✓ .codex/instructions.md (synced from AGENTS.md)
  ✓ CONVENTIONS.md (synced from AGENTS.md)
  ✓ .aider.conf.yml (created)

Lint (AGENTS.md, 6 checks):
  ✓ 6/6 passed

Next:
  1. Review AGENTS.md – confirm commands and conventions look right.
  2. Hand-curate docs/agents/glossary.md if business terms are missing.
  3. Fill Mock stance in docs/agents/testing.md.
```
When the knowledge graph is stale relative to HEAD, a banner is inserted directly into AGENTS.md:
```
⚠ knowledge-graph.json was generated against commit 0c97930 but the
  repo is at abc1234. Generated files may be out of date.
  Re-run /understand for the best results.
```
The plugin compares `project.gitCommitHash` in the graph against `git rev-parse HEAD`. If they differ, the stale banner
appears in AGENTS.md. To regenerate after new commits:
```
/understand
/agent-context --force
```
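The comparison itself is a plain string check plus a banner render. A minimal sketch, assuming the commit hashes have already been read from the graph and from `git rev-parse HEAD`; the function name is hypothetical and the real banner wording may differ slightly:

```python
from typing import Optional

def freshness_banner(graph_commit: str, head_commit: str) -> Optional[str]:
    """Return a stale-graph warning banner, or None when the graph matches HEAD."""
    if graph_commit == head_commit:
        return None  # graph is fresh; no banner is inserted
    return (
        f"⚠ knowledge-graph.json was generated against commit {graph_commit[:7]} "
        f"but the repo is at {head_commit[:7]}. Generated files may be out of date. "
        "Re-run /understand for the best results."
    )
```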
**Evidence over scaffolding.** Every line in the output traces back to a graph node, edge, layer, or build manifest entry. If the evidence does not exist, the line does not exist. This is what prevents the "confident but wrong" output that makes auto-generated context files so damaging.
**Hooks, not prose.** A sentence in markdown saying "don't run rm -rf" is a suggestion. A deny-list entry in
`.claude/settings.json` is enforcement. Destructive operations, secret writes, force-pushes, and migration-path edits
are blocked at the hook layer, not requested in prose.
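As a minimal sketch of what such a deny list can look like in `.claude/settings.json`: the `permissions.deny` key is Claude Code's standard mechanism, but these specific patterns are illustrative, not the plugin's actual generated rules:

```json
{
  "permissions": {
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push --force:*)",
      "Write(.env)",
      "Write(.env.*)"
    ]
  }
}
```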
**Three-tier loading.** AGENTS.md stays under 100 lines and is loaded on every turn. `docs/agents/*.md` files are
on-demand; they exist to be referenced, not auto-loaded. This architecture keeps the always-on token budget low without
sacrificing depth.
**One source of truth.** AGENTS.md is canonical. CLAUDE.md is a one-line shim (`@AGENTS.md`). Cross-vendor
files (`.cursor/rules/agents.mdc`, `.github/copilot-instructions.md`, `.codex/instructions.md`, `CONVENTIONS.md`) are
all derived from the same rendered AGENTS.md content. One edit, every tool synced.
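The sync is a projection from one rendered string to several files. A sketch of that idea, using the file names documented here; the function name and the exact `.mdc` frontmatter fields are assumptions for illustration:

```python
def render_vendor_files(agents_md: str) -> dict:
    """Project the canonical AGENTS.md content into each vendor's file."""
    # Cursor wants the same content wrapped in .mdc frontmatter
    # (these frontmatter fields are illustrative).
    mdc = "---\ndescription: Project rules\nalwaysApply: true\n---\n" + agents_md
    return {
        "CLAUDE.md": "@AGENTS.md\n",                   # one-line shim
        ".cursor/rules/agents.mdc": mdc,               # content + frontmatter
        ".github/copilot-instructions.md": agents_md,  # verbatim
        ".codex/instructions.md": agents_md,           # verbatim
    }
```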
**Self-linting output.** After writing AGENTS.md, the plugin reads it back and runs six mechanical checks: line count
≤100, numbered H2s only, bold taglines, "The test:" sentences, no ALLCAPS safety keywords, single code fence. Failures
are reported in the summary but never block the run; partial output beats no output.
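Three of those six checks are easy to sketch; the regexes below are assumptions about how the checks could be implemented, not the plugin's actual rules:

```python
import re

def lint_agents_md(text: str) -> list:
    """Run a subset of the post-write lint checks; return failure descriptions."""
    lines = text.splitlines()
    failures = []
    if len(lines) > 100:                      # kernel must stay under 100 lines
        failures.append("line count > 100")
    h2s = [l for l in lines if l.startswith("## ")]
    if any(not re.match(r"^## \d+\.", h) for h in h2s):
        failures.append("unnumbered H2")      # every H2 must be numbered
    if text.count("```") != 2:                # exactly one fence pair
        failures.append("not exactly one code fence")
    return failures
```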
Every AI coding tool reads a different file. agent-context writes all of them from the same source:
| Tool | File | How it's generated |
|---|---|---|
| Claude Code | `AGENTS.md` + `CLAUDE.md` | Primary output: the kernel |
| Cursor | `.cursor/rules/agents.mdc` | AGENTS.md content with `.mdc` frontmatter |
| GitHub Copilot | `.github/copilot-instructions.md` | AGENTS.md verbatim |
| OpenAI Codex | `.codex/instructions.md` | AGENTS.md verbatim |
| Aider | `CONVENTIONS.md` + `.aider.conf.yml` | Starter stub / legacy AGENTS.md copy / skip (user choice) |
All vendor files follow the same skip/force/dry-run logic as core files. When you run /agent-context --force, every
vendor file is regenerated from the current graph state.
**Existing CONVENTIONS.md?** If your repo already has a `CONVENTIONS.md` (or `conventions.md`) with team-authored
coding standards, the plugin won't overwrite it. Instead, it reads the content and merges it into AGENTS.md as a
dedicated §7 Team Conventions section, which then propagates to every vendor file. Human-authored rules take
priority over graph-derived defaults.
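The merge step above amounts to appending the human-authored content as a numbered section. A sketch under that assumption; the function name is hypothetical and the real renderer may normalize the content further:

```python
from typing import Optional

def merge_conventions(agents_md: str, conventions_md: Optional[str]) -> str:
    """Append team-authored CONVENTIONS.md content as the §7 section of the kernel."""
    if not conventions_md or not conventions_md.strip():
        return agents_md  # nothing human-authored to merge
    return (
        agents_md.rstrip()
        + "\n\n## 7. Team Conventions\n\n"
        + conventions_md.strip()
        + "\n"
    )
```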
**No CONVENTIONS.md yet?** On the first run the plugin asks how to handle it via an interactive prompt:
| Option | Behavior |
|---|---|
| Starter stub (default) | Write a minimal CONVENTIONS.md with empty Safety / Naming / Patterns / Workflow sections so the team can fill them in. The next run picks up the populated file and distills it into docs/agents/conventions.md. |
| Skip | No CONVENTIONS.md is created. Other outputs are unaffected. |
| Legacy | Write CONVENTIONS.md as a verbatim copy of AGENTS.md (the previous default β useful when you want Aider to consume the same kernel). |
Under `--dry-run` or in headless invocations the prompt is skipped and the starter-stub option is used.
Works with any repo that Understand-Anything can analyse. Tested against TypeScript/Nuxt, Python, Go, and Rust projects. The plugin does not read or modify source files β it writes only the files listed in the output tree above.
The complete implementation spec lives at agent-context-plugin/skills/agent-context/references/PLAN.md. It contains
the authoritative graph schemas, the 20-rule AGENTS.md style rubric, per-file content contracts, all rendering
templates, the five convention-mining signals, and the cadence-admin reference sample with expected output.