agent-context

Give your AI agent a real map of your codebase, not a polite fiction.

Reads Understand-Anything knowledge graphs → emits AGENTS.md, CLAUDE.md, docs/agents/.
Every line traces back to a graph node or edge. Nothing is invented.

Most AI context files are one of two things: empty scaffolds stuffed with [to fill] placeholders, or confident-sounding LLM boilerplate with no connection to actual code. Either way, they actively hurt. A 2025 ETH Zurich study found auto-generated context files reduced task success in 5 of 8 test settings and added up to 4 extra steps per task.

The problem is not the format. It is the evidence.

✨ What makes this different

| Typical context tooling | agent-context |
|---|---|
| Generates plausible-sounding filler | Emits only what the graph supports |
| Requires manual editing of dozens of [to fill] slots | One [to fill] remains (Mock stance in testing.md) because that genuinely needs you |
| Safety rules live as polite markdown suggestions | Safety rules are wired as deny-list hooks in .claude/settings.json |
| One giant file loaded every session | Three-tier loading: always-on kernel, path-scoped rules, on-demand depth |
| Re-run to get the same scaffold again | Freshness check warns when the graph is stale vs HEAD |

🔄 How it works

 ┌─────────────────────┐     ┌──────────────────────────────┐     ┌────────────────────────────┐
 │     Your repo       │     │      Understand-Anything     │     │       agent-context        │
 │                     │     │                              │     │                            │
 │  src/               │     │  /understand                 │     │  /agent-context            │
 │  package.json  ──────────►│  knowledge-graph.json   ──────────►│  AGENTS.md                 │
 │  go.mod             │     │                              │     │  docs/agents/              │
 │  ...                │     │  /understand-domain          │     │  .claude/settings.json     │
 │                     │     │  domain-graph.json      ──────────►│  CLAUDE.md                 │
 └─────────────────────┘     └──────────────────────────────┘     └────────────────────────────┘
       your code                   real analysis graphs                  context files

Understand-Anything does the heavy lifting: it walks your source, identifies architectural layers, maps import relationships, and builds a dependency-ordered guided tour of the codebase. agent-context is a translator: it takes those graphs and renders them into the specific files AI agents read. The two tools are loosely coupled; you can regenerate either independently.

πŸ“ Output at a glance

your-repo/
│
├── AGENTS.md                ◄── 🔑 the kernel  (<100 lines, loaded every session)
├── CLAUDE.md                ◄── 🔗 one-line shim: @AGENTS.md
├── CLAUDE.local.md          ◄── 🔒 gitignored personal prefs
├── CONVENTIONS.md           ◄── 🔄 starter stub | legacy AGENTS.md copy | skipped (user choice on first run)
│
├── .claude/
│   └── settings.json        ◄── 🛡️ deny-list hooks (rm -rf, force-push, .env writes...)
│
├── .cursor/rules/
│   └── agents.mdc           ◄── 🔄 Cursor rules (synced from AGENTS.md)
│
├── .github/
│   └── copilot-instructions.md  ◄── 🔄 GitHub Copilot (synced from AGENTS.md)
│
├── .codex/
│   └── instructions.md      ◄── 🔄 OpenAI Codex (synced from AGENTS.md)
│
├── .aider.conf.yml          ◄── 🔄 Aider config (points to CONVENTIONS.md)
│
└── docs/agents/             ◄── 📚 on-demand depth (loaded only when referenced)
    ├── architecture.md           project overview · stack · quick start · layer map · guided tour
    ├── flow.md                   domain flows · entry points · triggers (single file when ≤8 flows)
    ├── flows/                    folder mode (>8 flows): index.md + one file per domain
    │   ├── index.md              domains, flow counts, cross-domain edges
    │   └── <domain>.md           per-domain flows with entry points and triggers
    ├── patterns.md               complexity hotspots · function exemplars · hub imports
    ├── glossary.md               domain vocabulary from the domain graph (or stub)
    ├── conventions.md            team coding standards (if CONVENTIONS.md exists)
    ├── testing.md                runner · file layout · single-test command
    └── tech-debt.md              known gotchas format (you fill this over time)

With --with-ci, two additional files are generated:

├── .github/workflows/
│   └── agent-context-freshness.yml  ◄── ⚙️ CI freshness check
└── hooks/
    └── check-freshness.sh           ◄── ⚙️ pre-commit hook

🚀 Getting started

Step 1: Analyse your repo with Understand-Anything

agent-context does not analyse code. Understand-Anything does.

/plugin marketplace add Lum1104/Understand-Anything
/plugin install understand-anything

Then, inside the repo you want to generate context for:

/understand

This produces ./understand-anything/knowledge-graph.json. For a populated glossary and business flow map, also run:

/understand-domain

Step 2: Install agent-context

/plugin marketplace add jonaskahn/agent-context
/plugin install agent-context

Step 3: Generate

/agent-context

All output files are written in one pass.

🔑 AGENTS.md: the always-on kernel

Every AI agent loads AGENTS.md on every turn. It is deliberately kept under 100 lines, because every irrelevant token in a shared attention window degrades output quality across all frontier models. When the kernel is tight and accurate, agents stop asking "where does X live?" and start making correct edits on the first try.

Each section is derived directly from graph data, never invented:

| # | Section | Source |
|---|---|---|
| 1 | Architectural Altitude | Tour steps: the dependency-ordered walkthrough of the codebase |
| 2 | Module Map | Layers, pre-computed by Understand-Anything, one bullet per layer |
| 3 | Commands | package.json / pyproject.toml / Cargo.toml / go.mod |
| 4 | Non-Obvious Conventions | Cross-layer import anomalies, naming deviators, path/layer disagreements |
| 5 | Safety | Static rules, enforced by .claude/settings.json hooks |
| 6 | Deeper Context | Pointers to docs/agents/, loaded on demand, not always |

If a section has no graph evidence behind it, it is omitted entirely. No padding. No filler.
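For flavour, a Module Map entry rendered from the layer data might read like the sketch below. The content is an invented illustration of the section style, not real plugin output:

```markdown
## 2. Module Map

- routes/: HTTP entry points; imports from services/, never from db/
- services/: business logic; the only layer allowed to touch db/
- db/: schema and query helpers; imported by services/ only
```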

📚 docs/agents/: on-demand depth

These files are never loaded automatically. They exist to be referenced (@docs/agents/architecture.md in a task prompt, or linked from AGENTS.md §6). This keeps the always-on token budget low while still making deep context available when a task actually needs it.

  • architecture.md: project name, one-line summary, detected framework and language, quick-start commands, then every layer mapped with files sorted by inbound-import count. Entry points (zero incoming imports) are called out separately. Cross-layer dependency counts show where the architecture has coupling.

  • flow.md or flows/: strictly derived from domain-graph.json. Each business flow is listed with its trigger type and entry-point file. Single-file mode (docs/agents/flow.md) applies when the domain graph has ≤8 flows; folder mode (docs/agents/flows/index.md plus one file per domain) when it has more. The plugin auto-migrates between modes on subsequent runs; old artefacts are deleted before the new mode is written. When the domain graph is missing or empty, flow.md becomes a one-line stub pointing at /understand-domain (no import-graph fallback; that lives in architecture.md).

  • patterns.md: files the analyser flagged as complex, with their function counts; representative functions per layer with file:line references; the ten most-imported "hub" files whose changes ripple everywhere.

  • glossary.md: populated from the domain graph's non-heuristic entries. If the domain analysis is purely structural, this file is an honest stub that tells you what to do next rather than emitting noise.

  • testing.md: derives the runner, file-layout convention, and single-test command from the build manifest and node path patterns. The one remaining [to fill] lives here: Mock stance, because no graph can tell you whether your team mocks the database.

  • tech-debt.md: a stub with entry-format instructions. It fills over time as real work surfaces real gotchas.

βš™οΈ Command reference

/agent-context [path] [--force] [--dry-run] [--with-ci]

| Flag | What it does |
|---|---|
| [path] | Target repo directory. Defaults to the current working directory. |
| --force | Overwrite existing output files. .claude/settings.json and .gitignore always merge regardless of this flag. |
| --dry-run | Print every file that would be written to stdout. Touch nothing on disk. |
| --with-ci | Also generate .github/workflows/agent-context-freshness.yml and hooks/check-freshness.sh for automated graph staleness checks. |
Sample output
agent-context — summary

Gates:
  ✓ knowledge-graph.json present (v1.0.0, analysed 2026-04-23, commit 0c97930)
  ✓ domain-graph.json present (quality: mixed)

Files:
  ✓ AGENTS.md                      (84 lines)
  ✓ CLAUDE.md                      (1 line)
  ✓ CLAUDE.local.md                (3 lines)
  ✓ .claude/settings.json          (created)
  ✓ docs/agents/architecture.md    (301 lines)
  ✓ docs/agents/flow.md            (38 lines)
  ✓ docs/agents/patterns.md        (156 lines)
  ✓ docs/agents/glossary.md        (48 lines; 6 entries)
  ✓ docs/agents/testing.md         (22 lines)
  ✓ docs/agents/tech-debt.md       (stub, 14 lines)
  ✓ .gitignore                     (appended CLAUDE.local.md)

Cross-vendor:
  ✓ .cursor/rules/agents.mdc        (synced from AGENTS.md)
  ✓ .github/copilot-instructions.md (synced from AGENTS.md)
  ✓ .codex/instructions.md          (synced from AGENTS.md)
  ✓ CONVENTIONS.md                  (synced from AGENTS.md)
  ✓ .aider.conf.yml                 (created)

Lint (AGENTS.md, 6 checks):
  ✓ 6/6 passed

Next:
  1. Review AGENTS.md — confirm commands and conventions look right.
  2. Hand-curate docs/agents/glossary.md if business terms are missing.
  3. Fill Mock stance in docs/agents/testing.md.

When the knowledge graph is stale relative to HEAD, a banner is inserted directly into AGENTS.md:

⚠ knowledge-graph.json was generated against commit 0c97930 but the
  repo is at abc1234. Generated files may be out of date.
  Re-run /understand for the best results.

πŸ” Keeping it fresh

The plugin compares project.gitCommitHash in the graph against git rev-parse HEAD. If they differ, the stale banner appears in AGENTS.md. To regenerate after new commits:

/understand
/agent-context --force
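The comparison the plugin performs can be sketched in a few lines of Python. The project.gitCommitHash field name comes from this README; treat the exact graph schema and file location as assumptions, not the plugin's actual implementation:

```python
import json
import subprocess

def is_stale(graph: dict, head_commit: str) -> bool:
    """True when the graph was built against a different commit than HEAD."""
    return graph.get("project", {}).get("gitCommitHash") != head_commit

def check_repo(graph_path: str = "understand-anything/knowledge-graph.json") -> bool:
    # Assumed graph location; the plugin may resolve it differently.
    with open(graph_path) as f:
        graph = json.load(f)
    head = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    return is_stale(graph, head)
```

A missing or malformed project block counts as stale, which errs on the side of re-running /understand.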

πŸ—οΈ Design decisions

πŸ” Evidence over scaffolding β€” Every line in the output traces back to a graph node, edge, layer, or build manifest entry. If the evidence does not exist, the line does not exist. This is what prevents the "confident but wrong" output that makes auto-generated context files so damaging.

🔒 Hooks, not prose: A sentence in markdown saying "don't run rm -rf" is a suggestion. A deny-list entry in .claude/settings.json is enforcement. Destructive operations, secret writes, force-pushes, and migration-path edits are blocked at the hook layer, not requested in prose.
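As an illustration, a deny list in .claude/settings.json could look like the fragment below. This is a hand-written sketch following Claude Code's permissions format, not the plugin's actual output; the specific patterns are assumptions:

```json
{
  "permissions": {
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push --force:*)",
      "Write(.env)",
      "Write(.env.*)"
    ]
  }
}
```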

πŸ“ Three-tier loading β€” AGENTS.md stays under 100 lines and is loaded on every turn. docs/agents/*.md files are on-demand β€” they exist to be referenced, not auto-loaded. This architecture keeps the always-on token budget low without sacrificing depth.

☝️ One source of truth: AGENTS.md is canonical. CLAUDE.md is a one-line shim (@AGENTS.md). Cross-vendor files (.cursor/rules/agents.mdc, .github/copilot-instructions.md, .codex/instructions.md, CONVENTIONS.md) are all derived from the same rendered AGENTS.md content. One edit, every tool synced.

✅ Self-linting output: After writing AGENTS.md, the plugin reads it back and runs 6 mechanical checks: line count ≤100, numbered H2s only, bold taglines, The test: sentences, no ALLCAPS safety keywords, single code fence. Failures are reported in the summary but never block the run; partial output beats no output.
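Two of those checks are mechanical enough to sketch directly. This is an illustrative subset in Python, not the plugin's actual linter:

```python
def lint_agents_md(text: str) -> list[str]:
    """Run two of the six mechanical checks described above (illustrative subset)."""
    failures = []
    lines = text.splitlines()
    # Check 1: the kernel must stay within the 100-line budget.
    if len(lines) > 100:
        failures.append(f"line count {len(lines)} exceeds 100")
    # Check 2: at most a single code fence (one fence == two ``` marker lines).
    fence_markers = sum(1 for line in lines if line.lstrip().startswith("`" * 3))
    if fence_markers > 2:
        failures.append("more than one code fence")
    return failures
```

Matching the plugin's behaviour, a caller would report the returned failures in the summary rather than aborting the run.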

🔄 Cross-vendor support

Every AI coding tool reads a different file. agent-context writes all of them from the same source:

| Tool | File | How it's generated |
|---|---|---|
| Claude Code | AGENTS.md + CLAUDE.md | Primary output: the kernel |
| Cursor | .cursor/rules/agents.mdc | AGENTS.md content with .mdc frontmatter |
| GitHub Copilot | .github/copilot-instructions.md | AGENTS.md verbatim |
| OpenAI Codex | .codex/instructions.md | AGENTS.md verbatim |
| Aider | CONVENTIONS.md + .aider.conf.yml | Starter stub / legacy AGENTS.md copy / skip (user choice) |

All vendor files follow the same skip/force/dry-run logic as core files. When you run /agent-context --force, every vendor file is regenerated from the current graph state.

Existing CONVENTIONS.md? If your repo already has a CONVENTIONS.md (or conventions.md) with team-authored coding standards, the plugin won't overwrite it. Instead, it reads the content and merges it into AGENTS.md as a dedicated §7 Team Conventions section, which then propagates to every vendor file. Human-authored rules take priority over graph-derived defaults.

No CONVENTIONS.md yet? On the first run the plugin asks how to handle it via an interactive prompt:

| Option | Behavior |
|---|---|
| Starter stub (default) | Write a minimal CONVENTIONS.md with empty Safety / Naming / Patterns / Workflow sections so the team can fill them in. The next run picks up the populated file and distills it into docs/agents/conventions.md. |
| Skip | No CONVENTIONS.md is created. Other outputs are unaffected. |
| Legacy | Write CONVENTIONS.md as a verbatim copy of AGENTS.md (the previous default, useful when you want Aider to consume the same kernel). |

Under --dry-run or in headless invocations the prompt is skipped and the starter-stub option is used.
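A starter stub matching the description above might look like the following. This is an illustrative sketch, not the plugin's actual template:

```markdown
# CONVENTIONS.md

<!-- Filled in by the team; a later /agent-context run distills this
     into docs/agents/conventions.md and AGENTS.md §7. -->

## Safety

## Naming

## Patterns

## Workflow
```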

🔧 Compatibility

Works with any repo that Understand-Anything can analyse. Tested against TypeScript/Nuxt, Python, Go, and Rust projects. The plugin does not read or modify source files; it writes only the files listed in the output tree above.

📖 Specification

The complete implementation spec lives at agent-context-plugin/skills/agent-context/references/PLAN.md. It contains the authoritative graph schemas, the 20-rule AGENTS.md style rubric, per-file content contracts, all rendering templates, the five convention-mining signals, and the cadence-admin reference sample with expected output.
