Audit and score your AGENTS.md, CLAUDE.md, .cursorrules, or any agent context file against a research-backed rubric.
Scores your agent context file across 16 categories grouped into 3 pillars:
- 🔧 Functional (35%): Build commands, implementation details, architecture, code style, testing, dependencies
- 🛡️ Safety (40%): Security, performance, error handling, environment
- 📋 Meta (25%): Documentation, communication, workflow, constraints, examples, versioning
Safety carries the highest weight because the underlying study found that only 14.5% of the 2,303 context files analyzed specify security or performance rules.
Visit robobobby.github.io/agentlint and paste your file. Everything runs client-side; nothing leaves your browser.
You can also fetch directly from a GitHub URL (paste any repo's blob URL and it is converted to the raw URL automatically).
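The blob-to-raw conversion mentioned above follows GitHub's standard URL scheme. A minimal sketch (the tool's actual implementation may differ; `blob_to_raw` is a hypothetical helper name):

```python
from urllib.parse import urlparse

def blob_to_raw(url: str) -> str:
    """Convert a GitHub blob URL to its raw.githubusercontent.com equivalent.

    e.g. https://github.com/user/repo/blob/main/AGENTS.md
      -> https://raw.githubusercontent.com/user/repo/main/AGENTS.md
    """
    parts = urlparse(url)
    if parts.netloc != "github.com":
        return url  # not a GitHub URL; leave unchanged
    segments = parts.path.strip("/").split("/")
    # expected shape: user / repo / "blob" / branch / path...
    if len(segments) >= 5 and segments[2] == "blob":
        user, repo, _, branch = segments[:4]
        path = "/".join(segments[4:])
        return f"https://raw.githubusercontent.com/{user}/{repo}/{branch}/{path}"
    return url
```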
```bash
# Score a file
python3 agentlint.py AGENTS.md

# JSON output
python3 agentlint.py CLAUDE.md --json --pretty

# HTML report
python3 agentlint.py .cursorrules --html > report.html

# From stdin
cat AGENTS.md | python3 agentlint.py -
```

Or install via pip:

```bash
pip install agentlint
agentlint AGENTS.md
```

Each category gets a score of 0-10 based on:
- Pattern matching: Relevant keywords, commands, and structural markers
- Signal density: More instances raise confidence (with diminishing returns via log2)
- Header bonus: Dedicated sections score higher than scattered mentions
Overall grade: A+ through F, derived from weighted pillar averages.
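The scheme above can be sketched as follows. This is illustrative only: the log2 scaling, header bonus, and grade cutoffs are assumptions, not the exact values in `agentlint.py`; only the pillar weights (35/40/25) come from the description above.

```python
import math

# Pillar weights as documented: Functional 35%, Safety 40%, Meta 25%.
PILLAR_WEIGHTS = {"functional": 0.35, "safety": 0.40, "meta": 0.25}

def category_score(hits: int, has_header: bool) -> float:
    """Score one category 0-10 from pattern-match count and header presence."""
    if hits == 0:
        return 0.0
    base = min(8.0, 2.0 * math.log2(1 + hits))  # diminishing returns via log2
    bonus = 2.0 if has_header else 0.0          # dedicated section scores higher
    return min(10.0, base + bonus)

def overall(pillar_averages: dict[str, float]) -> float:
    """Weighted average of per-pillar mean scores, on a 0-10 scale."""
    return sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_averages.items())

def grade(score: float) -> str:
    """Map a 0-10 overall score to a letter grade (thresholds illustrative)."""
    for cutoff, letter in [(9.5, "A+"), (9.0, "A"), (8.0, "B"), (7.0, "C"), (6.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```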
| # | Category | Pillar | Study Prevalence |
|---|---|---|---|
| 1 | Build & Run Commands | Functional | 62.3% |
| 2 | Implementation Details | Functional | 69.9% |
| 3 | Architecture | Functional | 67.7% |
| 4 | Code Style | Functional | ~55% |
| 5 | Testing | Functional | ~50% |
| 6 | Dependencies | Functional | ~40% |
| 7 | Security | Safety | 14.5% |
| 8 | Performance | Safety | 14.5% |
| 9 | Error Handling | Safety | ~25% |
| 10 | Environment | Safety | ~35% |
| 11 | Documentation | Meta | ~45% |
| 12 | Communication | Meta | ~40% |
| 13 | Workflow | Meta | ~50% |
| 14 | Constraints & Boundaries | Meta | ~30% |
| 15 | Examples | Meta | ~35% |
| 16 | Versioning & Maintenance | Meta | ~20% |
After auditing, grab a badge for your README:
[-brightgreen)](https://github.com/robobobby/agentlint)

The web UI generates the badge markdown automatically.
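For reference, a shields.io static badge line generally takes this shape; the label, grade, and color shown here are illustrative, not the tool's exact output:

```markdown
[![AgentLint](https://img.shields.io/badge/AgentLint-A%2B-brightgreen)](https://github.com/robobobby/agentlint)
```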
The rubric is derived from "Agent READMEs: An Empirical Study of Context Files for Agentic Coding" (2025), which analyzed 2,303 context files from 1,925 repositories across Claude Code, OpenAI Codex, and GitHub Copilot.
- CLI: Pure Python 3.10+ stdlib. No pip install needed.
- Web: Single HTML file. No build step. No framework. No tracking.
| Tool | What it tests | Link |
|---|---|---|
| AgentLint | Agent configuration files | robobobby.github.io/agentlint |
| AgentEval | Agent conversation behavior | robobobby.github.io/agenteval |
AgentLint checks your agent's configuration. AgentEval checks your agent's behavior. Use both.
MIT