AI-native codebase assessment for Claude Code.
ARES installs a /ares skill into Claude Code, then gives Claude a structured
playbook for reviewing a repository the way an experienced agentic-readiness
reviewer would. The main product is not a lint pass. It is a rubric-driven,
evidence-backed repo assessment that ends with:
- a short summary inside Claude Code
- a full markdown report written into the repository
ARES also keeps the original deterministic local scanner as a secondary tool for teams that want an offline structural pass or machine-readable baseline output.
Install globally:

```
npm install -g ares-scan
```

If the npm package is not yet published or you want the latest GitHub version:
```
npm install -g github:weeargh/ares
```

The npm install automatically installs a personal Claude Code skill at:
```
~/.claude/skills/ares
```

If the skill already exists, install leaves your current local copy in place. To reinstall or refresh it explicitly:
```
ares install-skill
```

Once installed, a user can open Claude Code in any repository and run:
```
/ares
```
That triggers the bundled ARES skill from ~/.claude/skills/ares, produces a
short in-chat assessment summary, and writes a full markdown report into the
repo.
The Claude Code /ares skill is hardened to stay read-first:
- it does not run repo package scripts, task runners, builds, or tests
- it does not silently overwrite an existing installed skill on npm install
- it excludes common secret-bearing files, such as `.env*`, `.npmrc`, private keys, and credential-like files, from model-visible evidence by default
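The exclusion rule above can be pictured as a simple filename filter. The sketch below is illustrative only: the pattern list and function name are assumptions, not the skill's actual implementation.

```js
// Illustrative only: these patterns approximate the kinds of
// secret-bearing filenames described above; they are NOT the
// skill's real exclusion list.
const SECRET_PATTERNS = [
  /^\.env(\..+)?$/i, // .env, .env.local, .env.production, ...
  /^\.npmrc$/,       // npm auth tokens often live here
  /\.(pem|key)$/i,   // private key material
  /credential/i,     // credential-like filenames
];

function isSecretBearing(filename) {
  return SECRET_PATTERNS.some((re) => re.test(filename));
}

console.log(isSecretBearing(".env.local")); // true
console.log(isSecretBearing("id_rsa.pem")); // true
console.log(isSecretBearing("README.md"));  // false
```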
Open Claude Code in a repository and run:
```
/ares
/ares docs/agentic-readiness.md
```

What /ares does:
- inspects the current repository with Claude Code tools
- reads the important files and configs
- scores the repo against the ARES rubric
- explains strengths, weaknesses, and likely agent failure modes
- writes a full local markdown report
Default report path: `ares-report.md`
The in-chat summary should include the saved report as a clickable file link,
for example [ares-report.md](/absolute/path/to/repo/ares-report.md).
/ares should also identify the installed ARES version in the chat summary and
in the written report. If the installed skill is behind the latest published
release, it should prompt the user to update first.
ARES is designed around judgment, not just file detection.
The bundled /ares skill tells Claude to:
- generate a compact repository snapshot
- inspect the highest-signal files
- evaluate the repo against the ARES rubric
- produce a concise in-chat verdict
- write a full markdown report into the repo
The rubric asks questions like:
- Can an AI coding agent understand this repo quickly?
- Can it discover how to run, test, and change the code safely?
- Are the boundaries, workflows, and instructions explicit enough?
- How likely is Claude Code to succeed here without constant human rescue?
The assessment uses evidence from the actual repository and is expected to cite real files in the report.
The shell CLI remains available if you want a local structural scan alongside the AI-native review.
```
ares scan <path>                # Scan and print terminal output
ares <path> --md                # Alias: save markdown report to ares-report.md
ares <path> --json              # Save JSON report to ares-report.json
ares <path> --out report.md     # Save to a specific file
ares <path> --type service      # Override repo type detection
ares <path> --category MRC,TEST # Run selected categories only
ares <path> --quiet             # Suppress terminal output
ares <path> --llm               # Optional: author scanner markdown with your own LLM command
```

Examples:
```
ares scan .
ares . --md
ares . --json --out ares-report.json
```

ARES scores 10 categories:
- MRC: Context & Intent
- NAV: Navigability & Discoverability
- TSC: Contracts & Explicitness
- TEST: Validation Infrastructure
- ENV: Local Operability
- MOD: Change Boundaries & Modularity
- CON: Conventions & Example Density
- ERR: Diagnostics & Recoverability
- CICD: Automated Feedback Loops
- AGT: Agent Guidance & Guardrails
The Claude Code skill uses these categories as judgment prompts. The deterministic scanner uses the scoring rules in docs/rubric.md.
Inside Claude Code:
- overall score and rating
- strongest areas
- biggest risks
- first fixes to make
Written to the repo:
- executive summary
- category-by-category scorecard
- strengths and gaps
- likely agent failure modes
- prioritized fixes
- safe starting commands for future agents
The deterministic scanner:
- scans locally without network access
- walks tracked and untracked repo files
- classifies files by role
- detects common language and tooling markers
- applies heuristic analyzers per category
- emits terminal, Markdown, or JSON output
- reports package summaries for monorepos
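As a rough sketch of what "classifies files by role" can mean, here is a hypothetical heuristic. Every pattern and threshold here is an assumption for illustration; the scanner's real rules live in its own analyzers.

```js
// Hypothetical role-classification heuristic; shown only to
// illustrate the idea, not the scanner's actual logic.
function classifyRole(path) {
  // Test files: conventional test directories or .test/.spec infixes.
  if (/(^|\/)(tests?|__tests__)\//.test(path) || /\.(test|spec)\./.test(path)) {
    return "test";
  }
  // Documentation: plain-text and markup formats.
  if (/\.(md|rst|txt)$/i.test(path)) return "docs";
  // Tooling and configuration markers.
  if (/(^|\/)(package\.json|tsconfig\.json|Dockerfile)$/.test(path)) return "config";
  return "source";
}

console.log(classifyRole("src/index.ts"));      // "source"
console.log(classifyRole("tests/app.test.ts")); // "test"
console.log(classifyRole("docs/guide.md"));     // "docs"
```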
Limitations:
- It does not guarantee task success.
- It does not replace code review, security review, or runtime validation.
- The scanner does not run the application.
- The /ares skill should only claim what it can support with repo evidence.
The deterministic scanner computes its score without an LLM.
--llm only changes how the scanner markdown report is written. It expects a
command that reads a prompt from stdin and writes Markdown to stdout.
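Under that contract, any executable that reads stdin and prints Markdown will do. A minimal stand-in (the file name `stub-llm.js` is hypothetical) can be useful for trying the flag without a real model:

```js
#!/usr/bin/env node
// stub-llm.js -- hypothetical stand-in for ARES_LLM_COMMAND.
// Reads the whole prompt from stdin, writes Markdown to stdout.
function render(prompt) {
  return `# ARES Report (stub)\n\nReceived ${prompt.length} prompt characters.\n`;
}

const chunks = [];
process.stdin.on("data", (c) => chunks.push(c));
process.stdin.on("end", () => {
  process.stdout.write(render(Buffer.concat(chunks).toString("utf8")));
});
```

Then, assuming the stub is on disk, something like `ARES_LLM_COMMAND="node stub-llm.js" ares . --llm` exercises the same stdin-to-stdout path a real LLM command would use.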
Example:
```
ARES_LLM_COMMAND="your-llm-command" ares . --llm
```

The package also exposes a programmatic API:

```js
import { generateMarkdown, installClaudeSkill, scan } from "ares-scan";

installClaudeSkill();

const result = scan("/path/to/repo");
console.log(result.overallScore);
console.log(result.rating);

const markdown = generateMarkdown(result);
```

Development commands:

```
npm install
npm run lint
npm test
npm run smoke
npm run check
```

This repo is set up to publish ares-scan from GitHub Actions.
Typical release flow:
```
npm run check
npm version patch --no-git-tag-version
git push origin main
```

The publish workflow on main will:
- run lint, tests, and smoke checks
- create a git tag and GitHub release
- publish the new npm version if that version is not already on npm
License: MIT