Score any URL for AI-agent readiness — llms.txt, JSON-LD, AI-bot robots.txt, canonical, MCP, meta, sitemap. One command, one number, no telemetry.
Audits a URL for how well it talks to ChatGPT, Claude, Perplexity, and other AI agents — and gives you a single 0-100 score with a per-section breakdown.
```
$ agent-ready https://example.com

✓ llms.txt             10/15   present, 4.2 KB, 12 URLs
✓ json-ld              23/25   3 block(s), types: Article, Organization, BreadcrumbList
✗ ai-bots-robots.txt    0/20   ClaudeBot, GPTBot disallowed at root
✓ canonical+hreflang   12/15   canonical=set, hreflang langs=['en','ru']
✗ mcp-card              0/10   no /.well-known/mcp.json (optional)
✓ meta                 10/10   10/10 of common signals
✓ sitemap               5/5    valid, 1250 URLs

Score: 60 / 100
Tier:  C (middling — focus on ai-bots-robots.txt, mcp-card)

Full report:  agent-ready --full https://example.com
Remediation:  https://guardlabs.online/whiteglove/ (paid, $99-2499)
```
Every blog post about "AI SEO" tells you to "add llms.txt and JSON-LD." Nobody hands you a CLI that opens your site and tells you what's actually missing. This is that CLI.
It is intentionally:
- Single file, ~500 LoC. Read it. Audit the audit.
- No telemetry. It hits your URL only. No phone-home.
- Deterministic. Same site → same score (modulo the site changing).
- Transparent scoring. Every weight is in `agent_ready/cli.py`. Disagree? Open an issue or fork.
```
pip install agent-readiness-cli
```

Available on PyPI.
Or run from source (no install):
```
git clone https://github.com/sspoisk/agent-readiness-cli
cd agent-readiness-cli
python3 -m agent_ready.cli https://your-site.example
```

Requires Python 3.10+. Standard library only — no third-party deps.
```
agent-ready https://example.com            # human summary (default)
agent-ready --full https://example.com     # human summary + every finding
agent-ready --json https://example.com     # machine-readable JSON
agent-ready --csv https://example.com      # one CSV row (for monitoring)
agent-ready --quiet https://example.com    # just the integer score
```

Exit codes:

- `0` — audit ran (regardless of score)
- `2` — could not fetch (DNS, timeout, TLS, 4xx/5xx on the URL itself)
- with `--quiet` — exit code is the band index: A=0, B=1, C=2, D=3, F=4
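If you prefer to script against the CLI from Python instead of shell, the sketch below (not part of the package) relies only on the documented `--quiet` contract: the integer score on stdout and the band index as the exit code. Handling of the could-not-fetch case is left out.

```python
#!/usr/bin/env python3
"""Minimal wrapper sketch around `agent-ready --quiet` (illustrative, not shipped)."""
import subprocess
import sys

BANDS = "ABCDF"  # in --quiet mode the exit code is the band index: A=0 ... F=4


def audit(url: str) -> tuple[int, str]:
    """Run the CLI once and return (score, band) for one URL."""
    proc = subprocess.run(
        ["agent-ready", "--quiet", url],
        capture_output=True,
        text=True,
    )
    score = int(proc.stdout.strip())  # --quiet prints just the integer score
    band = BANDS[proc.returncode]     # and exits with the band index
    return score, band


if __name__ == "__main__":
    url = sys.argv[1]
    score, band = audit(url)
    print(f"{url}: {score}/100, tier {band}")
    sys.exit(0 if score >= 75 else 1)  # example gate: require at least a B
```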
| Section | Weight | What |
|---|---|---|
| `llms.txt` | 15 | presence, valid format (leading H1), at least 3 canonical URLs listed |
| `json-ld` | 25 | parseable, recognised `@type` from a curated list, at least two distinct types |
| `ai-bots-robots.txt` | 20 | rules for GPTBot / ClaudeBot / Claude-Web / PerplexityBot / Google-Extended / CCBot / Applebot-Extended / Bytespider |
| `canonical+hreflang` | 15 | self-canonical present, hreflang reciprocity, x-default for multi-lang |
| `mcp-card` | 10 | optional — `/.well-known/mcp.json` is valid JSON with name, description, endpoint |
| `meta` | 10 | description, og:title, og:description, twitter:card, `<html lang=>` |
| `sitemap` | 5 | `/sitemap.xml` exists, valid `<urlset>` or `<sitemapindex>`, ≥5 URLs |
| Total | 100 | A ≥ 90 · B ≥ 75 · C ≥ 55 · D ≥ 35 · F < 35 |
Full scoring math is in `agent_ready/cli.py`. One file, no ceremony.
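For orientation, the band thresholds in the table above reduce to a few comparisons. The sketch below is illustrative only; the authoritative mapping, along with the per-check weights feeding into it, is in `agent_ready/cli.py` and may be structured differently.

```python
def tier(score: int) -> str:
    """Map a 0-100 score to a letter band (thresholds from the table above)."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 55:
        return "C"
    if score >= 35:
        return "D"
    return "F"
```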
Drop it into a workflow to track your score over time:
```yaml
- name: Audit AI-agent readiness
  run: |
    pip install agent-readiness-cli
    agent-ready --csv https://your-site.example >> readiness.csv
    agent-ready --quiet https://your-site.example
```

If you want the build to fail below a threshold, gate on the score:
```sh
SCORE=$(agent-ready --quiet https://your-site.example)
[ "$SCORE" -ge 75 ] || { echo "AI-readiness below 75"; exit 1; }
```

What it does not do:

- Crawl the whole site (it audits one URL — the homepage by default)
- Fix anything for you (it tells you what to fix)
- Check vulnerabilities (use OWASP ZAP for that)
- Validate JSON-LD against full Schema.org grammar (it checks that types are recognised)
- Score Core Web Vitals or accessibility (different concerns)
If you need any of those, this isn't the right tool.
Related tools:

- firecrawl/llmstxt-generator — generates an llms.txt for you. We audit yours; we don't generate.
- langchain-ai/mcpdoc — exposes llms-txt to IDEs as MCP. Different audience (developers wanting LLM context).
- Google Rich Results Test — validates JSON-LD for Google specifically. Web UI only, no CLI.
- NSHipster/sosumi.ai — Apple-docs to AI-readable, narrow scope.
agent-readiness-cli is the gap: a single CLI that audits the agent-readiness surface and gives you a number.
If your score is low and you don't want to fix it yourself:
- DIY — read the report, follow the linked specs (we cite them in `--full` output).
- Self-service audit — GuardLabs Web-Audit Guardian (from $99) runs continuously, re-checking every 30 minutes, and watches multi-language drift, security headers, and structure.
- Hands-on white-glove audit — GuardLabs White-Glove Web Audit · $2,499 — async-only, no calls. Custom report + 30-day async support + quarterly re-audit. We are the engineers behind this CLI.
This CLI is free and MIT-licensed forever, regardless of whether you ever buy anything.
Bug reports and PRs welcome. The repo is one Python file plus tests; barriers to contribution are low. See CONTRIBUTING.md for details.
If you want to add or re-weight a check, propose the rationale in an issue first — we want every weight to be defensible.
MIT. See LICENSE.
Maintained by GuardLabs. The CLI is an open-source byproduct of running Web-Audit Guardian on real sites — multi-language e-commerce, agency client portfolios, AI-native SaaS. If your readiness matters and you want serious eyes on it, White-Glove is where we put them.