How much of the code on GitHub is actually written by AI?
GitHub AI Radar scans any GitHub repository and tells you exactly how much of its development is AI-generated. It analyzes Commits and Pull Requests through a three-layer detection pipeline — bot identity filtering, known AI bot matching, and LLM-powered text style auditing — producing an AI Involvement Index (AII) score from 0% to 100%.
English | 中文
AI coding assistants like Copilot, Cursor, and Codex are reshaping open-source development at an unprecedented pace. But how deep does the AI involvement really go?
GitHub AI Radar gives you the answer — with data, not guesswork.
- Track AI adoption across popular open-source projects
- Daily automated reports published to GitHub Pages
- Share rankings as beautiful images with QR codes
- Zero-config deployment — one GitHub Action, fully automated
```
GitHub Events ──→ L1: System Bot Filter ──→ L2: AI Bot Match ──→ L3: LLM Audit ──→ AII Score
                  (dependabot, etc.)        (copilot[bot], etc.)  (text analysis)
```
- L1 — Filters out system bots (CI/CD, dependabot) by username
- L2 — Identifies known AI coding assistants (Copilot, Codex) by username
- L3 — Explicit pattern detection (PR AI collaboration mentions, commit Git trailers like `Assisted-by`) + LLM text style audit
- AII — Dynamically weighted: only dimensions with actual data receive weight (e.g. a repo with no PRs assigns 100% weight to commits)
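The dynamic weighting described above can be sketched as follows. This is a hypothetical helper, not the project's actual implementation; the function name, signature, and 50/50 base weights are assumptions for illustration.

```python
def aii_score(commit_score, pr_score, has_commits, has_prs,
              base_weights=(0.5, 0.5)):
    """Combine per-dimension AI ratios (0.0-1.0) into one AII score.

    Hypothetical sketch: weights are re-normalized so that only
    dimensions with actual data contribute to the final score.
    """
    dims = []
    if has_commits:
        dims.append((commit_score, base_weights[0]))
    if has_prs:
        dims.append((pr_score, base_weights[1]))
    if not dims:
        return 0.0  # no data at all
    total_weight = sum(w for _, w in dims)
    return sum(s * w for s, w in dims) / total_weight
```

With this scheme, a repo that has commits but no PRs gets its commit score passed through unchanged, since the commit dimension absorbs 100% of the weight after re-normalization.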
- 🔍 Three-layer detection — Static rules + explicit AI pattern matching + LLM text audit
- 📊 Beautiful report site — Podium-style rankings, trend charts, sparklines, GitHub avatars, and shareable images
- ⚡ Batch LLM scoring — 10 events per API call, with retry and concurrency for large repos
- 🔌 Multiple LLM backends — OpenAI, GitHub Models, or any OpenAI-compatible endpoint
- 📦 Event-level cache — Skips LLM calls for unchanged events across runs
- 🛠️ Flexible CLI — Single-item analysis, batch reports, and `--force` full re-scoring
- 📱 Mobile responsive — Adaptive layout for desktop and mobile devices
- 📸 Share as image — One-click ranking snapshot with QR code, perfect for social media
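The batch-scoring behavior from the feature list (10 events per API call, with retry) can be sketched like this. Both functions are illustrative assumptions, not the project's real code; `score_batch` stands in for whatever callable actually sends a batch to the LLM.

```python
import time

def batches(events, size=10):
    """Yield successive chunks of `size` events (10 per LLM call,
    matching the batch size stated in the feature list)."""
    for i in range(0, len(events), size):
        yield events[i:i + size]

def score_with_retry(score_batch, events, retries=3, backoff=1.0):
    """Call the (hypothetical) `score_batch` on one chunk of events,
    retrying with exponential backoff on transient failures."""
    for attempt in range(retries):
        try:
            return score_batch(events)
        except Exception:
            if attempt == retries - 1:
                raise  # exhausted retries; surface the error
            time.sleep(backoff * 2 ** attempt)
```

Batching this way cuts API round-trips by an order of magnitude on large repos, at the cost of re-sending a whole chunk when one call fails.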
```bash
pip install -r requirements.txt
```

Copy and edit the environment file:

```bash
cp .env.example .env
```

| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Recommended | GitHub PAT — without it, the API is limited to 60 requests/hour |
| `LLM_PROVIDER` | Optional | LLM backend: `none`, `openai`, or `github` (default: `config.toml`) |
| `LLM_MODEL` | Optional | Model name, e.g. `gpt-4.1` (default: `config.toml`) |
| `OPENAI_API_KEY` | Optional | Required when using the OpenAI provider |
| `OPENAI_BASE_URL` | Optional | OpenAI-compatible endpoint (Azure, GitHub Models, etc.) |
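Using those variables, a minimal `.env` might look like the sketch below. All values are placeholders, not real credentials; check `.env.example` for the authoritative set of keys.

```bash
# Placeholder values — copy .env.example and fill in your own.
GITHUB_TOKEN=ghp_your_token_here
LLM_PROVIDER=openai
LLM_MODEL=gpt-4.1
OPENAI_API_KEY=sk-your-key-here
# OPENAI_BASE_URL=        # only needed for OpenAI-compatible endpoints
```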
You can also configure repos, LLM settings, and bot lists in config.toml.
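As a rough orientation, such a `config.toml` could look like the sketch below. The key names and structure here are illustrative assumptions; consult the repository's shipped `config.toml` for the actual schema.

```toml
# Illustrative sketch — real key names may differ.
repos = ["owner/repo-a", "owner/repo-b"]

[llm]
provider = "openai"     # none | openai | github
model = "gpt-4.1"

[bots]
system = ["dependabot[bot]", "github-actions[bot]"]   # L1 filter
ai = ["copilot[bot]"]                                  # L2 match
```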
Analyze a single PR / Commit:

```bash
python analyze.py https://github.com/owner/repo/pull/42
python analyze.py owner/repo#123
python analyze.py --no-llm <URL>   # skip LLM, use static rules only
```

Generate a batch report:
```bash
# Analyze repos from config.toml → JSON (with event cache)
python -m report.cli --out reports

# Force re-score all events (ignore cache)
python -m report.cli --force

# Render JSON → static HTML site
python -m report.html --input reports --out site

# Preview locally (open http://localhost:8000)
python -m http.server 8000 -d site
```

Generate mock data for local development:
```bash
python scripts/mock_reports.py        # 7 days of mock reports → reports/
python -m report.html --input reports --out site
python -m http.server 8000 -d site    # preview at http://localhost:8000
```

Automatically analyze your repos daily and publish to GitHub Pages — zero maintenance:
- Fork this repo
- Add secrets in Settings → Secrets: `GH_PAT` and optionally `OPENAI_API_KEY`
- Set Settings → Pages → Source to Deploy from a branch → branch `gh-pages`, folder `/ (root)`
- Edit `config.toml` to add the repositories you want to track
- Trigger manually or wait for the daily schedule
Reports will be available at https://<user>.github.io/<repo>/
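The fork already ships the real workflow, so nothing below needs to be written by hand; purely for orientation, a daily-schedule workflow of this shape generally looks like the following sketch (job and step names are illustrative, the commands are the ones from the sections above):

```yaml
# Illustrative only — the forked repo contains the actual workflow.
name: daily-report
on:
  schedule:
    - cron: "0 6 * * *"    # once a day
  workflow_dispatch:        # allow manual triggering
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: python -m report.cli --out reports
        env:
          GITHUB_TOKEN: ${{ secrets.GH_PAT }}
      - run: python -m report.html --input reports --out site
```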
MIT