An AI scanner that reviews code through the eyes of a PM and a System Architect:
- It finds places where the code deviates from the system design, tasks, and docs.
- It offers to generate code or text that resolves the detected discrepancies.
The goal of 1Spur is to prevent tech debt from accumulating, reduce the number of code rewrites, and hit business expectations more accurately:
- finds bugs, based on context from Jira/Linear/Slack, that other scanners and code-review helpers miss
- looks at code through the eyes of the business, reporting errors that are usually only noticed at a demo or in production
1Spur Scanner is a proof-of-concept of a Sync Engine for everything your team ships.
uv handles Python, dependencies, and the CLI tool — nothing else to install.
# 1. Install uv (if you don't have it)
curl -LsSf https://astral.sh/uv/install.sh | sh
# 2. Install 1Spur Scanner
uv tool install git+https://github.com/OneSpur/scanner
# 3. Run
1spur-scanner

To update to the latest version:

uv tool install --reinstall git+https://github.com/OneSpur/scanner

Or install with pip:

pip install git+https://github.com/OneSpur/scanner
1spur-scanner

drift-check/
  scanner/
    main.py      # Web server + job orchestration
    github.py    # GitHub data fetcher
    analyzer.py  # Drift scoring + AI analysis
    report.py    # HTML report generator
  templates/     # HTML templates
  static/        # Static assets
Uses your existing Claude Pro or Max subscription. No API key needed.
How it works: 1Spur Scanner shells out to claude -p (Claude Code's headless print mode), which reads the OAuth credentials stored by Claude Code on your machine. The subprocess inherits the login automatically.
Requirements:
- Install Claude Code: npm install -g @anthropic-ai/claude-code (or via the Claude desktop app)
- Run claude and log in at least once (authenticates via browser, stores credentials)
- Have an active Claude Pro or Max subscription
That is all. No configuration needed. Select Claude Code in the model selector and submit.
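For illustration, a minimal Python sketch of the shelling-out described above. The function name and the dry_run flag are hypothetical (not from the scanner's source); only the claude -p invocation itself comes from this section:

```python
import subprocess

def run_claude_headless(prompt: str, *, dry_run: bool = False):
    # Sketch of a 1Spur-style call into Claude Code's headless print mode.
    # `claude -p` reuses the OAuth credentials stored by `claude` on first
    # login, so no API key is passed here.
    cmd = ["claude", "-p", prompt]
    if dry_run:
        return cmd  # let callers inspect the command without executing it
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```

The dry_run path exists only so the command construction can be inspected without Claude Code installed.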
No data leaves your machine.
# 1. Install Ollama: https://ollama.com/download
# 2. Pull a model
ollama pull qwen3.5:9b
# 3. Start Ollama
ollama serve

The local model will appear automatically in the model selector.
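As a sketch of how a model selector can auto-discover locally pulled models: Ollama's standard listing endpoint is GET /api/tags on its default port 11434. The function names here are illustrative, not the scanner's actual API:

```python
import json
import urllib.request

def parse_models(payload: str) -> list[str]:
    # /api/tags responds with {"models": [{"name": "...", ...}, ...]}
    return [m["name"] for m in json.loads(payload).get("models", [])]

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    # Query a running Ollama server for locally pulled models.
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
        return parse_models(resp.read().decode())
```

Splitting the parsing out of the network call keeps the JSON handling testable without a running server.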
Get your key at console.anthropic.com, select Claude API in the model selector and paste it in.
Get your key at platform.openai.com/api-keys, select OpenAI API in the model selector and paste it in.
Without a token, GitHub allows 60 API requests/hour: enough for small repos, but large ones will hit the limit.
Create one at github.com/settings/tokens:
| Repo type | Required scope |
|---|---|
| Public | public_repo |
| Private | repo |
| GitHub Projects | add read:project |
1Spur Scanner batches 2 issue-PR pairs per LLM call. Typical token consumption per scan:
| Pairs analysed | LLM calls | Tokens (approx) |
|---|---|---|
| 5 pairs | 4 | ~10,000 |
| 10 pairs | 6 | ~17,000 |
| 20 pairs | 11 | ~32,000 |
| 30 pairs | 16 | ~48,000 |
Each batch call uses ~3,200 tokens (system + 2 issue/diff pairs + output). One summary call adds ~700 tokens.
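The arithmetic above can be sketched as follows. The constants come from this section; estimate_scan is an illustrative name, not the scanner's API. It reproduces the call counts in the table exactly and the token figures to within rounding:

```python
from math import ceil

BATCH_SIZE = 2          # issue-PR pairs per LLM call
TOKENS_PER_BATCH = 3200  # system prompt + 2 issue/diff pairs + output
SUMMARY_TOKENS = 700     # one final summary call

def estimate_scan(pairs: int) -> tuple[int, int]:
    """Return (llm_calls, approx_tokens) for a scan of `pairs` pairs."""
    batches = ceil(pairs / BATCH_SIZE)
    calls = batches + 1  # batch calls plus the summary call
    tokens = batches * TOKENS_PER_BATCH + SUMMARY_TOKENS
    return calls, tokens
```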
For Claude Code (Pro/Max), a typical 10-pair scan costs well within the daily usage allowance.
If you'd like to contribute your own example or fix a bug, please take a look at CONTRIBUTING.md first.

