~95% fewer tokens. Claude reads one 500-token snapshot instead of exploring thousands of files. Instant context, every session. Secret detection, circuit breakers, and token-awareness built in.
Requires Node.js 20+. For the autonomous loop, also install Claude Code and run `gh auth login`.
```shell
npm install -g codebase-ai
```

Then in your project:

```shell
codebase
```

That's it. Scans your project, picks a provider/model, and starts Claude Code with full context.
`codebase` is a vibecoding loop built around three ideas:

- Codebase = brain. One scan writes a compact snapshot (`.codebase.json`): your stack, commands, open issues, recent decisions. AI reads this instead of exploring files. ~95% fewer tokens, instant context.
- GitHub = memory. Issues, PRs, and labels are the persistent state. The loop can restart anytime and pick up where it left off.
- Claude = execution. Slash commands give AI a complete workflow: simulate real users, fix bugs, run tests, commit, ship.
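The manifest's exact schema isn't documented here, so the following is purely illustrative (field names are hypothetical, not the real format), but a snapshot along these lines carries the context Claude needs:

```json
{
  "stack": { "languages": ["typescript"], "frameworks": ["react"] },
  "commands": { "test": "npm test", "build": "npm run build" },
  "issues": [{ "number": 12, "title": "Login redirect loops", "priority": "high" }],
  "decisions": ["Moved session storage to Redis"]
}
```

A few hundred tokens of structured facts like these stand in for thousands of tokens of file exploration.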
Multiple developers can jump into the same loop. Commit `.codebase.json` and `.claude/commands/`, and everyone gets the same context and commands.
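Sharing that context is an ordinary commit. A throwaway-repo sketch (the file contents and commit message here are placeholders, not what the tool generates):

```shell
set -e
demo=$(mktemp -d)                 # throwaway repo just for this sketch
cd "$demo"
git init -q
git config user.email you@example.com
git config user.name "You"

mkdir -p .claude/commands
echo '{}' > .codebase.json        # placeholder manifest
echo '# build' > .claude/commands/build.md

# Commit both so every teammate gets the same context and commands
git add .codebase.json .claude/commands/
git commit -q -m "chore: share codebase context and slash commands"
git show --stat --oneline HEAD
```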
- Secret detection: scans `.env` and config files for leaked AWS keys, GitHub tokens, private keys, and 20+ other credential patterns. Warns without exposing values.
- Circuit breaker: stops hammering the GitHub API when it's down. Auto-recovers after a 60s cooldown. Falls back to cached data.
- Exponential backoff: transient network errors retry automatically, with jitter to avoid a thundering herd.
- Token budget awareness: auto-slims responses when the manifest is too large for context. Grade your context health with `codebase tokens`.
- License detection: flags copyleft dependencies that may require source disclosure.
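The retry behavior can be sketched as a toy loop. This is illustrative only, not the tool's implementation: `flaky` is a stand-in for a transient network failure, and the `sleep` is commented out so the sketch runs instantly.

```shell
attempts_file=$(mktemp)
echo 0 > "$attempts_file"

# Stand-in for a flaky network call: fails twice, then succeeds.
flaky() {
  n=$(cat "$attempts_file")
  echo $((n + 1)) > "$attempts_file"
  [ "$n" -ge 2 ]
}

attempt=0
until flaky; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 5 ]; then echo "giving up"; break; fi
  # Exponential backoff: 2^attempt seconds, plus 0-2s of random jitter
  delay=$(( (1 << attempt) + RANDOM % 3 ))
  echo "retry $attempt in ${delay}s"
  # sleep "$delay"    # skipped here so the sketch runs instantly
done
echo "recovered after $attempt retries"
```

The jitter term is what prevents many clients from retrying in lockstep after a shared outage.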
Or run the entire loop hands-free with one command:

```shell
/vibeloop
```
| Command | What it does |
|---|---|
| `/simulate` | Opens your app in a real browser. Acts like real users. Fixes bugs inline, tracks complex ones as GitHub Issues. |
| `/build` | Reads open issues, picks the highest priority, implements the fix, tests it, commits, closes the issue. Repeats. |
| `/launch` | Checks quality gates (open bugs, test suite, UX score). If all pass: bumps version, tags release, merges to main, publishes a GitHub Release. |
| `/vibeloop` | Runs everything. Continuous `/simulate` → `/build` → `/launch` loop. Zero intervention. |
First time? Run `/setup` in Claude Code to create `docs/PRODUCT.md` and your first milestone.
Level 1 — Give Claude memory of your project (Node.js only)
```shell
cd your-project
codebase
```

Scans your project and wires everything: `.codebase.json`, `CLAUDE.md`, MCP server, git hooks, `.gitignore`.
Level 2 — Autonomous dev loop
```shell
npm install -g @anthropic-ai/claude-code
gh auth login
```

Open Claude Code in your project, then:
```shell
/setup      ← run once
/simulate   ← find & fix bugs
/build      ← clear the backlog
/launch     ← ship
```
Or just:
```shell
/vibeloop   ← does all of the above, continuously
```
```shell
/vibeloop                  # full autonomous run: simulate → build → launch
/vibeloop --skip-launch    # simulate → build only, stop before release
/vibeloop --dry-run        # full run without committing to main or publishing
/vibeloop --max-rounds 5   # cap the build loop at 5 rounds (default: 20)
/vibeloop --sim-count 5    # number of simulated users per cycle (default: 3)
/vibeloop --version 1.2.0  # pin the release version tag
```
Invoke once. Come back to a shipped, tested, tagged release.
```shell
# Launcher (default command)
codebase                 # detect providers, pick model, start Claude Code
codebase start --provider openrouter --model anthropic/claude-haiku-4-5

# Provider setup
codebase config                               # show keys + effective env vars
codebase config set openrouter-key sk-or-...  # store OpenRouter key
codebase config set zai-key <key>             # store z.ai key (GLM models)
codebase config set custom-url https://...    # custom OpenAI-compatible endpoint

# Session history
codebase sessions        # last 7 days: provider, model, project, duration

# AI interface
codebase brief           # full project briefing
codebase brief --slim    # lightweight ~20-line brief
codebase next            # highest-priority open issue
codebase status          # kanban board + milestones
codebase query <path>    # e.g. stack.languages or commands.test

# Issues
codebase issue create "title"
codebase issue close <n> --reason "why"
codebase issue comment <n> --message "text"

# Session management
codebase handoff         # generate HANDOFF.md for session transfer
codebase tokens          # token budget report (A/B/C/D grades)

# Maintenance
codebase scan            # refresh .codebase.json
codebase doctor          # health check (includes TOKEN HEALTH section)
codebase fix             # auto-repair issues found by doctor
codebase setup           # re-wire AI tools + install slash commands
```
```shell
codebase mcp             # start MCP server (stdio)
```

```json
{
  "mcpServers": {
    "codebase": {
      "command": "npx",
      "args": ["codebase", "mcp"]
    }
  }
}
```

Add to `.mcp.json` in your project root. 18 tools: `project_brief` (supports `slim: true`; auto-slims when context is large), `get_next_task`, `get_blockers`, `create_issue`, `close_issue`, `update_issue`, `get_issue`, `get_pr`, `get_plan`, `update_plan`, `token_budget`, `rescan_project`, `refresh_status`, `list_commands`, `list_skills`, `generate_handoff`, `get_codebase`, `query_codebase`.
Commit `.codebase.json` and `.claude/commands/`. Every teammate with Claude Code gets the same context and slash commands. The loop is resumable: restart anytime; GitHub tracks state.
→ Full feature reference with commands → tools → implementation mapping
**Does it send my code to anyone?** Scanning and manifest generation run entirely locally. When you start a session, prompts go to whichever provider you pick: Anthropic directly, OpenRouter, z.ai, or your own custom endpoint. No data leaves your machine until you run Claude commands.

**What if I don't use GitHub?** The manifest and AI tool wiring work without GitHub. You lose issues, PRs, releases, and labels, but core context injection still works.

**My project isn't JavaScript. Does it work?** Yes. 30+ languages and 100+ frameworks are detected automatically.

**Will the git hooks slow down my commits?** No. The scan runs in ~200ms.

**What does "autonomous" mean? Will it break my code?** All AI commits go to `develop`. Nothing reaches `main` until `/launch` passes quality gates.

**What happens when the GitHub API goes down?** The circuit breaker kicks in after 5 failures, falls back to cached manifest data, and auto-recovers after 60 seconds. You'll see a warning, but the loop keeps running.
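The failure handling just described can be sketched as a toy state machine. This is illustrative only, not the tool's code: `call_github` is a stand-in that always fails, and the thresholds simply mirror the numbers above.

```shell
failures=0
state=closed
opened_at=0

call_github() { false; }   # stand-in for a GitHub API call during an outage
now() { date +%s; }

request() {
  if [ "$state" = open ]; then
    if [ $(( $(now) - opened_at )) -ge 60 ]; then
      state=closed; failures=0            # cooldown elapsed: try the API again
    else
      echo "circuit open: serving cached data"
      return 0
    fi
  fi
  if call_github; then
    failures=0                            # success resets the failure count
  else
    failures=$((failures + 1))
    if [ "$failures" -ge 5 ]; then
      state=open
      opened_at=$(now)
      echo "circuit opened after $failures failures"
    fi
  fi
}

for i in 1 2 3 4 5 6; do request; done
```

After the fifth consecutive failure the breaker opens, and further requests are answered from cache until the cooldown passes.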
**Does it scan for leaked secrets?** Yes. `codebase scan` checks `.env` and config files for 20+ credential patterns (AWS keys, GitHub tokens, Stripe keys, private keys, etc.). Findings appear as warnings; values are never written to the manifest.
We welcome contributions! Please read CONTRIBUTING.md for guidelines on how to get started, our commit conventions, and the PR process.
Found a security issue? See SECURITY.md — do not open a public issue.
See CHANGELOG.md for a full version history.
This project follows a Code of Conduct. By participating, you agree to uphold it.
MIT — see LICENSE for details.