A tiny Jira clone for your repo.
Kanbus is a spiritual successor to Beads, inspired by its elegant, domain-specific approach to project management. We are deeply grateful to the Beads author and community for proving that a dedicated cognitive framework for tasks is game-changing.
Kanbus builds on this foundation by adapting the model to be a thinner, more native layer over Git—optimizing for AI agents and distributed teams:
- A Thinner Layer over Git: We removed the secondary SQLite index. Maintaining and synchronizing a shadow database adds operational cost that isn't worth it; Kanbus reads files directly.
- Better Storage Alignment: Things like "exclusively claiming" a task don't align well with the distributed Git model. We removed them to ensure the tool behaves exactly like the version control system underneath it.
- Conflict-Free Storage: Instead of a single JSON-L file (which guarantees merge conflicts when agents work in parallel), Kanbus stores separate tasks in separate files. This eliminates conflicts and allows deep linking to specific issues from GitHub.
- Streamlined Cognitive Model: Beads is powerful but complex, with 130+ attributes per issue. We streamlined this to a focused core (Status, Priority, Dependencies) to reduce the "context pollution" for AI agents. We want the model to think about the work, not how to use the tracker. The goal is a helpful cognitive model that unburdens your mental state rather than adding to it.
- AI-Native Nomenclature: Instead of teaching models new terms like "beads", we use the standard Jira vocabulary (Epics, Tasks, Sub-tasks) that AI models are already extensively pre-trained on. This leverages their existing knowledge graph for better reasoning.
- Git-Native Scoping: We replaced complex "contributor roles" with standard Git patterns. Want local tasks? Just .gitignore a folder. Working in a monorepo? Kanbus respects your current directory scope automatically.
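To make the conflict-free storage bullet above concrete, here is a minimal sketch of two parallel agents each recording a task. The project/issues/ path matches the storage layout described later; the field names and ID scheme are illustrative only, not the canonical schema.

```python
# Illustrative only: two agents create tasks in parallel, each in its own file,
# so a later `git merge` never sees competing edits to a shared file.
# Field names and the ID scheme are assumptions, not the canonical schema.
import json
import uuid
from pathlib import Path

def create_issue(title: str, root: str = "project/issues") -> Path:
    issue_id = f"kanbus-{uuid.uuid4().hex[:6]}"  # hypothetical ID scheme
    path = Path(root) / f"{issue_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"id": issue_id, "title": title, "status": "todo"}, indent=2))
    return path

create_issue("Implement the login flow")  # agent A, on branch feature/login
create_issue("Write API docs")            # agent B, on branch feature/docs
```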
Kanbus is designed to remove friction, not add it.
- No Syncing: There is no secondary database to synchronize. The files on disk are the source of truth. You will never be blocked from pushing code because a background daemon is out of sync.
- Git Hooks Help You: Git hooks should assist your workflow, not interrupt it. Kanbus hooks are designed to be invisible helpers, ensuring data integrity without stopping you from getting work done.
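As an illustration of that philosophy (this is not the hook Kanbus actually ships), a pre-commit helper can report malformed issue files without ever blocking the commit:

```python
#!/usr/bin/env python3
# Hypothetical pre-commit helper in the spirit described above: it reports
# issue files that no longer parse as JSON but always exits 0, so the commit
# is never blocked. This is not the hook Kanbus actually installs.
import json
import sys
from pathlib import Path

problems = []
for path in Path("project/issues").glob("*.json"):
    try:
        json.loads(path.read_text())
    except json.JSONDecodeError as exc:
        problems.append(f"{path}: {exc}")

if problems:
    print("kanbus: some issue files look malformed (commit not blocked):", file=sys.stderr)
    for line in problems:
        print(f"  {line}", file=sys.stderr)

sys.exit(0)  # invisible helper: warn, never interrupt
```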
For a detailed comparison, see Kanbus vs. Beads.
Offload your mental context. Instead of keeping 15 different chat sessions and open loops in your head, tell your agent to "record the current state" into Kanbus. It's a permanent, searchable memory bank for your AI workforce.
- No SQL Server: We removed the SQLite daemon entirely. Each command reads the JSON files directly, so there is nothing to synchronize or keep running.
- No JSONL Merge Conflicts: There is no monolithic JSONL file. Every issue has its own JSON document, which eliminates merge conflicts when teams (or agents) edit work in parallel.
- No Daemon: There is no background process to crash or manage.
- No API: Your agents read and write files directly (or use the simple CLI).
Unlike other file-based systems that use a single JSONL file (guaranteeing merge conflicts), Kanbus stores one issue per file. This allows multiple agents and developers to work in parallel without blocking each other.
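Because the source of truth is plain files, an agent can query the backlog with nothing but the standard library. The sketch below assumes field names such as status and title; inspect the files kanbus init and kanbus create produce for the real schema.

```python
# "No API" in practice: list open work by reading the per-issue JSON files.
# The field names used here are assumptions; check your own issue files.
import json
from pathlib import Path

def load_issues(root: str = "project/issues"):
    for path in sorted(Path(root).glob("*.json")):
        issue = json.loads(path.read_text())
        issue["_path"] = str(path)  # handy for deep links back to the file
        yield issue

for issue in load_issues():
    if issue.get("status") == "todo":
        print(issue.get("id", issue["_path"]), "-", issue.get("title", ""))
```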
Kanbus includes a Wiki Engine that renders Markdown templates with live issue data. Your planning documents always reflect the real-time state of the project, giving agents the "forest view" they often lack.
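To illustrate the idea (the placeholder syntax below is invented for this example and is not Kanbus's actual template language), a planning page can embed counts that are recomputed from the issue files every time it is rendered:

```python
# Conceptual sketch of rendering a Markdown template against live issue data.
# The {{count:<status>}} placeholder is hypothetical, purely for illustration.
import json
import re
from pathlib import Path

def count_by_status(status: str, root: str = "project/issues") -> int:
    return sum(
        1
        for p in Path(root).glob("*.json")
        if json.loads(p.read_text()).get("status") == status
    )

def render(template: str) -> str:
    return re.sub(r"\{\{count:(\w+)\}\}",
                  lambda m: str(count_by_status(m.group(1))),
                  template)

print(render("## Sprint health\n\nOpen: {{count:todo}}  Done: {{count:done}}"))
```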
There are no per-seat licenses or hosted fees. If you have a git repository, you already have the database—and that keeps Kanbus affordable for very large teams (or fleets of agents).
This repository contains the complete vision, implementation plan, and task breakdown for building Kanbus. We are building it in public, using Kanbus to track itself.
# Initialize a new project
kanbus init
# Create an issue
kanbus create "Implement the login flow"
# List open tasks
kanbus list --status todo
# Show details
kanbus show kanbus-a1b

The console UI is served by the Rust local server and expects tenant-aware URLs.
Build the console assets once:
cd console
npm install
npm run build

Run the local backend:
cargo run --bin console_local --manifest-path rust/Cargo.toml

Open:
http://127.0.0.1:5174/<account>/<project>/
By default, the server treats the repo root as the data root and serves assets from console/dist. Optional env vars:
- CONSOLE_PORT (default 5174)
- CONSOLE_ROOT (sets both data root and assets root)
- CONSOLE_DATA_ROOT (data root override)
- CONSOLE_ASSETS_ROOT (assets root override)
- CONSOLE_TENANT_MODE=multi (enables /<account>/<project> mapping under the data root)
Kanbus uses a just-in-time index daemon for read-heavy commands such as kanbus list. The CLI auto-starts the daemon when needed, reuses a healthy socket, and removes stale sockets before restarting.
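That lifecycle boils down to a familiar Unix-socket pattern. The sketch below is illustrative only: the socket path and error handling are assumptions, not Kanbus's actual protocol.

```python
# Illustrative only: the socket path is an assumption, not Kanbus's actual
# location or protocol. Shows the reuse-or-clean-up decision described above.
import os
import socket

SOCKET_PATH = "/tmp/kanbus.sock"  # hypothetical

def healthy_socket():
    """Return a connected socket if a live daemon answers, else None."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(SOCKET_PATH)
        return sock                   # daemon is alive: reuse it
    except FileNotFoundError:
        sock.close()
        return None                   # never started: caller auto-starts it
    except ConnectionRefusedError:
        sock.close()
        os.unlink(SOCKET_PATH)        # stale socket left by a dead daemon
        return None                   # caller starts a fresh daemon
```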
To disable daemon mode for a command:
KANBUS_NO_DAEMON=1 kanbus list

Operational commands:
kanbus daemon-status
kanbus daemon-stop

We provide two implementations driven by the same behavior specification:
Choose Python if:
- You want easy pip install with no compilation
- You are scripting custom agent workflows
Choose Rust if:
- You need maximum performance (sub-millisecond queries)
- You have a massive repository (> 2000 issues)
Kanbus keeps Python and Rust in lockstep: both CLIs run the same Gherkin specs, share identical JSON serialization, and target the same operational model. The duality is intentional—pick the runtime that fits your packaging or performance needs without changing workflows.
Storage is single-path and conflict-resistant: every issue lives in its own JSON file under project/issues/, with hierarchy and workflow rules in kanbus.yml. There is no secondary SQLite cache or fallback location to reconcile, which removes whole classes of sync defects and keeps the mental model aligned with Git.
We benchmarked real data from the Beads project (836 issues) to measure end-to-end “list all beads” latency, including process startup. Scenarios: Beads (Go, SQLite + JSONL), Kanbus Python/Rust reading the Beads JSONL (--beads), and Kanbus Python/Rust reading project JSON files. Five runs each with caches cleared between runs.
Key takeaway: direct JSON reads are fast enough that a SQLite sidecar solves a problem we do not have. Removing it simplifies operations, eliminates sync fragility, and keeps deployments portable.
Median end-to-end latency for listing all issues (ms):

| Scenario | Cold start | Warm start |
| --- | --- | --- |
| Go (Beads, SQLite + JSONL) | 197.6 | 5277.6 |
| Python — Beads JSONL | 566.1 | 538.7 |
| Rust — Beads JSONL | 11.9 | 9.9 |
| Python — Project JSON | 648.3 | 653.5 |
| Rust — Project JSON | 92.4 | 54.6 |

Warm runs reuse the resident daemon for Kanbus; cold runs force KANBUS_NO_DAEMON=1 and clear caches each iteration. The Go/Beads warm path spikes because its SQLite daemon import dominates the second run.
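To reproduce a comparable end-to-end measurement on your own data, a small harness along these lines works (this is not the project's benchmark script, and it omits the cache clearing used for the published numbers):

```python
# Rough reproduction sketch: median wall-clock latency of `kanbus list`,
# including process startup. Not tools/benchmark_index.py, and it does not
# clear OS caches between runs as the published numbers do.
import statistics
import subprocess
import time

def median_latency_ms(cmd, runs: int = 5) -> float:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

print("warm (daemon):", median_latency_ms(["kanbus", "list"]))
print("cold (no daemon):", median_latency_ms(["env", "KANBUS_NO_DAEMON=1", "kanbus", "list"]))
```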
Kanbus/
|-- planning/
| |-- VISION.md # Complete specification
| `-- IMPLEMENTATION_PLAN.md # Detailed technical plan
|-- specs/ # Shared Gherkin feature files
|-- python/ # Python implementation
|-- rust/ # Rust implementation
|-- apps/ # Public website (Gatsby)
`-- .beads/ # Project task database
We welcome contributions! Please:
- Pick a task from kanbus ready.
- Follow the BDD workflow in AGENTS.md.
- Ensure all quality gates pass.
Run the full quality gates:
make check-all

Run only Python checks:
make check-python

Run only Rust checks:
make check-rust

Run index build and cache load benchmarks:
python tools/benchmark_index.py
cd rust && cargo run --release --bin index_benchmark

MIT

