Team-level AI orchestrator for engineering teams, powered by Copilot SDK — a daemon you run on your machine, controlled entirely from a web UI in your browser.
Chapterhouse is built on Max by Burke Holland. The original Max codebase provided the foundation for this project.
- **Always running** — persistent local daemon with a browser frontend at `http://localhost:7788`.
- **Builds team memory without leaking personal context** — every Chapterhouse instance keeps a private local wiki at `~/.chapterhouse/wiki/`, while shared team pages live on the authoritative Team Chapterhouse deployment and sync down read-only to personal instances. That gives each engineer a place for personal notes and conversation history without turning OKRs, KPIs, and team docs into conflicting local forks.
- **Codes while you're away** — spins up real Copilot CLI worker sessions in any directory and reports back when they're done. The Workers tab in the web UI shows what's running.
- **Learns any skill** — pulls from skills.sh or builds new skills on demand.
- **Your Copilot subscription** — works with any model your subscription includes (Claude, GPT, Gemini, …). Auto-routing picks a tier per message.
See CHANGELOG.md for recent changes and feature history.
Requires Node.js 24 or later (npm ≥ 11.5.1 for Trusted Publishing support).
Install globally via npm:
```
npm install -g chapterhouse@latest
```

After installing, run first-time setup:

```
chapterhouse setup
```

Setup walks you through:

- **GitHub token** — Chapterhouse uses the Copilot CLI under the hood. Run `copilot login` (or `gh auth login --scopes copilot`) to authenticate before starting.
- **Config file** — Setup writes `~/.chapterhouse/.env`. You can edit it at any time. Key variables:
```
# Required
COPILOT_MODEL=claude-sonnet-4-5   # default model; overridable per session

# Optional — API token auth (simpler, single-user)
API_TOKEN=your-secret-token

# Optional — Microsoft Entra ID auth (team deployments)
ENTRA_AUTH_ENABLED=true
ENTRA_TENANT_ID=your-tenant-id
ENTRA_CLIENT_ID=your-client-id

# Optional — Azure DevOps OKR integration
ADO_ORG=https://dev.azure.com/your-org
ADO_PROJECT=your-project
ADO_PAT=your-ado-pat-here

# Optional — logging
LOG_LEVEL=info   # trace | debug | info | warn | error | fatal | silent (default: info)
# Set to "debug" to see chat message content and routing decisions.
# Logs are structured JSON (Pino). For human-readable output:
#   chapterhouse start 2>&1 | npx pino-pretty
```

Then start the daemon:

```
chapterhouse start
```

Open http://localhost:7788 in your browser.
Windows (PowerShell):

```
irm https://raw.githubusercontent.com/bketelsen/chapterhouse/main/install.ps1 | iex
```

macOS/Linux:

```
curl -fsSL https://raw.githubusercontent.com/bketelsen/chapterhouse/main/install.sh | bash
```

Or clone and build manually:
```
git clone https://github.com/bketelsen/chapterhouse.git ~/.chapterhouse/src
cd ~/.chapterhouse/src
npm install && npm run build && npm link
```

To update later:

```
chapterhouse update
```

`chapterhouse update` is npm-registry-aware. It detects how Chapterhouse was installed and routes accordingly:
| Install source | Update action |
|---|---|
| npm global (`npm install -g chapterhouse`) | Runs `npm install -g chapterhouse@latest` |
| git/legacy (`~/.chapterhouse/src`) | Git pull + rebuild (with deprecation notice) |
| dev (source working tree) | No-op — use `git pull` manually |
| Flag | Description |
|---|---|
| `--check-only` | Print current/latest version and exit without updating |
| `--ref <version>` | Install a specific version, e.g. `--ref 0.1.5` |
| `--force` | Bypass the Node 24 / npm 11.5.1 precondition check |
Legacy users: if you previously installed via git clone, switch to the registry path once:
```
npm install -g chapterhouse@latest
```

Your `~/.chapterhouse/` config carries forward automatically.
```
chapterhouse setup
```

Picks a default model and writes `~/.chapterhouse/.env`. Past versions also configured Telegram — that's gone in v2; the web UI is the only client.
If you plan to use the Azure DevOps OKR integration, also set these in ~/.chapterhouse/.env:
```
ADO_ORG=https://dev.azure.com/your-org
ADO_PROJECT=your-project
ADO_PAT=your-ado-pat-here
```

`ADO_ORG` and `ADO_PROJECT` are no longer baked into the app and must match your Azure DevOps tenant.
```
copilot login
chapterhouse start
```

Or `chapterhouse start --open` to also pop a browser tab.
Open http://localhost:7788 in your browser and start typing. Examples:
- "Start working on the auth bug in /full/path/to/myapp"
- "What sessions are running?"
- "Check on the api-tests session"
- "What's the capital of France?"
Chapterhouse currently supports two server auth modes:
- **Legacy API token** — set `API_TOKEN` (or persist a token at `~/.chapterhouse/api-token`). The loopback-only `/api/bootstrap` endpoint hands that token to the local SPA, which then sends it as `Authorization: Bearer ...` on API and SSE requests.
- **Microsoft Entra ID** — set `ENTRA_AUTH_ENABLED=true`, `ENTRA_TENANT_ID`, and `ENTRA_CLIENT_ID`. The web UI uses MSAL in the browser, signs the user in with Microsoft, and sends the ID token to the backend. The backend verifies that token against the tenant JWKS endpoint and treats the token's `roles` claim as the source of truth for app-role authorization.
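For the legacy token mode, the flow the SPA performs can be sketched from the shell. This is a hedged sketch: the `/api/bootstrap` response shape is not documented here, so the token is hard-coded as a placeholder and the real calls are shown as comments.

```shell
# Hypothetical sketch of the legacy API-token flow. In the real flow the SPA
# fetches the token from the loopback-only /api/bootstrap endpoint, e.g.:
#   curl -s "$BASE_URL/api/bootstrap"
BASE_URL="http://localhost:7788"
TOKEN="your-secret-token"

# Subsequent API and SSE requests carry the token as a Bearer header:
AUTH_HEADER="Authorization: Bearer $TOKEN"
echo "$AUTH_HEADER"
# A real request would then look like:
#   curl -s -H "$AUTH_HEADER" "$BASE_URL/api/..."
```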
Optional Entra settings:
- `ENTRA_REQUIRED_ROLE` — if set, the signed-in user must have this app role in the token's `roles` claim. This replaced the older group-based check.
- `ENTRA_TEAM_LEAD_ID` — optional for regular engineers, who can omit it entirely. Set it only for the one person who should be treated as `team-lead` for managerial functions such as `/api/team/report` and protected OKR/KPI/team wiki writes. Without it, the signed-in user is treated as `engineer`, which is the correct role for normal team members.
When ENTRA_AUTH_ENABLED=true, Chapterhouse automatically adds a workiq entry to ~/.copilot/mcp-config.json at daemon startup. This gives the orchestrator access to Microsoft 365 tools (Teams, Outlook, Calendar, etc.) via the @microsoft/workiq MCP server without any manual configuration.
The entry uses npx -y @microsoft/workiq so no global npm install is required — npx fetches the server on first use.
| Behaviour | Detail |
|---|---|
| Trigger | `ENTRA_AUTH_ENABLED=true` + `ENTRA_TENANT_ID` set |
| Idempotent | Safe to restart; entry is only written if `workiq` key is absent |
| Opt-out | Set `CHAPTERHOUSE_WORKIQ_AUTO_INSTALL=false` to disable |
| Failure-safe | If the write fails (permissions, read-only FS), a structured warning is logged and the daemon continues |
`CHAPTERHOUSE_WORKIQ_AUTO_INSTALL` — `true` (default) or `false`. Set to `false` to manage the `workiq` MCP entry manually.
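For reference, the auto-installed entry looks roughly like this. Only the `npx -y @microsoft/workiq` command comes from this README; the surrounding key names in `~/.copilot/mcp-config.json` are an assumption about the file's schema.

```json
{
  "mcpServers": {
    "workiq": {
      "command": "npx",
      "args": ["-y", "@microsoft/workiq"]
    }
  }
}
```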
For a single-user local deployment, use the personal compose file. It binds port 7788, runs the daemon as the non-root node user, and persists state in CHAPTERHOUSE_HOME (default: $HOME/.chapterhouse on macOS/Linux).
Authentication is required for Docker deployments. Set either API_TOKEN or a working Entra config in .env before starting the container. For Entra, that means ENTRA_AUTH_ENABLED=true, ENTRA_TENANT_ID, and ENTRA_CLIENT_ID; add ENTRA_REQUIRED_ROLE if you want app-role gating. ENTRA_TEAM_LEAD_ID is optional for regular engineers and should only be set for the one person who needs team-lead/managerial permissions. If it is omitted, the user is treated as engineer. The process binds to 0.0.0.0 inside the container so Docker can publish port 7788, and Chapterhouse refuses to start if neither auth mode is configured.
```
cp .env.example .env   # or create .env with your settings
docker compose -f docker-compose.personal.yml up --build -d
```

`docker-compose.personal.yml` mounts `CHAPTERHOUSE_HOME` to `/home/node/.chapterhouse`, `COPILOT_HOME` to `/home/node/.copilot`, `GH_CONFIG_HOME` to `/home/node/.config/gh` (read-only), and loads `.env`, so you can provide `API_HOST`, `API_TOKEN`, any `ENTRA_*` settings, `TEAM_CHAPTERHOUSE_*` sync settings, and optional Copilot token overrides there.
By default, the compose file falls back to $HOME/.chapterhouse, $HOME/.copilot, and $HOME/.config/gh. On Windows, set the host paths explicitly before starting the stack:
```
$env:CHAPTERHOUSE_HOME = "$env:USERPROFILE\.chapterhouse"
$env:COPILOT_HOME = "$env:USERPROFILE\.copilot"
$env:GH_CONFIG_HOME = "$env:USERPROFILE\.config\gh"
docker compose -f docker-compose.personal.yml up --build -d
```

You can also put those variables in a local `.env` file next to `docker-compose.personal.yml` if you prefer not to export them in your shell.
Chapterhouse uses the Copilot SDK's default logged-in-user auth flow unless you set an explicit token. In Docker, the recommended setup is:
- Authenticate on the host with `gh auth login`.
- Start the personal compose stack. The container reuses host GitHub CLI auth through the read-only `GH_CONFIG_HOME` mount.
- If you prefer an explicit token, set `COPILOT_TOKEN` in `.env` (or `GITHUB_TOKEN` as a fallback). Chapterhouse passes that token to the Copilot SDK directly. Classic `ghp_` personal access tokens are not supported by Copilot.
If auth expires or you want to authenticate from inside the container, run:
```
docker compose -f docker-compose.personal.yml run --rm -it chapterhouse copilot login
```

That login is persisted in `COPILOT_HOME`, so you only need to repeat it when the Copilot session expires or you want to switch accounts.
For an Azure Container Apps deployment with Azure Container Registry, Azure Files-backed wiki storage, Key Vault, and a managed identity, see deploy/README.md.
The deployment assets for the shared instance set Entra auth and CHAPTERHOUSE_MODE=team. In team mode, Chapterhouse seeds the shared wiki on first start so the initial OKR, KPI, team, and shared pages exist immediately.
| Command | Description |
|---|---|
| `chapterhouse start` | Start the Chapterhouse daemon (web UI + HTTP API) |
| `chapterhouse start --open` | Same, plus open the browser |
| `chapterhouse setup` | Interactive first-run configuration |
| `chapterhouse update` | Check for and install updates (npm registry for global installs) |
| `chapterhouse update --check-only` | Print current/latest version without updating |
| `chapterhouse update --ref <ver>` | Install a specific version |
| `chapterhouse daemon <sub>` | Manage the persistent background service |
| `chapterhouse help` | Show available commands |
| Flag | Description |
|---|---|
| `--self-edit` | Allow Chapterhouse to modify its own source code (use with `chapterhouse start`) |
| `--open` | Open the web UI in your default browser when the daemon is ready |
Chapterhouse can run as a persistent user-level background service that starts on login and restarts automatically on crash — no root required.
```
chapterhouse daemon install
```

This writes and loads the appropriate unit file for your OS:
| Platform | Unit file |
|---|---|
| macOS | `~/Library/LaunchAgents/com.bketelsen.chapterhouse.plist` (launchd) |
| Linux | `~/.config/systemd/user/chapterhouse.service` (`systemd --user`) |
| Windows | Not supported — run `chapterhouse start` manually or use Task Scheduler |
```
chapterhouse daemon status      # is it running? what PID? where are the logs?
chapterhouse daemon stop        # stop without uninstalling
chapterhouse daemon start       # start without re-installing
chapterhouse daemon restart     # restart in place
chapterhouse daemon logs        # tail live logs (Ctrl+C to exit)
chapterhouse daemon uninstall   # stop, disable, and remove the unit file
```

| Platform | Location |
|---|---|
| macOS | `~/Library/Logs/chapterhouse.log` |
| Linux | `journalctl --user -u chapterhouse` (no extra config needed) |
Chapterhouse enforces a 3-layer timing contract so in-flight LLM streams can finish cleanly before the process is killed:
| Layer | What it controls | Config | Default |
|---|---|---|---|
| 1 — Orchestrator turn | How long the orchestrator waits per LLM turn | `CHAPTERHOUSE_ORCHESTRATOR_TIMEOUT_MS` | `1800000` (30 min) |
| 2 — Daemon shutdown grace | How long the daemon waits for in-flight work before force-exiting | `CHAPTERHOUSE_SHUTDOWN_TIMEOUT_MS` | `60000` (60 s) |
| 3 — systemd kill window | How long systemd waits after SIGTERM before sending SIGKILL | `TimeoutStopSec` in generated unit | 90 s (fixed) |
Rule: each layer must exceed the one above it. Do not tighten CHAPTERHOUSE_SHUTDOWN_TIMEOUT_MS below CHAPTERHOUSE_ORCHESTRATOR_TIMEOUT_MS, and do not reduce TimeoutStopSec below CHAPTERHOUSE_SHUTDOWN_TIMEOUT_MS.
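That layering rule can be sanity-checked before deploying. The sketch below uses a hypothetical `check_timing` helper (not part of Chapterhouse), and the numbers are illustrative values that satisfy the rule, not the shipped defaults.

```shell
# Assert the layering rule: shutdown grace >= orchestrator turn timeout,
# and the systemd kill window >= shutdown grace. check_timing is hypothetical.
check_timing() {
  orch_ms=$1; shutdown_ms=$2; stop_sec=$3
  [ "$shutdown_ms" -ge "$orch_ms" ] || { echo "shutdown grace < orchestrator timeout"; return 1; }
  [ "$((stop_sec * 1000))" -ge "$shutdown_ms" ] || { echo "kill window < shutdown grace"; return 1; }
  echo "timing contract ok"
}
check_timing 30000 60000 90
```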
Each conversation window (browser tab, terminal session) maps to a separate SessionManager with its own queue and SDK session. Two env vars control how long sessions live:
| Config | What it controls | Default |
|---|---|---|
| `CHAPTERHOUSE_SESSION_IDLE_TTL_MS` | How long an idle session (no in-flight turn, empty queue) is kept before being disconnected | `1800000` (30 min) |
| `CHAPTERHOUSE_SESSION_MAX_ACTIVE` | Maximum number of simultaneously active sessions; when reached, the least-recently-used idle session is evicted to make room | `20` |
Busy sessions (processing a turn or with items queued) are never evicted by either mechanism.
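For example, a daemon serving many browser tabs could raise both limits in `~/.chapterhouse/.env` (the values below are illustrative, not recommendations):

```
CHAPTERHOUSE_SESSION_IDLE_TTL_MS=3600000   # keep idle sessions for 1 h
CHAPTERHOUSE_SESSION_MAX_ACTIVE=40         # allow more concurrent conversation windows
```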
The generated systemd unit and launchd plist compose a rich PATH that includes:
- The installing shell's `$PATH` (captured at install time)
- The binary's own directory
- Linuxbrew (`/home/linuxbrew/.linuxbrew/bin`), Homebrew (`/opt/homebrew/bin`, `/usr/local/bin`)
- `~/.cargo/bin`, `~/.bun/bin`, `~/.volta/bin`, `~/.local/bin`
- Standard system paths
This ensures sub-agents and spawned tools can find CLI dependencies even in a headless service context.
The browser app at http://localhost:7788 is split into a few views:
- **Chat** — streaming markdown chat with the orchestrator. Type `/help` to see slash commands (`/cancel`, `/copy`, `/clear`, `/restart`, plus shortcuts to other tabs).
- **Workers** — list of running and recent agent tasks. Click one to inspect its description, status, and final output.
- **Wiki** — browse and edit local wiki pages under `~/.chapterhouse/wiki/pages/`. On personal instances, reads of team-scoped paths go through the team wiki sync layer instead of editing the shared source directly.
- **History** — daily conversation summaries written to your personal wiki under `pages/conversations/`.
- **Skills** — list of installed skills, broken out by source (bundled / local / global). Local skills can be uninstalled in place.
- **Settings** — model picker, auto-routing toggle, restart button.
```
Browser ──HTTP / SSE──► Chapterhouse Daemon
                                │
                  Orchestrator Session (Copilot SDK)
                                │
                    ┌───────────┼───────────┐
                Worker 1     Worker 2    Worker N
```
- **Daemon** (`chapterhouse start`) — persistent service running Copilot SDK + HTTP API + SPA static server.
- **Web UI** — Vite-built React SPA, served by the daemon out of `web/dist/`.
- **Orchestrator** — long-running Copilot session with custom tools for session management.
- **Workers** — child Copilot sessions for specific coding tasks.
Chapterhouse uses two wiki layers on purpose: personal memory stays local, while team knowledge has one authoritative home. That split is what lets an engineer save private notes and conversation history freely without turning shared planning documents into a pile of unsynchronized copies.
Your personal wiki lives at ~/.chapterhouse/wiki/.
- It stores your local preferences, conversation history, personal notes, and the cached copies of team pages fetched from the team deployment.
- It is never shared or synced back to Team Chapterhouse.
- Your local instance can write to it freely, which makes it the right place for agent memory, scratch knowledge, and anything that should not become team policy just because one assistant session mentioned it.
The personal-memory tools still work against this local wiki:
- `remember` — save a fact, preference, project note, or decision into your private wiki
- `recall` / `wiki_search` / `wiki_read` — search the local wiki index, then open specific pages
- `forget` — remove lines, revise sections, or delete a local page
- **Index-first context** — Chapterhouse injects a ranked wiki summary into prompts so agents see what they already know without loading the full wiki every turn
- **Episodic memory** — daily conversation summaries are written asynchronously to `pages/conversations/YYYY-MM-DD.md`
The team wiki lives on the Team Chapterhouse instance and is the authoritative source for shared operating knowledge. Personal instances sync from it read-only so there is one place to manage OKRs, KPIs, team membership, and shared runbooks.
This is why the more sensitive namespaces are restricted: quarterly targets, KPI definitions, and official team roster data should not drift because several personal instances edited different copies at the same time. Shared notes stay collaborative, but the canonical planning data stays controlled.
On Entra-backed deployments, ENTRA_TEAM_LEAD_ID is optional for regular engineers. When it is set, Chapterhouse marks the signed-in user as team-lead when their token's oid matches that value. If it is omitted or the oid does not match, the authenticated user is treated as an engineer, which is the correct role for normal team members.
Page namespaces and write permissions:
| Path | Who can write | Purpose |
|---|---|---|
| `pages/okrs/` | Team lead only | OKR definitions and quarterly targets |
| `pages/kpis/` | Team lead only | Team KPI tracking |
| `pages/team/` | Team lead only | Team member profiles, onboarding |
| `pages/shared/` | Any team member | Notes, decisions, runbooks, shared knowledge |
On a fresh team deployment, Chapterhouse seeds the wiki with starter pages for:
- `pages/okrs/2026-Q2.md`
- `pages/kpis/team.md`
- `pages/team/index.md`
- `pages/team/onboarding.md`
- `pages/shared/README.md`
Set TEAM_CHAPTERHOUSE_URL on a personal instance to enable team wiki sync. Once enabled:
- Requests to the team instance reuse the current browser session's `Authorization` header automatically; `TEAM_CHAPTERHOUSE_TOKEN` is an optional fallback for non-browser or pre-authenticated flows
- Team pages are fetched from the Team Chapterhouse API on demand
- Cached pages are stored locally at `~/.chapterhouse/wiki/.team-cache/`
- Cache freshness is controlled by `TEAM_WIKI_CACHE_TTL_MINUTES` (default: `60`)
- If Team Chapterhouse is unreachable, Chapterhouse serves stale cached content instead of failing hard
- Synced namespaces come from `TEAM_WIKI_PATHS` (default: `pages/team,pages/okrs,pages/kpis,pages/shared`)
- Listing wiki pages in the personal UI triggers a full sync and indexes the fetched team pages locally so they are discoverable in the wiki browser and prompt context
Relevant personal-instance settings:
```
TEAM_CHAPTERHOUSE_URL=https://your-team-chapterhouse-url.azurecontainerapps.io
TEAM_CHAPTERHOUSE_TOKEN=
TEAM_WIKI_CACHE_TTL_MINUTES=60
TEAM_WIKI_PATHS=pages/team,pages/okrs,pages/kpis,pages/shared
```

That cache is there for reliability as much as speed: engineers can keep reading the last known team state even when the shared deployment is temporarily unavailable.
The shared deployment exposes these wiki endpoints:
- `GET /api/team/wiki` — list all team wiki pages under the configured synced namespaces
- `GET /api/team/wiki/:path` — read a page (any team member)
- `PUT /api/team/wiki/:path` — write a page (subject to namespace permissions)
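Calling those endpoints with curl could look roughly like this. It is a hedged sketch: the host and token are placeholders, and the PUT payload format (raw page body) is an assumption, so the network calls are shown as comments rather than run.

```shell
# Placeholders — substitute your team deployment URL and a valid bearer token.
TEAM_URL="https://your-team-chapterhouse-url.azurecontainerapps.io"
AUTH="Authorization: Bearer your-token"

# List all team wiki pages:
#   curl -s -H "$AUTH" "$TEAM_URL/api/team/wiki"
# Read one page (any team member):
#   curl -s -H "$AUTH" "$TEAM_URL/api/team/wiki/pages/shared/README.md"
# Write a shared page (namespace permissions apply; body format is assumed):
#   curl -s -X PUT -H "$AUTH" --data-binary @notes.md \
#        "$TEAM_URL/api/team/wiki/pages/shared/notes.md"
echo "$TEAM_URL/api/team/wiki"
```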
Personal instances can work with the team wiki through dedicated tools instead of editing the shared source by hand:
- `write_team_wiki` — write shared notes to `pages/shared/` from any personal instance
- `get_my_okrs` — fetch the key results currently owned by the signed-in user from the team wiki
```
# Clone and install
git clone https://github.com/bketelsen/chapterhouse.git
cd chapterhouse
npm install
npm --prefix web install

# Watch mode (server)
npm run dev

# Watch mode (web UI, in a second terminal — proxies to the daemon on 7788)
npm run dev:web

# Build everything
npm run build

# Run tests
npm test

# Lint user-facing markdown (README, CHANGELOG, docs/, .github/)
npm run lint:md
```

The web UI lives in `web/`. Production builds emit to `web/dist/`, which the Express server (`src/api/server.ts`) serves.
The release flow is: bump version → commit → tag → push (tag triggers npm publish via CI).
```
# 1. Bump the version in package.json
npm version patch   # or minor / major

# 2. Push the commit and the tag
git push origin main --follow-tags
```

`npm version` handles the commit and tag automatically. `prepublishOnly` runs `npm run build` before publish so the tarball always contains a fresh build. If you don't have CI set up, publish manually with `npm publish` after the tag push.
Pre-release gate:
`preversion` runs `npm run release:check` automatically before any `npm version` call. The script aborts with a clear error if the git working tree is dirty. Stash or commit all changes before bumping the version.
All commits on this repository follow Conventional Commits v1.0.0. The format is `<type>(<scope>): <subject>` (e.g. `feat(api): add session export endpoint`). Allowed types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`, `build`, `ci`, `revert`, `release`.
This is automatically enforced:
- **Locally**: `husky` installs a `commit-msg` git hook on `npm install` that runs `commitlint` against every commit message. Bad messages are rejected before the commit lands.
- **On PRs**: A GitHub Action (`lint-pr-title.yml`) validates the PR title on every open/edit. This matters because squash-merges use the PR title as the commit message on `main`.
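The shape of the check commitlint enforces can be approximated with a single regex. This is a simplification for illustration only; the real gate is commitlint's configuration, which also covers things this pattern ignores (casing, length, body format).

```shell
# Approximate Conventional Commits check. The allowed-type list comes from
# this repo's convention; the regex itself is a simplified stand-in for commitlint.
valid_commit() {
  echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert|release)(\([a-z0-9-]+\))?(!)?: .+'
}

valid_commit 'feat(api): add session export endpoint' && echo "ok"
valid_commit 'update stuff' || echo "rejected"
```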