A self-hosted workforce management platform for Claude Code. Manage, monitor, and control Claude Code sessions across multiple machines from a single web dashboard.
```
            Claude HQ Dashboard
            (Nuxt 3 + Vuetify 3)
 +----------+ +----------+ +----------+
 | Session 1| | Session 2| | Session 3|   + Queue
 | studio-pc| | macbook  | | nuc-srv  |   + Approvals
 | running  | | running  | | queued   |   + Notifications
 +----------+ +----------+ +----------+
                   |
            +------+------+
            |   Hub API   |
            |  (Fastify)  |
            |  + SQLite   |
            +------+------+
                   |
       +-----------+-----------+
       |           |           |
  +----+----+ +----+----+ +----+----+
  |  Agent  | |  Agent  | |  Agent  |
  |studio-pc| | macbook | | nuc-srv |
  | PTY x2  | | PTY x1  | | PTY x2  |
  +---------+ +---------+ +---------+
```
- Live Terminal Streaming -- Watch Claude Code work in real-time via xterm.js in your browser
- Multi-Machine Management -- Agents on any machine connect to a central Hub over Tailscale
- Session Queue -- Queue tasks with priority ordering, auto-advance when slots free up
- Approval System -- Policy engine auto-approves safe actions, queues risky ones for human review
- Session Replay -- Replay completed sessions with timeline scrubber and speed controls
- Repository Registry -- Register repos, auto-detect dependencies, launch jobs against any codebase
- Job Orchestration -- Full lifecycle: clone repo, create branch, install deps, run agent, commit, create PR
- GitHub Integration -- Auto-create PRs, report status via Checks API, receive webhooks
- Cost Tracking -- Per-session token/cost tracking with daily/monthly budgets
- Scheduled Tasks -- Cron-based recurring prompts
- Notifications -- Webhooks to Discord, Slack, ntfy.sh, or any endpoint
- Docker Deployment -- Single `docker compose up` runs everything
- Dark/Light Theme -- Vuetify 3 with custom color scheme
- Node.js >= 20
- pnpm >= 9 (installed via corepack: `corepack enable`)
- Claude Code installed on agent machines (`curl -fsSL https://claude.ai/install.sh | bash`)
- Tailscale (optional but recommended) for mesh networking between machines
- Docker (optional) for containerized deployment
```bash
git clone https://github.com/lasswellt/claudeHQ.git
cd claudeHQ
corepack enable
pnpm install
pnpm turbo build
cd packages/hub
node dist/index.js
```

The Hub starts on http://localhost:7700. Verify:

```bash
curl http://localhost:7700/health
# {"status":"ok","version":"0.1.0","uptime":1.23,"machines":0,"connectedAgents":0}
```

Then start the dashboard:

```bash
cd packages/dashboard
pnpm dev
```

Opens at http://localhost:3000. The dashboard proxies API calls to the Hub automatically.
Create `~/.chq/config.json` on each machine that will run Claude Code:

```json
{
  "machineId": "my-machine",
  "displayName": "My Development Machine",
  "hubUrl": "ws://localhost:7700"
}
```

If using Tailscale, replace `localhost` with the Hub machine's Tailscale IP:

```json
{
  "hubUrl": "ws://100.x.x.x:7700"
}
```

Then start the agent:

```bash
cd packages/agent
node dist/cli.js agent start
```

The agent connects to the Hub, registers, and starts sending heartbeats. You should see the machine appear in the dashboard.
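To keep the agent running after reboots or crashes, one option is a user-level systemd unit. This is a sketch, not part of Claude HQ: the unit name is made up, and it assumes the repo was cloned to `~/claudeHQ` with `node` on the PATH -- adjust `WorkingDirectory` and `ExecStart` to your layout.

```ini
# ~/.config/systemd/user/chq-agent.service (hypothetical unit name)
[Unit]
Description=Claude HQ agent
After=network-online.target

[Service]
WorkingDirectory=%h/claudeHQ/packages/agent
ExecStart=/usr/bin/env node dist/cli.js agent start
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now chq-agent`, and run `loginctl enable-linger $USER` so it starts without an active login session.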
Prerequisites: Docker Engine and Docker Compose installed.
```bash
# Step 1: Clone the repository
git clone https://github.com/lasswellt/claudeHQ.git
cd claudeHQ

# Step 2: Create your environment file
cp .env.example .env
# Edit .env if you want to change the port or log level

# Step 3: Create data directories (Docker will mount these)
mkdir -p data/db data/recordings

# Step 4: Build the Docker image
# This uses a multi-stage build: installs deps, builds TypeScript,
# copies only production artifacts into the final image (~250MB)
docker compose build

# Step 5: Start the Hub
docker compose up -d

# Step 6: Verify it's running
curl http://localhost:7700/health
# {"status":"ok","version":"0.1.0","uptime":1.23,"machines":0,"connectedAgents":0}

# Step 7: Open the dashboard
# Navigate to http://localhost:7700 in your browser
# (In production, the Hub serves the dashboard's static files directly)
```

Useful commands:

```bash
docker compose logs -f        # Watch logs
docker compose down           # Stop
docker compose up -d          # Start again (data persists in ./data/)
docker compose build --pull   # Rebuild with latest base image
```

If you're running on a Windows machine with WSL2:
```bash
# Step 1: Open your WSL2 terminal (Ubuntu recommended)
# Make sure Docker is available inside WSL2:
docker --version          # Should show Docker version
docker compose version    # Should show Compose version

# If Docker isn't installed in WSL2, install it:
# Option A: Docker Desktop (enable WSL2 backend in Docker Desktop settings)
# Option B: Docker Engine directly in WSL2:
sudo apt-get update
sudo apt-get install -y docker.io docker-compose-v2
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect

# Step 2: Clone the repo INSIDE WSL2's filesystem (NOT /mnt/c/)
# This is critical for performance -- /mnt/c/ is 10-50x slower
cd ~
git clone https://github.com/lasswellt/claudeHQ.git
cd claudeHQ

# Step 3: Create environment and data directories
cp .env.example .env
mkdir -p data/db data/recordings

# Step 4: Build and start
docker compose build
docker compose up -d

# Step 5: Verify
curl http://localhost:7700/health

# Step 6: Access from Windows browser
# The dashboard is available at http://localhost:7700
# WSL2 automatically forwards ports to Windows (localhost forwarding)
```

WSL2-specific notes:
- Always clone repos to the Linux filesystem (`~/projects/`), NOT `/mnt/c/Users/...`. The Windows mount is dramatically slower for I/O-intensive operations like `npm install` and SQLite.
- If `localhost` forwarding doesn't work, find your WSL2 IP with `hostname -I` and use that IP in the browser.
- To auto-start the Hub on Windows boot, create a Windows Task Scheduler task that runs `wsl -d Ubuntu -- docker compose -f ~/claudeHQ/docker-compose.yml up -d`.
- Docker Desktop's WSL2 backend is easiest. If using Docker Engine directly in WSL2, make sure systemd is enabled (add `systemd=true` under the `[boot]` section of `/etc/wsl.conf`).
If you want agents on other machines to connect to the Hub:
```bash
# On the Hub machine:
cd claudeHQ

# Edit .env to add your Tailscale auth key
# Get one at: https://login.tailscale.com/admin/settings/keys
echo "TS_AUTHKEY=tskey-auth-xxxxx" >> .env

# Use the Tailscale-enabled compose file
docker compose -f docker-compose.yml up -d

# The Hub will be accessible at its Tailscale IP (100.x.x.x:7700)
# and at claude-hq.<your-tailnet>.ts.net if using Tailscale Serve
```

Compose with Tailscale sidecar:
```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: claude-hq
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ts-state:/var/lib/tailscale
    devices: ["/dev/net/tun:/dev/net/tun"]
    cap_add: [NET_ADMIN, SYS_MODULE]
    restart: unless-stopped

  hub:
    build: { context: ., dockerfile: Dockerfile.hub }
    network_mode: service:tailscale   # Shares Tailscale's network
    environment:
      - CHQ_HUB_PORT=7700
    volumes:
      - ./data/db:/app/data/db
      - ./data/recordings:/app/data/recordings
    depends_on: [tailscale]
    restart: unless-stopped

volumes:
  ts-state:
```

The Agent runs on machines where you want Claude Code to execute. It does NOT run inside the Hub's Docker container -- it runs separately (bare metal, its own Docker container, or in WSL2).
```bash
# On the agent machine (must have Claude Code installed):

# 1. Install Claude Code
curl -fsSL https://claude.ai/install.sh | bash

# 2. Install Node.js >= 20
# (use nvm, fnm, or a system package manager)

# 3. Clone Claude HQ and build the agent
git clone https://github.com/lasswellt/claudeHQ.git
cd claudeHQ
corepack enable
pnpm install
pnpm turbo build --filter=@chq/shared --filter=@chq/agent

# 4. Configure the agent
mkdir -p ~/.chq
cat > ~/.chq/config.json << 'EOF'
{
  "machineId": "my-agent-machine",
  "displayName": "My Agent Machine",
  "hubUrl": "ws://HUB_IP_OR_TAILSCALE_IP:7700",
  "maxConcurrentSessions": 2
}
EOF

# 5. Start the agent
cd packages/agent
node dist/cli.js agent start
# The agent will connect to the Hub and appear in the dashboard
```

The `Dockerfile.hub` uses a multi-stage build:
- Stage 1: `turbo prune` -- extract only the hub + shared packages from the monorepo
- Stage 2: `pnpm install` -- install dependencies (with native module compilation)
- Stage 3: `turbo build` -- compile TypeScript
- Stage 4: prod deps -- install production-only dependencies
- Stage 5: runtime -- `node:22-bookworm-slim` with built artifacts + dashboard static files

Final image: ~250MB. The base is Debian (not Alpine) because better-sqlite3 needs glibc.
| Variable | Default | Description |
|---|---|---|
| `CHQ_HUB_PORT` | `7700` | HTTP/WebSocket port |
| `CHQ_HUB_HOST` | `0.0.0.0` | Bind address |
| `CHQ_HUB_DATABASEPATH` | `./data/db/chq.db` | SQLite database path |
| `CHQ_HUB_RECORDINGSPATH` | `./data/recordings` | JSONL recording storage |
| `CHQ_HUB_LOGLEVEL` | `info` | Log level (fatal/error/warn/info/debug/trace) |
| `CHQ_HUB_DASHBOARDSTATICPATH` | - | Path to dashboard static files (production) |
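Put together as a `.env` file, the table above looks like this (every value shown is just the default restated; set `CHQ_HUB_DASHBOARDSTATICPATH` only when serving the dashboard from the Hub in production):

```shell
# .env -- Hub configuration (defaults from the table above)
CHQ_HUB_PORT=7700
CHQ_HUB_HOST=0.0.0.0
CHQ_HUB_DATABASEPATH=./data/db/chq.db
CHQ_HUB_RECORDINGSPATH=./data/recordings
CHQ_HUB_LOGLEVEL=info
```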
```json
{
  "machineId": "studio-pc",
  "displayName": "Studio PC",
  "hubUrl": "ws://100.x.x.x:7700",
  "claudeBinary": "claude",
  "defaultFlags": [],
  "defaultCwd": "/home/user/projects",
  "maxConcurrentSessions": 2,
  "recordingChunkIntervalMs": 100,
  "recordingUploadIntervalMs": 5000
}
```

| Field | Required | Default | Description |
|---|---|---|---|
| `machineId` | Yes | - | Unique machine identifier |
| `hubUrl` | Yes | - | WebSocket URL of the Hub |
| `displayName` | No | `machineId` | Human-readable name |
| `claudeBinary` | No | `claude` | Path to Claude Code binary |
| `defaultFlags` | No | `[]` | Default CLI flags for sessions |
| `maxConcurrentSessions` | No | `2` | Max parallel PTY sessions |
| Page | URL | Description |
|---|---|---|
| Overview | `/` | Machine cards, recent sessions, New Session button |
| Jobs | `/jobs` | Job lifecycle tracking (pending, running, completed) |
| Repos | `/repos` | Repository registry, import from GitHub URL |
| Pull Requests | `/prs` | PRs created by agents, review/CI status |
| Sessions | `/sessions` | Search, filter, session history |
| Session Detail | `/sessions/:id` | Live terminal, metadata, input bar, kill/resume |
| Session Replay | `/sessions/:id/replay` | Playback with timeline, speed controls |
| Session Grid | `/sessions/grid` | 2x2 or 1x4 multi-terminal view |
| Machines | `/machines` | Machine cards with health sparklines |
| Machine Detail | `/machines/:id` | Sessions, health charts, queue |
| Queue | `/queues` | Per-machine task queues, add/remove/reorder |
| Approvals | `/approvals` | Pending approval requests, bulk actions |
| Scheduled Tasks | `/scheduled-tasks` | Cron-based recurring tasks |
| Costs | `/costs` | Today/week/month spend, cost by repo/machine |
| Settings | `/settings/approval-policies` | Approval policy rules |
| GitHub | `/settings/github` | GitHub App setup wizard |
Claude HQ can auto-create pull requests when jobs complete.
- Navigate to Settings > GitHub in the dashboard
- Click Create GitHub App -- this uses the GitHub manifest flow
- GitHub creates the app and returns credentials automatically
- Install the app on your repositories
- Claude HQ handles token rotation automatically
- Create a fine-grained PAT at `github.com/settings/personal-access-tokens/new`
- Grant permissions: Contents (write), Pull Requests (write), Issues (write)
- Enter the token in Settings > GitHub
- Note: No webhooks, no Checks API with PAT
Claude HQ includes a policy engine that auto-resolves safe actions and queues risky ones:
| Priority | Rule | Action |
|---|---|---|
| 10 | Read, Glob, Grep, LS, View | Auto-approve |
| 20 | Bash: `rm -rf`, `sudo`, `curl \| bash` | Auto-deny |
| 30 | Bash: `ls`, `git status`, `npm test` | Auto-approve |
| 40 | Write, Edit (code files) | Auto-approve |
| 50 | All other Bash commands | Require approval |
| 1000 | Default (everything else) | Require approval |
Configure rules in Settings > Approval Policies or via POST /api/approval-policies.
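Rules can also be created over the API. The snippet below only builds and prints a candidate request body; the field names (`priority`, `tool`, `pattern`, `action`) are assumptions inferred from the table above, not a documented schema -- verify against the dashboard's own requests before relying on them. The commented `curl` shows how it would be sent.

```shell
# Candidate body for POST /api/approval-policies (field names are illustrative)
POLICY='{
  "priority": 35,
  "tool": "Bash",
  "pattern": "git diff*",
  "action": "auto-approve"
}'
printf '%s\n' "$POLICY"
# To submit it (assumes the Hub on localhost:7700):
#   curl -X POST http://localhost:7700/api/approval-policies \
#     -H 'Content-Type: application/json' -d "$POLICY"
```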
```
GET    /api/sessions                 List sessions (?machine=X&status=running)
GET    /api/sessions/:id             Session detail
POST   /api/sessions                 Start session { machineId, prompt, cwd }
DELETE /api/sessions/:id             Kill session
POST   /api/sessions/:id/input       Send PTY input { input: "yes\n" }
POST   /api/sessions/:id/resume      Resume with follow-up { prompt }
GET    /api/sessions/:id/recording   Stream JSONL recording

GET    /api/machines                 List machines
GET    /api/machines/:id             Machine detail + sessions
GET    /api/machines/:id/health      Health history (?hours=24)

GET    /api/jobs                     List jobs (?repoId=X&status=running)
POST   /api/jobs                     Create job { repoId, title, prompt }
POST   /api/jobs/:id/cancel          Cancel job
POST   /api/jobs/:id/create-pr       Create PR from job
POST   /api/jobs/batch               Batch create { repoIds[], prompt }

GET    /api/repos                    List repositories
POST   /api/repos                    Register repo
POST   /api/repos/import             Import from GitHub URL { url }
PUT    /api/repos/:id                Update repo config
```
Check Hub status with `GET /health` and explore the full API via the dashboard.
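As a worked example, the session endpoints chain into a start-then-drive workflow. The block below only prints the request body (its shape is taken from the endpoint list above; `studio-pc` and the paths are placeholders); the commented `curl` lines show the actual calls.

```shell
# Body for POST /api/sessions, per the endpoint list above
SESSION_BODY='{"machineId":"studio-pc","prompt":"Run the test suite and fix failures","cwd":"/home/user/projects"}'
printf '%s\n' "$SESSION_BODY"
# Start the session (assumes the Hub on localhost:7700):
#   curl -X POST http://localhost:7700/api/sessions \
#     -H 'Content-Type: application/json' -d "$SESSION_BODY"
# Answer an interactive prompt by writing to the session PTY:
#   curl -X POST http://localhost:7700/api/sessions/<id>/input \
#     -H 'Content-Type: application/json' -d '{"input":"yes\n"}'
```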
```
packages/
  shared/     Zod schemas, TypeScript types, WebSocket protocol
  agent/      Node.js daemon: PTY pool, WS client, recorder, git ops,
              Docker/SSH spawn, queue, scrubber, devcontainer detection
  hub/        Fastify server: SQLite (9 migrations), REST API (10 route files),
              WS relay, approval engine, notifications, GitHub client, cron, costs
  dashboard/  Nuxt 3 SPA: Vuetify 3, xterm.js, 17 pages, Pinia stores
```

Dependency rules:

```
agent     --> shared   (allowed)
hub       --> shared   (allowed)
dashboard --> shared   (allowed, browser entrypoint only)
*         --> *        (forbidden between agent/hub/dashboard)
```
```bash
# Install dependencies
pnpm install

# Build all packages
pnpm turbo build

# Run tests (36 tests)
npx vitest run

# Start Hub in dev mode
cd packages/hub && node dist/index.js

# Start Dashboard in dev mode (with API proxy)
cd packages/dashboard && pnpm dev

# Lint
pnpm lint

# Format
pnpm format
```

```bash
make build    # docker compose build
make up       # docker compose up -d
make down     # docker compose down
make logs     # docker compose logs -f hub
make status   # show health + container status
make backup   # sqlite3 .backup
make test     # vitest run
```

- Default-deny approval system -- unresolved tool calls are denied, not allowed
- No `--dangerously-skip-permissions` by default -- must be explicitly configured per agent
- Shell injection protected -- all git operations use `execFileSync` with argument arrays
- SQL injection protected -- parameterized prepared statements everywhere, column whitelisting
- GitHub webhook verification -- HMAC-SHA256 signature validation
- Zod validation -- all WebSocket messages and API payloads validated at the boundary
- Secrets never in Hub DB -- agent secrets resolved locally via env/file references
- Tailscale network boundary -- no public exposure required
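The `execFileSync` point deserves a quick illustration. Passing arguments as an array means each one arrives as a single argv entry, so shell metacharacters inside it are never interpreted -- the same guarantee double-quoting gives you in the shell. The demo below is pure shell and touches nothing on disk:

```shell
BRANCH='pr-42; rm -rf /'   # hostile "branch name"
# Quoted, this reaches printf as ONE argument -- the `;` is inert, just as it
# would be in execFileSync("git", ["checkout", "-b", branch]) in Node.
printf '%s\n' "$BRANCH"
```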
MIT