Tell it what to build. Agents investigate, plan, implement, and ship.
Quick Start • Screenshots • Docs • MCP • Contributing • License
Dispatch sends Claude Code agents to investigate, plan, implement, test, and research across your codebase -- all orchestrated through a terminal UI with live streaming. Every mission runs as a real Claude Code session in GitHub Actions, with structured activity tracking.
Terminal -- Describe what you want done. The orchestrator creates issues, picks a workflow, and dispatches agents.
Orchestrator response -- The agent searches existing missions, scopes the work, and confirms before dispatching.
Board -- Track all missions across your pipeline. Issues flow from waiting, to development, to PR, to verification.
GitHub Actions -- Each agent runs as a real Claude Code session. Watch it read files, run commands, and write code.
- Terminal-first orchestration -- Type what you want done in natural language. The orchestrator creates issues, selects workflows, and dispatches agents.
- 5 workflow types -- Auto-investigate (from error alerts), PRD investigate (exploration + plan), PRD implement (code changes + PR), QA test, and deep research.
- Plan-review-implement loop -- Agents investigate first, post a structured plan for human review, then implement on approval.
- Live terminal streaming -- Watch agent output in real-time via SSE as Claude Code reads files, runs commands, and writes code.
- Customer portal -- Give stakeholders read-only portal links to track their issues and agent activity.
- MCP server -- Integrate with Claude Code, Cursor, or any MCP-compatible client for bidirectional issue management.
- Budget & safety controls -- Daily spawn limits, per-issue concurrency, cooldowns, and a kill switch.
```
Terminal UI (Next.js)
        |
        v
Orchestrator (Claude API + tools)
        |
        v
GitHub Actions (Claude Code agents)
        |
        v
Your Codebase (PRs targeting your default branch)
        |
        v
Dispatch API (activity, plans, terminal streaming)
```
Stack: Next.js 15, PostgreSQL + pgvector, WebSocket (terminal streaming), GitHub Actions (agent runtime), Claude API (orchestrator + agents).
```
git clone https://github.com/chipp-ai/dispatch.git
cd dispatch
npm install
```

Start the database:

```
docker-compose up -d db
```

Or use an existing PostgreSQL instance -- it needs the pgvector extension. If using your own Postgres:

```
createdb dispatch
psql dispatch -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

Run the migration:

```
psql postgresql://postgres:postgres@localhost:5432/dispatch -f scripts/migrations/001-init.sql
```

This creates all 21 tables, enums, and indexes in a single step.
```
cp .env.example .env
```

Edit `.env` with your values. Minimum required:

```
PG_DATABASE_URL=postgresql://postgres:postgres@localhost:5432/dispatch
ANTHROPIC_API_KEY=sk-ant-...      # For orchestrator + agents
DISPATCH_PASSWORD=your-password   # Web UI login
GITHUB_REPO=your-org/your-repo    # Target codebase
GITHUB_TOKEN=ghp_...              # For dispatching agents (see note below)
```

GitHub token: Must have the `repo` and `workflow` scopes. Two options:

- Classic PAT (recommended): Create at github.com/settings/tokens/new with the `repo` and `workflow` scopes selected.
- `gh auth token`: Run `gh auth refresh -h github.com -s workflow` to add the required `workflow` scope to your existing token, then use `gh auth token` to get the value.

Tokens without the `workflow` scope will fail with a 502 Server Error when dispatching agents.
Make it yours:
```
# Set your project name and issue prefix in .env
NEXT_PUBLIC_APP_NAME=MyProject
DEFAULT_ISSUE_PREFIX=ENG

# Generate braille art from your logo (displayed in the terminal)
npm run generate-logo -- path/to/your-logo.png
```

Start the dev server:

```
npm run dev
```

Open http://localhost:3002 and log in with your `DISPATCH_PASSWORD`.
Dispatch spawns agents by triggering GitHub Actions workflow_dispatch events on your target repository. The agent workflows ship with Dispatch and need to be copied into your target repo.
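Mechanically, each dispatch is one `workflow_dispatch` REST call against the target repo. A sketch of building that request -- the endpoint path is GitHub's real API, but the `inputs` keys below are illustrative, not Dispatch's actual payload:

```typescript
// Build the GitHub REST request that triggers a workflow_dispatch event.
// Endpoint: POST /repos/{owner}/{repo}/actions/workflows/{file}/dispatches
// The `inputs` keys Dispatch sends are defined by the workflow files, so
// treat the example keys here as placeholders.
interface DispatchRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildDispatchRequest(
  token: string,
  repo: string,          // "owner/repo"
  workflowFile: string,  // e.g. "prd-investigate.yml"
  ref: string,
  inputs: Record<string, string>,
): DispatchRequest {
  return {
    url: `https://api.github.com/repos/${repo}/actions/workflows/${workflowFile}/dispatches`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ ref, inputs }),
  };
}
```

Pass the result to `fetch()`; GitHub replies 204 No Content on success. A 502 at this step usually means the token lacks the `workflow` scope.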
```
cp -r .github/workflows/{auto-investigate,prd-investigate,prd-implement,qa-test,deep-research}.yml ../your-repo/.github/workflows/
```

Go to your target repo's Settings > Secrets and variables > Actions and add:
| Secret | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | Yes | Claude API key. Agents use this to run Claude Code sessions during workflows. |
| `DISPATCH_API_URL` | For callbacks | Your Dispatch instance URL (e.g. `https://dispatch.yoursite.com`). Agents call back to report activity, post plans, and stream terminal output. Without this, agents still run but can't report back. |
| `DISPATCH_API_KEY` | For callbacks | Must match the `DISPATCH_API_KEY` in your Dispatch `.env`. Used to authenticate agent callbacks. |
Tip: For initial testing, you only need `ANTHROPIC_API_KEY`. The agent will run and investigate your codebase. Add `DISPATCH_API_URL` and `DISPATCH_API_KEY` when you're ready for live terminal streaming and activity tracking.
These environment variables control how Dispatch dispatches workflows:
| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Yes | Must have `repo` and `workflow` scopes. Use a classic PAT, or run `gh auth refresh -h github.com -s workflow` then `gh auth token`. Without the `workflow` scope, dispatches fail with 502. |
| `GITHUB_REPO` | Yes | Target repo in `owner/repo` format (e.g. `acme/backend`). |
| `GITHUB_REF` | No | Branch for workflow dispatch (default: `main`). Agents check out this branch when they run. Set this to your default branch if it's not `main`. |
| `NEXT_PUBLIC_GITHUB_REPO` | No | Same `owner/repo` value. Enables "View on GitHub" links in the UI for workflow runs. |
All workflows accept an optional `ref` input that controls which branch agents check out. The precedence is:

1. `ref` input passed per-dispatch (set automatically from `GITHUB_REF`)
2. Falls back to `github.ref_name` (the branch the workflow was dispatched on)
3. Defaults to `main` if neither is set
If your default branch is `main`, no extra configuration is needed. If it's something else (e.g. `develop`), set `GITHUB_REF=develop` in your `.env`.
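The three-step precedence above can be sketched as a small resolver (a hypothetical helper -- the real logic lives in the workflow YAML, not in a function like this):

```typescript
// Resolve which branch an agent checks out, mirroring the documented precedence:
// 1. explicit `ref` input  2. the branch the workflow was dispatched on  3. "main"
function resolveCheckoutRef(refInput?: string, workflowBranch?: string): string {
  if (refInput) return refInput;             // per-dispatch input (from GITHUB_REF)
  if (workflowBranch) return workflowBranch; // github.ref_name fallback
  return "main";                             // final default
}
```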
Dispatch agents are Claude Code sessions that check out your codebase and work autonomously. The single most impactful thing you can do is add a CLAUDE.md file to your repository root. This is the context file that Claude Code reads at the start of every session -- it's how agents understand your project.
Create a CLAUDE.md in your repository root. This file should give an agent everything it needs to navigate, understand, and safely modify your codebase.
What to include:
- Project overview -- What the project does and its core architecture.
- Tech stack -- Language, framework, database, key libraries.
- Project structure -- Directory layout with one-line descriptions.
- Development commands -- How to run dev server, tests, lint, build. Be specific (e.g. `npm run test -- --watch=false`, not just "run the tests").
- Key patterns -- How requests flow, how errors are handled, how auth works, how DB queries are structured, how tests are written.
- Critical rules -- Things agents must never do (e.g. "never modify migration files that have already run", "always use parameterized queries").
- Common pitfalls -- Known gotchas, environment quirks, things that look wrong but are intentional.
Why this matters: Without a CLAUDE.md, agents can still read your code, but they'll spend time figuring out basics that you could tell them upfront. A good CLAUDE.md means agents spend more time solving the actual problem and less time orienting themselves.
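A minimal skeleton to adapt -- every project name, path, and command below is a placeholder, not a recommendation for your stack:

```markdown
# MyProject

Payments API for Acme. Next.js app + PostgreSQL.

## Commands
- Dev server: `npm run dev` (port 3000)
- Tests: `npm run test -- --watch=false`
- Lint: `npm run lint`

## Structure
- `api/` -- route handlers; auth middleware in `api/middleware/auth.ts`
- `db/` -- migrations (never edit a migration that has already run)
- `tests/` -- one spec file per route, fixtures in `tests/fixtures/`

## Rules
- Always use parameterized queries.
- PRs target `develop`, not `main`.
- Every feature PR needs tests, and a migration if the schema changes.
```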
For larger codebases, you can add CLAUDE.md files to subdirectories. Claude Code automatically reads these when working in that directory. Useful for:
- `db/CLAUDE.md` -- Migration conventions, schema rules, query patterns
- `api/CLAUDE.md` -- Route patterns, auth middleware, request validation
- `tests/CLAUDE.md` -- Testing conventions, fixture setup, mock patterns
- Be specific about commands. Instead of "run the tests", say `npm run test -- --watch=false`. Agents execute exactly what you write.
- Document your git workflow. Tell agents which branch to target for PRs, whether you use conventional commits, and any CI checks that must pass.
- List your non-obvious environment requirements. If your tests need a running database, Redis, or specific env vars, say so. Agents will try to set these up.
- Explain what "done" looks like. If every feature needs tests, types, and a migration, say that explicitly. Agents follow the standards you set.
- Include error handling patterns. Show agents how you want errors logged and handled. This prevents agents from introducing inconsistent error handling.
- Keep it updated. A stale `CLAUDE.md` is worse than none. When you change conventions, update the file. It's the source of truth for every agent session.
- Go to your repo's Settings > Webhooks > Add webhook
- Payload URL: `https://your-dispatch-instance.com/api/github/webhook`
- Content type: `application/json`
- Secret: Same as `GITHUB_WEBHOOK_SECRET` in your `.env`
- Events: Pull requests
See Using Sentry or other error trackers in the autonomous error remediation section.
See .env.example for all available configuration options with descriptions.
Dispatch is fully white-labelable via environment variables:
| Variable | Default | Description |
|---|---|---|
| `NEXT_PUBLIC_APP_NAME` | `Dispatch` | Display name in UI |
| `NEXT_PUBLIC_APP_DESCRIPTION` | `Autonomous Agent Orchestration Platform` | Subtitle |
| `DEFAULT_ISSUE_PREFIX` | `DISPATCH` | Issue identifier prefix (e.g. DISPATCH-123) |
| `DEFAULT_WORKSPACE_NAME` | `My Workspace` | Default workspace name |
| `NEXT_PUBLIC_BRAND_BRAILLE` | (built-in art) | Override terminal braille art via env var |
The terminal UI displays braille Unicode art as a brand mark. Generate it from any image:
```
npm run generate-logo -- path/to/your-logo.png

# Options
npm run generate-logo -- logo.png --width 30       # Wider output
npm run generate-logo -- logo.png --threshold 100  # Adjust brightness cutoff
npm run generate-logo -- logo.png --invert         # Light dots on dark
npm run generate-logo -- logo.png --preview        # Preview without writing file
```

This writes `lib/brand/logo-braille.ts`, which the terminal components import. Alternatively, set `NEXT_PUBLIC_BRAND_BRAILLE` as an environment variable for Docker/k8s deployments.
Dispatch exposes an MCP server at `/api/mcp`. Connect it to Claude Code:

```
claude mcp add dispatch --transport streamable-http https://your-dispatch-instance.com/api/mcp
```

Available tools: `search_issues`, `list_issues`, `get_issue`, `create_issue`, `update_issue`, `dispatch_investigation`, `dispatch_implementation`, `post_plan`, `report_blocker`, and more.
Give your customers a branded, read-only portal to track the issues that matter to them. Each customer gets a unique portal link -- no login required, authenticated via a secure token in the URL. Customers see a kanban board of their issues and can drill into issue details with activity timelines.
- Client visibility -- Share real-time issue status with external stakeholders without giving them access to your internal tools.
- Branded experience -- Each customer's portal uses their brand color for a white-labeled feel.
- Zero friction -- Portal links work without any signup or login. Share via email, Slack, or embed in your support workflow.
- Health tracking -- The admin-side customer dashboard gives you a health score, stale issue alerts, and activity metrics per customer.
- Navigate to `/customers` in the sidebar
- Click New Customer
- Fill in:
  - Name (required) -- The customer or company name. A URL slug is auto-generated (e.g. "Acme Corp" becomes `acme-corp`).
  - Brand Color -- Hex color used throughout their portal (default: `#f9db00`).
  - Logo URL -- Optional logo shown in the portal header.
  - Slack Channel ID -- Optional. When set, Dispatch can auto-associate issues from that Slack channel with this customer.
A secure portal token is generated automatically. You can regenerate it at any time from the customer detail page.
Customers see issues where they are added as a watcher. To link issues to a customer:
- From the issue detail page -- Add the customer as a watcher in the issue sidebar.
- Via the API -- `POST /api/issues/:id` with the customer ID in the watcher list.
- Via MCP -- Use the `update_issue` tool to add watchers.
- Automatically via Slack -- When a customer has a `slackChannelId` configured, issues created from that channel are automatically linked.
Click the Portal button on any customer card to copy their portal URL to your clipboard. The URL format is:
```
https://your-dispatch-instance.com/portal/{customer-slug}?token={portal-token}
```
Share this link with your customer. They can:
- View all their issues organized by status in a kanban board
- Click into any issue to see the full description, activity timeline, and metadata
- Toggle "Show closed" to see resolved issues
The portal is fully read-only -- customers cannot modify issues, only view them.
The admin-side customer detail page (/customers/:id) provides:
- Health score (0-100) -- Calculated based on critical issues, stale issues, unresponded items, and recent activity. Scores above 80 are good, 50-80 need attention, below 50 are at risk.
- Metrics cards -- Total issues, critical issues, average age, and last activity timestamp.
- Filterable issues table -- Filter by All, Critical, Stale, or Unresponded.
- Activity feed -- Recent activity across all of the customer's issues.
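The health-score inputs listed above could be combined in many ways; here is a hypothetical scoring function with invented weights, just to show the shape -- the actual formula is Dispatch's own and may differ:

```typescript
// Hypothetical health score: start at 100, subtract penalties for the
// documented risk signals, and cap the inactivity penalty. All weights
// here are made up for illustration.
interface CustomerStats {
  criticalIssues: number;
  staleIssues: number;        // issues with no recent activity
  unrespondedIssues: number;
  daysSinceLastActivity: number;
}

function healthScore(s: CustomerStats): number {
  let score = 100;
  score -= s.criticalIssues * 15;
  score -= s.staleIssues * 10;
  score -= s.unrespondedIssues * 10;
  score -= Math.min(s.daysSinceLastActivity, 30); // cap the inactivity penalty
  return Math.max(0, Math.min(100, score));       // clamp to 0-100
}
```

Under this sketch, a customer with two critical issues, one stale issue, and ten quiet days lands at 50 -- squarely in the "needs attention" band.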
| Endpoint | Method | Description |
|---|---|---|
| `/api/customers` | GET | List all customers with issue counts |
| `/api/customers` | POST | Create a new customer |
| `/api/customers/:id` | GET | Get customer details |
| `/api/customers/:id` | PATCH | Update customer (name, slug, brand color) |
| `/api/customers/:id` | DELETE | Delete customer (unlinks issues first) |
| `/api/customers/:id?action=regenerate-token` | POST | Regenerate portal token |
| `/api/customers/:id/stats` | GET | Get customer health metrics |
| `/api/portal/:slug?token=` | GET | Public portal data (issues by status) |
| `/api/portal/:slug/issue/:identifier?token=` | GET | Public issue detail |
Dispatch can close the loop on production errors: an error happens, an agent investigates it, opens a fix PR, and verifies the fix stuck. No human needed for the routine stuff.
This is where your CLAUDE.md files really matter. When an error-fix agent checks out your codebase, the first thing it reads is your CLAUDE.md. The better your project context, the better the agent's fixes. An agent with a good CLAUDE.md can trace an error from a stack trace to root cause to fix in minutes.
```
Your app (structured JSON logs to stdout)
        |
        v
Log collector (Promtail, Fluentd, Vector, etc.)
        |
        v
Loki (log aggregation + storage)
        |
        v
Grafana (alert rules detect errors)
        |
        v
Dispatch (/api/loki/webhook)
  - Fingerprint + deduplicate
  - Safety gates (budget, cooldown, concurrency)
        |
        v
GitHub Actions (Claude Code agent)
  - Reads your CLAUDE.md for project context
  - Investigates root cause from error context
  - Opens fix PR if changes needed
        |
        v
Fix verification (48h monitoring)
  - Same error reappears? Mark fix as failed
  - No recurrence after 48h? Auto-close issue
```
You need three things:
- Structured JSON logs -- Your app writes errors to stdout in a format Dispatch can parse (see structured logging below)
- Loki + Grafana -- Log aggregation with alerting. This is the supported pipeline. See alternatives if you use something else.
- CLAUDE.md in your repo -- Agents need project context to fix errors effectively. See preparing your codebase for agents.
Your app must write structured JSON to stdout. Dispatch fingerprints errors using three fields -- `source`, `feature`, and `msg` -- so every error log needs these.
```json
{
  "level": "error",
  "msg": "Failed to process payment: card declined",
  "source": "billing",
  "feature": "charge-customer",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "error": "Stripe error: card_declined",
  "stack": "Error: card_declined\n    at ChargeService.charge (billing.ts:142)\n    at handleCheckout (checkout.ts:58)",
  "userId": "user_abc123",
  "amount": 2500
}
```

| Field | Required | Purpose |
|---|---|---|
| `level` | Yes | Severity. Grafana filters on `level="error"`. |
| `msg` | Yes | Human-readable message. Used for fingerprinting and shown to the agent. |
| `source` | Yes | Module or service area (e.g. "billing", "auth"). Used for fingerprinting. |
| `feature` | Yes | Specific operation (e.g. "charge-customer"). Used for fingerprinting. |
| `error` / `stack` | Recommended | Stack trace helps the agent find the root cause faster. |
| Entity IDs | Recommended | `userId`, `orgId`, etc. help the agent understand scope. |
Example loggers (TypeScript, Python, Go)
```typescript
// Node.js / TypeScript
function logError(msg: string, context: Record<string, unknown>, error?: Error) {
  console.log(JSON.stringify({
    level: "error",
    msg,
    ...context,
    ...(error && { error: error.message, stack: error.stack }),
    timestamp: new Date().toISOString(),
  }));
}

// Usage
try {
  await chargeCustomer(customerId, amount);
} catch (err) {
  logError("Failed to process payment", {
    source: "billing",
    feature: "charge-customer",
    customerId, amount,
  }, err);
  throw err;
}
```

```python
# Python
import json, sys, traceback
from datetime import datetime

def log_error(msg, source, feature, error=None, **context):
    entry = {"level": "error", "msg": msg, "source": source, "feature": feature,
             "timestamp": datetime.utcnow().isoformat() + "Z", **context}
    if error:
        entry["error"] = str(error)
        entry["stack"] = traceback.format_exc()
    print(json.dumps(entry), file=sys.stderr)

# Usage
try:
    process_webhook(payload)
except Exception as e:
    log_error("Webhook processing failed", source="stripe-webhook",
              feature="event-routing", error=e, event_type=payload.get("type"))
    raise
```

```go
// Go -- slog with JSON handler
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
logger.Error("Failed to process payment",
    "source", "billing",
    "feature", "charge-customer",
    "customerId", customerId,
    "amount", amount,
    "error", err.Error(),
)
```

You need Loki for log storage and Grafana for alerting. If you already have these, skip to Step 3.
Docker Compose (simplest for getting started):
```yaml
# docker-compose.monitoring.yml
services:
  loki:
    image: grafana/loki:3.0.0
    ports: ["3100:3100"]
    command: -config.file=/etc/loki/local-config.yaml
  grafana:
    image: grafana/grafana:11.0.0
    ports: ["3000:3000"]
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana
  promtail:
    image: grafana/promtail:3.0.0
    volumes:
      - /var/log:/var/log:ro
    command: -config.file=/etc/promtail/config.yml
volumes:
  grafana-data:
```

Kubernetes: Use the official Grafana Helm charts (`grafana/loki`, `grafana/promtail`, `grafana/grafana`). See the `monitoring/` directory in this repo for a production example with GCS storage, Google OAuth, and managed certificates.
Key concept: Your app writes JSON to stdout. A collector (Promtail, Fluentd, Vector) ships those logs to Loki. Grafana queries Loki and fires alerts.
You need two things in Grafana: an alert rule that detects errors, and a contact point that sends them to Dispatch.
Contact point -- sends alerts to Dispatch's webhook:
- In Grafana, go to Alerting > Contact points > New contact point
- Type: Webhook
- URL: `https://your-dispatch-instance.com/api/loki/webhook`
- HTTP Method: POST
- Authorization Header: `Bearer <your DISPATCH_API_KEY>`
Alert rule -- detects new error categories:
- Go to Alerting > Alert rules > New alert rule
- Query (Loki data source): `count by (source, feature, msg) (count_over_time({app="your-app", level="error"} | json [15m]))`
- Condition: when query result is above 3
- Evaluation: every 1 minute, for 0 seconds (fire immediately)
- Labels: `severity = warning`
- Annotations:
  - Summary: `{{ $labels.source }}/{{ $labels.feature }}: {{ $labels.msg }}`
  - Description: `{{ $labels.msg }}`
Notification policy:
| Setting | Value | Why |
|---|---|---|
| Group by | `alertname, source, feature` | Deduplicate by error type |
| Group wait | 5 minutes | Let events accumulate before first alert |
| Group interval | 1 hour | Don't re-send same group within 1h |
| Repeat interval | 24 hours | Don't repeat same alert within 24h |
Full Grafana provisioning YAML (for Helm/IaC)
If you manage Grafana via Helm or infrastructure-as-code, here's the full alerting config:
```yaml
alerting:
  rules:
    - orgId: 1
      name: Dispatch Error Alerts
      folder: Dispatch
      interval: 1m
      rules:
        - uid: new-error-category
          title: New Error Category
          condition: C
          data:
            - refId: A
              relativeTimeRange: { from: 900, to: 0 }
              datasourceUid: loki
              model:
                expr: |
                  count by (source, feature, msg) (
                    count_over_time({app="your-app", level="error"} | json [15m])
                  )
                queryType: range
            - refId: C
              relativeTimeRange: { from: 900, to: 0 }
              datasourceUid: __expr__
              model:
                type: threshold
                expression: A
                conditions:
                  - evaluator: { type: gt, params: [3] }
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.source }}/{{ $labels.feature }}: {{ $labels.msg }}"
            description: "{{ $labels.msg }}"
            error_count: "{{ $values.A }}"

  contactPoints:
    - orgId: 1
      name: dispatch-webhook
      receivers:
        - uid: dispatch-loki
          type: webhook
          settings:
            url: "https://your-dispatch-instance.com/api/loki/webhook"
            httpMethod: POST
            authorization_scheme: Bearer
            authorization_credentials: "your-dispatch-api-key"

  policies:
    - orgId: 1
      receiver: dispatch-webhook
      group_by: ["alertname", "source", "feature"]
      group_wait: 5m
      group_interval: 1h
      repeat_interval: 24h
```

These env vars prevent runaway agent spending. Start conservative and increase as you build confidence:
```
# Recommended starting values
MAX_CONCURRENT_SPAWNS_ERROR=3   # Max 3 agents investigating errors at once
DAILY_SPAWN_BUDGET_ERROR=10     # Max 10 auto-spawns per day
SPAWN_COOLDOWN_HOURS=24         # Don't re-investigate same error within 24h
MIN_EVENT_COUNT_TO_SPAWN=3      # Only investigate after 3+ occurrences
SPAWN_DELAY_MINUTES=5           # Wait 5 min before spawning (avoids transient errors)
```

The Fleet Status panel in the sidebar shows real-time budget usage, active agents, and daily outcomes.
- Trigger an error in your application
- Check Grafana -- the error should appear in Loki within seconds
- Wait for the alert rule to fire (check Alerting > Alert rules for firing state)
- Check Dispatch -- a new issue should appear on the board
- If `MIN_EVENT_COUNT_TO_SPAWN` is met, an agent should auto-dispatch
When Dispatch receives an error alert, it runs through these checks before spawning an agent:
```
Error alert received
        |
        v
Kill switch enabled? ----yes----> Block (SPAWN_KILL_SWITCH=true)
        |no
        v
Same fingerprint in cooldown? ---yes----> Block (already investigated recently)
        |no
        v
Enough occurrences? -----no-----> Block (below MIN_EVENT_COUNT_TO_SPAWN)
        |yes
        v
At concurrency limit? ---yes----> Block (MAX_CONCURRENT_SPAWNS_ERROR reached)
        |no
        v
Daily budget exhausted? --yes---> Block (DAILY_SPAWN_BUDGET_ERROR reached)
        |no
        v
Dispatch agent
```
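The gate sequence can be sketched as a single check function. The env-var names in the comments come from this README; the function itself and its field names are illustrative, not Dispatch's actual handler:

```typescript
// Run the documented safety gates in order; return the first blocking reason,
// or null if the agent may spawn. Limits mirror the recommended starting values.
interface SpawnState {
  killSwitch: boolean;          // SPAWN_KILL_SWITCH
  hoursSinceLastSpawn: number;  // for this error fingerprint
  eventCount: number;
  activeSpawns: number;
  spawnsToday: number;
}

const limits = {
  cooldownHours: 24, // SPAWN_COOLDOWN_HOURS
  minEvents: 3,      // MIN_EVENT_COUNT_TO_SPAWN
  maxConcurrent: 3,  // MAX_CONCURRENT_SPAWNS_ERROR
  dailyBudget: 10,   // DAILY_SPAWN_BUDGET_ERROR
};

function spawnBlockReason(s: SpawnState): string | null {
  if (s.killSwitch) return "kill switch enabled";
  if (s.hoursSinceLastSpawn < limits.cooldownHours) return "fingerprint in cooldown";
  if (s.eventCount < limits.minEvents) return "below minimum event count";
  if (s.activeSpawns >= limits.maxConcurrent) return "at concurrency limit";
  if (s.spawnsToday >= limits.dailyBudget) return "daily budget exhausted";
  return null; // all gates passed -- dispatch the agent
}
```

The ordering matters: the cheap global checks (kill switch, cooldown) run before the per-day counters, so a disabled fleet never touches the budget.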
After a fix PR is merged, Dispatch monitors the error fingerprint for 48 hours:
- Error reappears -- The fix is marked as failed. The issue reopens for another attempt.
- No recurrence for 48h -- The fix is verified. The issue auto-closes.
This works because the Loki webhook handler checks incoming errors against merged fix PRs. If the same `source|feature|msg` fingerprint fires again, Dispatch knows the fix didn't stick.
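A minimal sketch of that fingerprint check, built from the three documented fields (the helper names are illustrative):

```typescript
// Build the error fingerprint from the three documented fields and compare
// an incoming error against a merged fix's fingerprint.
interface ErrorLog { source: string; feature: string; msg: string }

function fingerprint(e: ErrorLog): string {
  return `${e.source}|${e.feature}|${e.msg}`;
}

// If the same fingerprint fires after a fix PR merged, the fix didn't stick.
function fixFailed(incoming: ErrorLog, fixedFingerprint: string): boolean {
  return fingerprint(incoming) === fixedFingerprint;
}
```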
The built-in pipeline is designed for Loki + Grafana because it gives Dispatch the richest error context (full structured logs with source/feature/msg fields for fingerprinting).
If you use Sentry, Dispatch has a basic integration:
- Create an Internal Integration in Sentry
- Set webhook URL: `https://your-dispatch-instance.com/api/sentry/webhook`
- Enable events: Issue Created, Issue Resolved
Sentry issues will appear on your Dispatch board, but the autonomous fix loop (auto-spawn, fingerprint dedup, fix verification) only works with the Loki pipeline. Sentry issues must be manually dispatched to agents.
For other error trackers (Datadog, Honeybadger, Rollbar, etc.), you can build a custom webhook handler that transforms their alert payloads into Dispatch issues via the API or MCP tools. The key is extracting source, feature, and msg from whatever format your tracker uses.
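A hedged sketch of such a transform -- the input field names for the hypothetical tracker and the issue shape below are assumptions, not a real tracker's payload or Dispatch's schema:

```typescript
// Map a generic error-tracker alert into the source/feature/msg shape that
// Dispatch fingerprints on. Adapt the field mapping to whatever your
// tracker actually sends.
interface TrackerAlert {
  service?: string;   // e.g. Datadog "service" tag
  operation?: string; // e.g. a transaction or span name
  title: string;
  stack?: string;
}

interface DispatchIssueInput {
  source: string;
  feature: string;
  msg: string;
  description: string;
}

function toDispatchIssue(alert: TrackerAlert): DispatchIssueInput {
  return {
    source: alert.service ?? "unknown",
    feature: alert.operation ?? "unknown",
    msg: alert.title,
    description: alert.stack ?? alert.title,
  };
}
```

The resulting object can then be posted through the issues API or the `create_issue` MCP tool.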
```
docker build -t dispatch .
docker run -p 3002:3002 --env-file .env dispatch
```

Or with Docker Compose:

```
docker-compose up
```

Example manifests are in `charts/`. See `charts/README.md` for instructions.
```
npm run dev            # Start dev server with hot reload
npm run build          # Production build
npm run test           # Run tests
npm run lint           # Lint check
npm run generate-logo  # Generate braille art from an image
```

502 when clicking Investigate / dispatching agents
Your `GITHUB_TOKEN` is missing the `workflow` scope. This is the most common setup issue.
```
# Check your current scopes
gh auth status
# Look for 'workflow' in the Token scopes list

# Add the workflow scope
gh auth refresh -h github.com -s workflow

# Get the new token for your .env
gh auth token
```

Or create a new classic PAT with `repo` + `workflow` scopes.
Agent runs but doesn't report back to Dispatch
The GitHub Actions workflow needs DISPATCH_API_URL and DISPATCH_API_KEY secrets set on your target repo. Without them, the agent still runs and can open PRs, but it can't stream terminal output or post activity updates back to the Dispatch UI.
GITHUB_REPO_OWNER not configured error
Set GITHUB_REPO=owner/repo in your .env (e.g. GITHUB_REPO=acme/backend). The owner and repo name are parsed from this value.
Database migration fails with "extension vector does not exist"
Install pgvector before running migrations:
```
psql dispatch -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

If using Docker Compose, the included `db` service has pgvector pre-installed.
Port 3002 already in use / dev server won't start
```
# Find and kill the stale process
lsof -ti :3002 | xargs kill -9

# Remove stale lock file if present
rm -f .next/dev/lock

# Restart
npm run dev
```

See CONTRIBUTING.md for development setup, code style, and PR process.
If you're using Claude Code to contribute, Dispatch ships with a CLAUDE.md that gives agents full project context.



