Dispatch

Tell it what to build. Agents investigate, plan, implement, and ship.

Quick Start · Screenshots · Docs · MCP · Contributing · License


Dispatch sends Claude Code agents to investigate, plan, implement, test, and research across your codebase -- all orchestrated through a terminal UI with live streaming. Every mission runs as a real Claude Code session in GitHub Actions, with structured activity tracking.

What it looks like

Terminal -- Describe what you want done. The orchestrator creates issues, picks a workflow, and dispatches agents.

Dispatch terminal - command center

Orchestrator response -- The agent searches existing missions, scopes the work, and confirms before dispatching.

Dispatch terminal - orchestrator planning a mission

Board -- Track all missions across your pipeline. Issues flow from waiting, to development, to PR, to verification.

Dispatch board - kanban view of missions

GitHub Actions -- Each agent runs as a real Claude Code session. Watch it read files, run commands, and write code.

GitHub Actions - Claude Code agent investigating an issue

What it does

  • Terminal-first orchestration -- Type what you want done in natural language. The orchestrator creates issues, selects workflows, and dispatches agents.
  • 5 workflow types -- Auto-investigate (from error alerts), PRD investigate (exploration + plan), PRD implement (code changes + PR), QA test, and deep research.
  • Plan-review-implement loop -- Agents investigate first, post a structured plan for human review, then implement on approval.
  • Live terminal streaming -- Watch agent output in real-time via SSE as Claude Code reads files, runs commands, and writes code.
  • Customer portal -- Give stakeholders read-only portal links to track their issues and agent activity.
  • MCP server -- Integrate with Claude Code, Cursor, or any MCP-compatible client for bidirectional issue management.
  • Budget & safety controls -- Daily spawn limits, per-issue concurrency, cooldowns, and a kill switch.

Architecture

Terminal UI (Next.js)
    |
    v
Orchestrator (Claude API + tools)
    |
    v
GitHub Actions (Claude Code agents)
    |
    v
Your Codebase (PRs targeting your default branch)
    |
    v
Dispatch API (activity, plans, terminal streaming)

Stack: Next.js 15, PostgreSQL + pgvector, WebSocket (terminal streaming), GitHub Actions (agent runtime), Claude API (orchestrator + agents).

Quick start

1. Clone and install

git clone https://github.com/chipp-ai/dispatch.git
cd dispatch
npm install

2. Start the database

docker-compose up -d db

Or use an existing PostgreSQL instance -- it needs the pgvector extension. If using your own Postgres:

createdb dispatch
psql dispatch -c "CREATE EXTENSION IF NOT EXISTS vector;"

3. Run the init migration

psql postgresql://postgres:postgres@localhost:5432/dispatch -f scripts/migrations/001-init.sql

This creates all 21 tables, enums, and indexes in a single step.

4. Configure environment

cp .env.example .env

Edit .env with your values. Minimum required:

PG_DATABASE_URL=postgresql://postgres:postgres@localhost:5432/dispatch
ANTHROPIC_API_KEY=sk-ant-...    # For orchestrator + agents
DISPATCH_PASSWORD=your-password  # Web UI login
GITHUB_REPO=your-org/your-repo  # Target codebase
GITHUB_TOKEN=ghp_...            # For dispatching agents (see note below)

GitHub token: Must have repo and workflow scopes. Two options:

  • Classic PAT (recommended): Create at github.com/settings/tokens/new with repo and workflow scopes selected.
  • gh auth token: Run gh auth refresh -h github.com -s workflow to add the required workflow scope to your existing token, then use gh auth token to get the value.

Tokens without workflow scope will fail with a 502 Server Error when dispatching agents.

5. Personalize your instance (optional)

Make it yours:

# Set your project name and issue prefix in .env
NEXT_PUBLIC_APP_NAME=MyProject
DEFAULT_ISSUE_PREFIX=ENG

# Generate braille art from your logo (displayed in the terminal)
npm run generate-logo -- path/to/your-logo.png

6. Start the dev server

npm run dev

Open http://localhost:3002 and log in with your DISPATCH_PASSWORD.

Setting up GitHub Actions agents

Dispatch spawns agents by triggering GitHub Actions workflow_dispatch events on your target repository. The agent workflows ship with Dispatch and need to be copied into your target repo.

1. Copy workflows to your target repo

cp -r .github/workflows/{auto-investigate,prd-investigate,prd-implement,qa-test,deep-research}.yml ../your-repo/.github/workflows/

2. Add secrets to your target repo

Go to your target repo's Settings > Secrets and variables > Actions and add:

| Secret | Required | Description |
| --- | --- | --- |
| ANTHROPIC_API_KEY | Yes | Claude API key. Agents use this to run Claude Code sessions during workflows. |
| DISPATCH_API_URL | For callbacks | Your Dispatch instance URL (e.g. https://dispatch.yoursite.com). Agents call back to report activity, post plans, and stream terminal output. Without this, agents still run but can't report back. |
| DISPATCH_API_KEY | For callbacks | Must match the DISPATCH_API_KEY in your Dispatch .env. Used to authenticate agent callbacks. |

Tip: For initial testing, you only need ANTHROPIC_API_KEY. The agent will run and investigate your codebase. Add DISPATCH_API_URL and DISPATCH_API_KEY when you're ready for live terminal streaming and activity tracking.
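To make the callback flow concrete, here is a sketch of how the two secrets combine into an authenticated request. The endpoint path /api/agent/activity is a hypothetical illustration, not a documented route:

```typescript
// Hypothetical sketch: how DISPATCH_API_URL and DISPATCH_API_KEY combine
// into an authenticated agent callback. The path is illustrative only.
function buildCallbackRequest(
  apiUrl: string,
  apiKey: string,
  body: Record<string, unknown>
) {
  return {
    url: `${apiUrl}/api/agent/activity`, // hypothetical path
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Must match the DISPATCH_API_KEY secret on the target repo
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  };
}
```

The same Bearer-token pattern appears later in the Grafana contact-point setup.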

3. Configure your Dispatch .env

These environment variables control how Dispatch dispatches workflows:

| Variable | Required | Description |
| --- | --- | --- |
| GITHUB_TOKEN | Yes | Must have repo and workflow scopes. Use a classic PAT, or run gh auth refresh -h github.com -s workflow then gh auth token. Without workflow scope, dispatches fail with 502. |
| GITHUB_REPO | Yes | Target repo in owner/repo format (e.g. acme/backend). |
| GITHUB_REF | No | Branch for workflow dispatch (default: main). Agents check out this branch when they run. Set this to your default branch if it's not main. |
| NEXT_PUBLIC_GITHUB_REPO | No | Same owner/repo value. Enables "View on GitHub" links in the UI for workflow runs. |

4. Branch configuration

All workflows accept an optional ref input that controls which branch agents check out. The precedence is:

  1. The ref input passed per-dispatch (set automatically from GITHUB_REF)
  2. github.ref_name -- the branch the workflow was dispatched on
  3. main, if neither is set

If your default branch is main, no extra configuration is needed. If it's something else (e.g. develop), set GITHUB_REF=develop in your .env.
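The precedence above reduces to a simple fallback chain; a sketch (function name is illustrative):

```typescript
// Documented precedence: explicit ref input, then the branch the workflow
// was dispatched on (github.ref_name), then "main".
function resolveCheckoutRef(
  refInput?: string,        // per-dispatch input, set from GITHUB_REF
  dispatchedBranch?: string // github.ref_name
): string {
  return refInput || dispatchedBranch || "main";
}
```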

Preparing your codebase for agents

Dispatch agents are Claude Code sessions that check out your codebase and work autonomously. The single most impactful thing you can do is add a CLAUDE.md file to your repository root. This is the context file that Claude Code reads at the start of every session -- it's how agents understand your project.

CLAUDE.md (required)

Create a CLAUDE.md in your repository root. This file should give an agent everything it needs to navigate, understand, and safely modify your codebase.

What to include:

  • Project overview -- What the project does and its core architecture.
  • Tech stack -- Language, framework, database, key libraries.
  • Project structure -- Directory layout with one-line descriptions.
  • Development commands -- How to run dev server, tests, lint, build. Be specific (e.g. npm run test -- --watch=false, not just "run the tests").
  • Key patterns -- How requests flow, how errors are handled, how auth works, how DB queries are structured, how tests are written.
  • Critical rules -- Things agents must never do (e.g. "never modify migration files that have already run", "always use parameterized queries").
  • Common pitfalls -- Known gotchas, environment quirks, things that look wrong but are intentional.

Why this matters: Without a CLAUDE.md, agents can still read your code, but they'll spend time figuring out basics that you could tell them upfront. A good CLAUDE.md means agents spend more time solving the actual problem and less time orienting themselves.
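A minimal skeleton covering the sections above (project name, commands, and rules are placeholders for your own):

```markdown
# MyProject

Web app that does X. Next.js frontend, PostgreSQL database.

## Development commands
- Dev server: `npm run dev`
- Tests: `npm run test -- --watch=false`
- Lint: `npm run lint`

## Structure
- `app/` -- routes and pages
- `lib/` -- shared business logic
- `scripts/migrations/` -- SQL migrations

## Critical rules
- Never modify migration files that have already run.
- Always use parameterized queries.
- Every feature needs tests and types before it is "done".
```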

Subdirectory CLAUDE.md files

For larger codebases, you can add CLAUDE.md files to subdirectories. Claude Code automatically reads these when working in that directory. Useful for:

  • db/CLAUDE.md -- Migration conventions, schema rules, query patterns
  • api/CLAUDE.md -- Route patterns, auth middleware, request validation
  • tests/CLAUDE.md -- Testing conventions, fixture setup, mock patterns

Tips for effective agent context

  1. Be specific about commands. Instead of "run the tests", say npm run test -- --watch=false. Agents execute exactly what you write.

  2. Document your git workflow. Tell agents which branch to target for PRs, whether you use conventional commits, and any CI checks that must pass.

  3. List your non-obvious environment requirements. If your tests need a running database, Redis, or specific env vars, say so. Agents will try to set these up.

  4. Explain what "done" looks like. If every feature needs tests, types, and a migration, say that explicitly. Agents follow the standards you set.

  5. Include error handling patterns. Show agents how you want errors logged and handled. This prevents agents from introducing inconsistent error handling.

  6. Keep it updated. A stale CLAUDE.md is worse than none. When you change conventions, update the file. It's the source of truth for every agent session.

Setting up webhooks

GitHub webhook (PR reconciliation)

  1. Go to your repo's Settings > Webhooks > Add webhook
  2. Payload URL: https://your-dispatch-instance.com/api/github/webhook
  3. Content type: application/json
  4. Secret: Same as GITHUB_WEBHOOK_SECRET in your .env
  5. Events: Pull requests

Sentry webhook (optional)

See Using Sentry or other error trackers in the autonomous error remediation section.

Configuration reference

See .env.example for all available configuration options with descriptions.

Branding

Dispatch is fully white-labelable via environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| NEXT_PUBLIC_APP_NAME | Dispatch | Display name in UI |
| NEXT_PUBLIC_APP_DESCRIPTION | Autonomous Agent Orchestration Platform | Subtitle |
| DEFAULT_ISSUE_PREFIX | DISPATCH | Issue identifier prefix (e.g. DISPATCH-123) |
| DEFAULT_WORKSPACE_NAME | My Workspace | Default workspace name |
| NEXT_PUBLIC_BRAND_BRAILLE | (built-in art) | Override terminal braille art via env var |

Logo customization

The terminal UI displays braille Unicode art as a brand mark. Generate it from any image:

npm run generate-logo -- path/to/your-logo.png

# Options
npm run generate-logo -- logo.png --width 30      # Wider output
npm run generate-logo -- logo.png --threshold 100  # Adjust brightness cutoff
npm run generate-logo -- logo.png --invert         # Light dots on dark
npm run generate-logo -- logo.png --preview        # Preview without writing file

This writes lib/brand/logo-braille.ts which the terminal components import. Alternatively, set NEXT_PUBLIC_BRAND_BRAILLE as an environment variable for Docker/k8s deployments.

MCP integration

Dispatch exposes an MCP server at /api/mcp. Connect it to Claude Code:

claude mcp add dispatch --transport streamable-http https://your-dispatch-instance.com/api/mcp

Available tools: search_issues, list_issues, get_issue, create_issue, update_issue, dispatch_investigation, dispatch_implementation, post_plan, report_blocker, and more.

Customer portal

Give your customers a branded, read-only portal to track the issues that matter to them. Each customer gets a unique portal link -- no login required, authenticated via a secure token in the URL. Customers see a kanban board of their issues and can drill into issue details with activity timelines.

Why use it

  • Client visibility -- Share real-time issue status with external stakeholders without giving them access to your internal tools.
  • Branded experience -- Each customer's portal uses their brand color for a white-labeled feel.
  • Zero friction -- Portal links work without any signup or login. Share via email, Slack, or embed in your support workflow.
  • Health tracking -- The admin-side customer dashboard gives you a health score, stale issue alerts, and activity metrics per customer.

Creating customers

  1. Navigate to /customers in the sidebar
  2. Click New Customer
  3. Fill in:
    • Name (required) -- The customer or company name. A URL slug is auto-generated (e.g. "Acme Corp" becomes acme-corp).
    • Brand Color -- Hex color used throughout their portal (default: #f9db00).
    • Logo URL -- Optional logo shown in the portal header.
    • Slack Channel ID -- Optional. When set, Dispatch can auto-associate issues from that Slack channel with this customer.

A secure portal token is generated automatically. You can regenerate it at any time from the customer detail page.
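The documented slug behavior ("Acme Corp" becomes acme-corp) can be sketched like this; the actual implementation may differ:

```typescript
// Illustrative slugify matching the documented example:
// "Acme Corp" -> "acme-corp"
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become one hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}
```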

Linking issues to customers

Customers see issues where they are added as a watcher. To link issues to a customer:

  • From the issue detail page -- Add the customer as a watcher in the issue sidebar.
  • Via the API -- POST /api/issues/:id with the customer ID in the watcher list.
  • Via MCP -- Use the update_issue tool to add watchers.
  • Automatically via Slack -- When a customer has a slackChannelId configured, issues created from that channel are automatically linked.

Sharing portal links

Click the Portal button on any customer card to copy their portal URL to your clipboard. The URL format is:

https://your-dispatch-instance.com/portal/{customer-slug}?token={portal-token}

Share this link with your customer. They can:

  • View all their issues organized by status in a kanban board
  • Click into any issue to see the full description, activity timeline, and metadata
  • Toggle "Show closed" to see resolved issues

The portal is fully read-only -- customers cannot modify issues, only view them.

Customer health dashboard

The admin-side customer detail page (/customers/:id) provides:

  • Health score (0-100) -- Calculated based on critical issues, stale issues, unresponded items, and recent activity. Scores above 80 are good, 50-80 need attention, below 50 are at risk.
  • Metrics cards -- Total issues, critical issues, average age, and last activity timestamp.
  • Filterable issues table -- Filter by All, Critical, Stale, or Unresponded.
  • Activity feed -- Recent activity across all of the customer's issues.
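The scoring formula itself isn't documented here, but the documented thresholds map to status buckets like this:

```typescript
// Only the documented thresholds are encoded here:
// above 80 good, 50-80 needs attention, below 50 at risk.
type HealthStatus = "good" | "needs-attention" | "at-risk";

function healthStatus(score: number): HealthStatus {
  if (score > 80) return "good";
  if (score >= 50) return "needs-attention";
  return "at-risk";
}
```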

Customer API

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/customers | GET | List all customers with issue counts |
| /api/customers | POST | Create a new customer |
| /api/customers/:id | GET | Get customer details |
| /api/customers/:id | PATCH | Update customer (name, slug, brand color) |
| /api/customers/:id | DELETE | Delete customer (unlinks issues first) |
| /api/customers/:id?action=regenerate-token | POST | Regenerate portal token |
| /api/customers/:id/stats | GET | Get customer health metrics |
| /api/portal/:slug?token= | GET | Public portal data (issues by status) |
| /api/portal/:slug/issue/:identifier?token= | GET | Public issue detail |

Autonomous error remediation

Dispatch can close the loop on production errors: an error happens, an agent investigates it, opens a fix PR, and verifies the fix stuck. No human needed for the routine stuff.

This is where your CLAUDE.md files really matter. When an error-fix agent checks out your codebase, the first thing it reads is your CLAUDE.md. The better your project context, the better the agent's fixes. An agent with a good CLAUDE.md can trace an error from a stack trace to root cause to fix in minutes.

The loop

Your app (structured JSON logs to stdout)
         |
         v
    Log collector (Promtail, Fluentd, Vector, etc.)
         |
         v
    Loki (log aggregation + storage)
         |
         v
    Grafana (alert rules detect errors)
         |
         v
    Dispatch (/api/loki/webhook)
    - Fingerprint + deduplicate
    - Safety gates (budget, cooldown, concurrency)
         |
         v
    GitHub Actions (Claude Code agent)
    - Reads your CLAUDE.md for project context
    - Investigates root cause from error context
    - Opens fix PR if changes needed
         |
         v
    Fix verification (48h monitoring)
    - Same error reappears? Mark fix as failed
    - No recurrence after 48h? Auto-close issue

Prerequisites

You need three things:

  1. Structured JSON logs -- Your app writes errors to stdout in a format Dispatch can parse (see structured logging below)
  2. Loki + Grafana -- Log aggregation with alerting. This is the supported pipeline. See alternatives if you use something else.
  3. CLAUDE.md in your repo -- Agents need project context to fix errors effectively. See preparing your codebase for agents.

Step 1: Structured logging

Your app must write structured JSON to stdout. Dispatch fingerprints errors using three fields -- source, feature, and msg -- so every error log needs these.

{
  "level": "error",
  "msg": "Failed to process payment: card declined",
  "source": "billing",
  "feature": "charge-customer",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "error": "Stripe error: card_declined",
  "stack": "Error: card_declined\n    at ChargeService.charge (billing.ts:142)\n    at handleCheckout (checkout.ts:58)",
  "userId": "user_abc123",
  "amount": 2500
}

| Field | Required | Purpose |
| --- | --- | --- |
| level | Yes | Severity. Grafana filters on level="error". |
| msg | Yes | Human-readable message. Used for fingerprinting and shown to the agent. |
| source | Yes | Module or service area (e.g. "billing", "auth"). Used for fingerprinting. |
| feature | Yes | Specific operation (e.g. "charge-customer"). Used for fingerprinting. |
| error / stack | Recommended | Stack trace helps the agent find the root cause faster. |
| Entity IDs | Recommended | userId, orgId, etc. help the agent understand scope. |

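Conceptually, deduplication keys on the three required fields; the fix-verification section later refers to this as the source|feature|msg fingerprint. A sketch of the idea, not Dispatch's actual implementation:

```typescript
interface ErrorLog {
  level: string;
  msg: string;
  source: string;
  feature: string;
  [key: string]: unknown; // timestamp, stack, entity IDs, etc.
}

// Two errors with the same source, feature, and msg deduplicate
// to the same fingerprint.
function fingerprint(log: ErrorLog): string {
  return `${log.source}|${log.feature}|${log.msg}`;
}
```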
Example loggers (TypeScript, Python, Go)
// Node.js / TypeScript
function logError(msg: string, context: Record<string, unknown>, error?: Error) {
  console.log(JSON.stringify({
    level: "error",
    msg,
    ...context,
    ...(error && { error: error.message, stack: error.stack }),
    timestamp: new Date().toISOString(),
  }));
}

// Usage
try {
  await chargeCustomer(customerId, amount);
} catch (err) {
  logError("Failed to process payment", {
    source: "billing",
    feature: "charge-customer",
    customerId, amount,
  }, err);
  throw err;
}

# Python
import json, traceback
from datetime import datetime

def log_error(msg, source, feature, error=None, **context):
    entry = {"level": "error", "msg": msg, "source": source, "feature": feature,
             "timestamp": datetime.utcnow().isoformat() + "Z", **context}
    if error:
        entry["error"] = str(error)
        entry["stack"] = traceback.format_exc()
    print(json.dumps(entry))  # stdout, so the log collector can ship it to Loki

# Usage
try:
    process_webhook(payload)
except Exception as e:
    log_error("Webhook processing failed", source="stripe-webhook",
              feature="event-routing", error=e, event_type=payload.get("type"))
    raise

// Go -- slog with JSON handler
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

logger.Error("Failed to process payment",
    "source", "billing",
    "feature", "charge-customer",
    "customerId", customerId,
    "amount", amount,
    "error", err.Error(),
)

Step 2: Deploy Loki + Grafana

You need Loki for log storage and Grafana for alerting. If you already have these, skip to Step 3.

Docker Compose (simplest for getting started):

# docker-compose.monitoring.yml
services:
  loki:
    image: grafana/loki:3.0.0
    ports: ["3100:3100"]
    command: -config.file=/etc/loki/local-config.yaml

  grafana:
    image: grafana/grafana:11.0.0
    ports: ["3000:3000"]
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana

  promtail:
    image: grafana/promtail:3.0.0
    volumes:
      - /var/log:/var/log:ro
    command: -config.file=/etc/promtail/config.yml

volumes:
  grafana-data:

Kubernetes: Use the official Grafana Helm charts (grafana/loki, grafana/promtail, grafana/grafana). See the monitoring/ directory in this repo for a production example with GCS storage, Google OAuth, and managed certificates.

Key concept: Your app writes JSON to stdout. A collector (Promtail, Fluentd, Vector) ships those logs to Loki. Grafana queries Loki and fires alerts.

Step 3: Configure Grafana alerting

You need two things in Grafana: an alert rule that detects errors, and a contact point that sends them to Dispatch.

Contact point -- sends alerts to Dispatch's webhook:

  1. In Grafana, go to Alerting > Contact points > New contact point
  2. Type: Webhook
  3. URL: https://your-dispatch-instance.com/api/loki/webhook
  4. HTTP Method: POST
  5. Authorization Header: Bearer <your DISPATCH_API_KEY>

Alert rule -- detects new error categories:

  1. Go to Alerting > Alert rules > New alert rule
  2. Query (Loki data source):
    count by (source, feature, msg) (
      count_over_time({app="your-app", level="error"} | json [15m])
    )
    
  3. Condition: when query result is above 3
  4. Evaluation: every 1 minute, for 0 seconds (fire immediately)
  5. Labels: severity = warning
  6. Annotations:
    • Summary: {{ $labels.source }}/{{ $labels.feature }}: {{ $labels.msg }}
    • Description: {{ $labels.msg }}

Notification policy:

| Setting | Value | Why |
| --- | --- | --- |
| Group by | alertname, source, feature | Deduplicate by error type |
| Group wait | 5 minutes | Let events accumulate before first alert |
| Group interval | 1 hour | Don't re-send same group within 1h |
| Repeat interval | 24 hours | Don't repeat same alert within 24h |

Full Grafana provisioning YAML (for Helm/IaC)

If you manage Grafana via Helm or infrastructure-as-code, here's the full alerting config:

alerting:
  rules:
    - orgId: 1
      name: Dispatch Error Alerts
      folder: Dispatch
      interval: 1m
      rules:
        - uid: new-error-category
          title: New Error Category
          condition: C
          data:
            - refId: A
              relativeTimeRange: { from: 900, to: 0 }
              datasourceUid: loki
              model:
                expr: |
                  count by (source, feature, msg) (
                    count_over_time({app="your-app", level="error"} | json [15m])
                  )
                queryType: range
            - refId: C
              relativeTimeRange: { from: 900, to: 0 }
              datasourceUid: __expr__
              model:
                type: threshold
                expression: A
                conditions:
                  - evaluator: { type: gt, params: [3] }
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.source }}/{{ $labels.feature }}: {{ $labels.msg }}"
            description: "{{ $labels.msg }}"
            error_count: "{{ $values.A }}"

  contactPoints:
    - orgId: 1
      name: dispatch-webhook
      receivers:
        - uid: dispatch-loki
          type: webhook
          settings:
            url: "https://your-dispatch-instance.com/api/loki/webhook"
            httpMethod: POST
            authorization_scheme: Bearer
            authorization_credentials: "your-dispatch-api-key"

  policies:
    - orgId: 1
      receiver: dispatch-webhook
      group_by: ["alertname", "source", "feature"]
      group_wait: 5m
      group_interval: 1h
      repeat_interval: 24h

Step 4: Configure Dispatch safety controls

These env vars prevent runaway agent spending. Start conservative and increase as you build confidence:

# Recommended starting values
MAX_CONCURRENT_SPAWNS_ERROR=3      # Max 3 agents investigating errors at once
DAILY_SPAWN_BUDGET_ERROR=10        # Max 10 auto-spawns per day
SPAWN_COOLDOWN_HOURS=24            # Don't re-investigate same error within 24h
MIN_EVENT_COUNT_TO_SPAWN=3         # Only investigate after 3+ occurrences
SPAWN_DELAY_MINUTES=5              # Wait 5 min before spawning (avoids transient errors)

The Fleet Status panel in the sidebar shows real-time budget usage, active agents, and daily outcomes.

Step 5: Verify the pipeline

  1. Trigger an error in your application
  2. Check Grafana -- the error should appear in Loki within seconds
  3. Wait for the alert rule to fire (check Alerting > Alert rules for firing state)
  4. Check Dispatch -- a new issue should appear on the board
  5. If MIN_EVENT_COUNT_TO_SPAWN is met, an agent should auto-dispatch

How the safety gates work

When Dispatch receives an error alert, it runs through these checks before spawning an agent:

Error alert received
    |
    v
Kill switch enabled? ----yes----> Block (SPAWN_KILL_SWITCH=true)
    |no
    v
Same fingerprint in cooldown? ---yes----> Block (already investigated recently)
    |no
    v
Enough occurrences? ----no-----> Block (below MIN_EVENT_COUNT_TO_SPAWN)
    |yes
    v
At concurrency limit? ---yes----> Block (MAX_CONCURRENT_SPAWNS_ERROR reached)
    |no
    v
Daily budget exhausted? --yes----> Block (DAILY_SPAWN_BUDGET_ERROR reached)
    |no
    v
Dispatch agent
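The gate sequence above can be sketched as a single check. Names are illustrative; the real implementation lives in the webhook handler:

```typescript
// Sketch of the documented safety gates, evaluated in order.
interface SpawnState {
  killSwitch: boolean;  // SPAWN_KILL_SWITCH=true
  inCooldown: boolean;  // same fingerprint seen within SPAWN_COOLDOWN_HOURS
  eventCount: number;   // occurrences of this fingerprint
  activeSpawns: number; // error agents currently running
  spawnsToday: number;  // auto-spawns since midnight
}

function canSpawn(
  s: SpawnState,
  limits = { minEvents: 3, maxConcurrent: 3, dailyBudget: 10 }
): { allowed: boolean; reason?: string } {
  if (s.killSwitch) return { allowed: false, reason: "kill switch" };
  if (s.inCooldown) return { allowed: false, reason: "cooldown" };
  if (s.eventCount < limits.minEvents)
    return { allowed: false, reason: "below MIN_EVENT_COUNT_TO_SPAWN" };
  if (s.activeSpawns >= limits.maxConcurrent)
    return { allowed: false, reason: "concurrency limit" };
  if (s.spawnsToday >= limits.dailyBudget)
    return { allowed: false, reason: "daily budget" };
  return { allowed: true };
}
```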

Fix verification

After a fix PR is merged, Dispatch monitors the error fingerprint for 48 hours:

  • Error reappears -- The fix is marked as failed. The issue reopens for another attempt.
  • No recurrence for 48h -- The fix is verified. The issue auto-closes.

This works because the Loki webhook handler checks incoming errors against merged fix PRs. If the same source|feature|msg fingerprint fires again, Dispatch knows the fix didn't stick.

Using Sentry or other error trackers

The built-in pipeline is designed for Loki + Grafana because it gives Dispatch the richest error context (full structured logs with source/feature/msg fields for fingerprinting).

If you use Sentry, Dispatch has a basic integration:

  1. Create an Internal Integration in Sentry
  2. Set webhook URL: https://your-dispatch-instance.com/api/sentry/webhook
  3. Enable events: Issue Created, Issue Resolved

Sentry issues will appear on your Dispatch board, but the autonomous fix loop (auto-spawn, fingerprint dedup, fix verification) only works with the Loki pipeline. Sentry issues must be manually dispatched to agents.

For other error trackers (Datadog, Honeybadger, Rollbar, etc.), you can build a custom webhook handler that transforms their alert payloads into Dispatch issues via the API or MCP tools. The key is extracting source, feature, and msg from whatever format your tracker uses.
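A minimal sketch of that transformation. The tracker payload shape below is hypothetical, and the output fields simply mirror the fingerprint fields described above; adapt both sides to your tracker's real webhook format:

```typescript
// Hypothetical alert payload from a third-party tracker.
interface TrackerAlert {
  title: string;
  message: string;
  service?: string;   // maps to "source"
  operation?: string; // maps to "feature"
}

// The key step: recover source/feature/msg so Dispatch can fingerprint.
function toDispatchIssue(alert: TrackerAlert) {
  return {
    title: alert.title,
    description: alert.message,
    source: alert.service ?? "unknown",
    feature: alert.operation ?? "unknown",
    msg: alert.message,
  };
}
```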

Deployment

Docker

docker build -t dispatch .
docker run -p 3002:3002 --env-file .env dispatch

Docker Compose

docker-compose up

Kubernetes

Example manifests are in charts/. See charts/README.md for instructions.

Development

npm run dev            # Start dev server with hot reload
npm run build          # Production build
npm run test           # Run tests
npm run lint           # Lint check
npm run generate-logo  # Generate braille art from an image

Troubleshooting

502 when clicking Investigate / dispatching agents

Your GITHUB_TOKEN is missing the workflow scope. This is the most common setup issue.

# Check your current scopes
gh auth status
# Look for 'workflow' in the Token scopes list

# Add the workflow scope
gh auth refresh -h github.com -s workflow

# Get the new token for your .env
gh auth token

Or create a new classic PAT with repo + workflow scopes.

Agent runs but doesn't report back to Dispatch

The GitHub Actions workflow needs DISPATCH_API_URL and DISPATCH_API_KEY secrets set on your target repo. Without them, the agent still runs and can open PRs, but it can't stream terminal output or post activity updates back to the Dispatch UI.

GITHUB_REPO_OWNER not configured error

Set GITHUB_REPO=owner/repo in your .env (e.g. GITHUB_REPO=acme/backend). The owner and repo name are parsed from this value.

Database migration fails with "extension vector does not exist"

Install pgvector before running migrations:

psql dispatch -c "CREATE EXTENSION IF NOT EXISTS vector;"

If using Docker Compose, the included db service has pgvector pre-installed.

Port 3002 already in use / dev server won't start

# Find and kill the stale process
lsof -ti :3002 | xargs kill -9

# Remove stale lock file if present
rm -f .next/dev/lock

# Restart
npm run dev

Contributing

See CONTRIBUTING.md for development setup, code style, and PR process.

If you're using Claude Code to contribute, Dispatch ships with a CLAUDE.md that gives agents full project context.

License

MIT
