MCP server for Claude Code to Claude Code delegation. Spawn background agents that run asynchronously, with optional git worktree isolation for safe parallel work.
- Async agent spawning - Fire-and-forget pattern with spool IDs
- Optional blocking with gather/yield - Wait for all results at once, or stream them in as agents complete; spins are non-blocking by default, so the parent agent can keep doing other work in the meantime
- Permission profiles - Control what tools child agents can use (readonly, careful, full)
- Shard isolation - Run agents in sandboxed git worktrees to prevent conflicts
- Model selection - Route tasks to haiku, sonnet, or opus per-agent
- Session continuity - Resume conversations with child agents (auto-recovers expired sessions)
- Rich querying - Search, filter, peek at running output, export results
- Python 3.10+
- Claude CLI installed and authenticated
- Git (for shard/worktree functionality)
```bash
pip install spindle-mcp
```

Add to Claude Code's MCP config (`~/.claude.json`):

```json
{
  "mcpServers": {
    "spindle": {
      "command": "spindle"
    }
  }
}
```

```python
# Spawn an agent
spool_id = spin("Research the Python GIL")

# Do other work...

# Check result
result = unspool(spool_id)
```
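The fire-and-forget pattern pairs naturally with a small polling helper. A sketch of that loop, with a `fetch` callable standing in for `unspool` (the `status` values here are assumptions, not spindle's documented states):

```python
import time

def poll_until_complete(spool_id, fetch, interval=0.01, max_tries=100):
    """Poll a spool until its status leaves 'running' (statuses assumed)."""
    for _ in range(max_tries):
        result = fetch(spool_id)
        if result["status"] != "running":
            return result
        time.sleep(interval)
    raise TimeoutError(f"spool {spool_id} still running after {max_tries} polls")

# Stub standing in for unspool(): completes on the third check
_calls = {"n": 0}
def fake_unspool(spool_id):
    _calls["n"] += 1
    status = "complete" if _calls["n"] >= 3 else "running"
    return {"id": spool_id, "status": status, "result": "done"}

print(poll_until_complete("abc12345", fake_unspool)["status"])  # complete
```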
Control what tools the spawned agent can use:
```python
# Read-only: can only search and read
spin("Analyze the codebase", permission="readonly")

# Careful (default): can read/write but limited bash
spin("Fix this bug", permission="careful")

# Full access: no restrictions
spin("Implement the feature", permission="full")

# Shard: full access + auto-isolated worktree (common for risky work)
spin("Refactor the auth system", permission="shard")

# Careful + shard: limited tools but isolated
spin("Update configs", permission="careful+shard")
```
Profiles:
- `readonly`: Read, Grep, Glob, safe bash (ls, cat, git status/log/diff)
- `careful`: Read, Write, Edit, Grep, Glob, bash for git/make/pytest/python/npm
- `full`: No restrictions
- `shard`: Full access + auto-creates isolated worktree
- `careful+shard`: Careful permissions + auto-creates isolated worktree
You can also pass explicit allowed_tools to override the profile.
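As a rough mental model, the profiles resolve to tool allow-lists, with an explicit `allowed_tools` taking precedence. A sketch (the lists are paraphrased from the profile descriptions above, not taken from spindle's source):

```python
# Illustrative allow-lists per permission profile (paraphrased, not authoritative)
PROFILES = {
    "readonly": ["Read", "Grep", "Glob", "SafeBash"],
    "careful": ["Read", "Write", "Edit", "Grep", "Glob", "Bash"],
    "full": None,   # no restrictions
    "shard": None,  # full access, plus an auto-created worktree
}

def allowed_tools(permission, override=None):
    """Resolve the effective tool list; an explicit allowed_tools wins."""
    if override is not None:
        return override
    base = permission.replace("+shard", "")  # "careful+shard" -> "careful"
    return PROFILES.get(base, PROFILES["careful"])

print(allowed_tools("careful+shard"))
```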
Run agents in isolated git worktrees to prevent conflicts:
```python
# Agent works in its own worktree
spool_id = spin("Refactor auth module", shard=True)

# Check shard status
shard_status(spool_id)

# Merge changes back when done
shard_merge(spool_id)

# Or discard if not needed
shard_abandon(spool_id)
```
Shards create a git worktree plus a branch. If SKEIN is available, spindle uses `skein shard spawn` for richer tracking; otherwise it falls back to a plain git worktree.
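The plain-git fallback amounts to a single `git worktree add -b` call. A minimal sketch, assuming a branch-naming scheme and worktree location that are illustrative only, not spindle's actual layout:

```python
import os
import subprocess

def create_shard(repo_dir, spool_id, worktrees_root):
    """Fallback path: create an isolated worktree + branch for one spool."""
    branch = f"shard-{spool_id}"            # naming scheme assumed
    worktree = os.path.join(worktrees_root, spool_id)
    subprocess.run(
        ["git", "-C", repo_dir, "worktree", "add", "-b", branch, worktree],
        check=True, capture_output=True,
    )
    return worktree, branch
```

Merging back is then an ordinary `git merge` of the shard branch from the main worktree.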
```python
# Spawn multiple agents
id1 = spin("Find all TODO comments")
id2 = spin("List unused imports")
id3 = spin("Check for type errors")

# Gather: block until all complete, get all results
results = spin_wait(f"{id1},{id2},{id3}", mode="gather")

# Yield: return as each completes
# Great when results are independent - process each as it lands
result = spin_wait(f"{id1},{id2},{id3}", mode="yield")  # Returns first to finish

# With timeout
results = spin_wait(f"{id1},{id2}", mode="gather", timeout=300)
```
Yield mode keeps you responsive instead of blocking on the slowest agent.
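The two modes mirror familiar `concurrent.futures` semantics: gather is like waiting on all futures at once, yield is like `as_completed`. A stand-in sketch of the yield ordering (not the spindle implementation):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(name, delay):
    """Simulated agent with a known runtime."""
    time.sleep(delay)
    return name

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(task, n, d): n for n, d in [("slow", 0.2), ("fast", 0.05)]}

    # mode="yield" analogue: handle each result as it lands; "fast" arrives first
    order = [f.result() for f in as_completed(futures)]

print(order)  # ['fast', 'slow']
```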
Simple timed waiting with spin_sleep:
```python
spin_sleep("90m")    # Sleep for 90 minutes
spin_sleep("2h")     # Sleep for 2 hours
spin_sleep("30s")    # Sleep for 30 seconds
spin_sleep("06:00")  # Wait until 6 AM
```

Or use spin_wait with the time parameter:

```python
spin_wait(time="90m")
spin_wait(time="06:00")  # Handles next-day wraparound
```
Useful for periodic check-in loops (e.g., QM/dancing partner patterns).
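The duration grammar is small enough to mirror in a few lines. A hypothetical parser for the accepted forms, returning seconds, with `HH:MM` resolved against a supplied "now" and next-day wraparound (this is a sketch of the grammar, not spindle's parser):

```python
import re
from datetime import datetime, timedelta

def parse_wait(spec, now=None):
    """Seconds to wait for '90m', '2h', '30s', or 'HH:MM' (sketch)."""
    m = re.fullmatch(r"(\d+)([smh])", spec)
    if m:
        mult = {"s": 1, "m": 60, "h": 3600}[m.group(2)]
        return int(m.group(1)) * mult
    # Clock time: wait until the next occurrence of HH:MM
    now = now or datetime.now()
    hh, mm = map(int, spec.split(":"))
    target = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # next-day wraparound
    return (target - now).total_seconds()

print(parse_wait("90m"))  # 5400
```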
```python
# Route quick tasks to haiku (fast, cheap)
spin("Summarize this file", model="haiku")

# Complex work to opus
spin("Design the new architecture", model="opus")

# Auto-kill if it takes too long
spin("Should be quick", timeout=60)
```

```python
# Get session ID from completed spool
result = unspool(spool_id)  # includes session_id

# Continue that conversation
new_id = respin(session_id, "Follow up question")
```
If the session has expired on Claude's end, respin automatically falls back to transcript injection to recreate context.
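The recovery logic is a try-resume-then-inject pattern. A sketch with stub callables standing in for the CLI interactions (the exception name and both helpers are hypothetical):

```python
class SessionExpired(Exception):
    pass

def respin_with_fallback(session_id, prompt, resume, inject):
    """Try resuming the session; on expiry, rebuild context from the transcript."""
    try:
        return resume(session_id, prompt)
    except SessionExpired:
        # Fall back: start a fresh session seeded with the old transcript
        return inject(session_id, prompt)

# Stubs: resume always fails, forcing the transcript-injection path
def fake_resume(sid, prompt):
    raise SessionExpired(sid)

def fake_inject(sid, prompt):
    return {"session_id": "new-" + sid, "prompt": prompt}

print(respin_with_fallback("s1", "Follow up", fake_resume, fake_inject)["session_id"])  # new-s1
```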
```python
# Cancel a running agent
spin_drop(spool_id)

# List all spools
spools()

# Search prompts and results
spool_search("authentication")

# Filter by status and time
spool_results(status="error", since="1h")

# Regex search results
spool_grep("error|failed|exception")

# Get statistics
spool_stats()

# Export to file
spool_export("all", format="md")
```
Spindle supports multiple AI agent harnesses, allowing you to choose the best tool for each task.
Claude Code (default) - Anthropic's Claude models via claude CLI
- Superior code understanding and reasoning
- Best for complex refactoring, architecture decisions
- Slower startup (~3-4 minutes to first response)
- Use `harness="claude-code"` or omit the harness parameter
Codex CLI - OpenAI's GPT-5 Codex models via codex CLI
- Extremely fast startup (~10 seconds to first response)
- Good for quick edits, simple tasks, prototyping
- Requires ChatGPT Plus/Pro/Enterprise
- Use `harness="codex"`
```python
# Claude Code (default) - best for complex work
spool_id = spin("Refactor the auth module to use dependency injection")

# Codex CLI - fast for simple tasks
spool_id = spin(
    prompt="Add error handling to this function",
    harness="codex",
    working_dir="/path/to/project"
)

# All harnesses use the same API
result = unspool(spool_id)       # Auto-detects harness
respin(session_id, "Follow up")  # Auto-detects harness
```

Use Claude Code when:
- Task requires deep reasoning or architecture decisions
- Working on complex refactoring across multiple files
- Need thorough code review or analysis
- Time isn't critical (can wait 3-4 minutes)
Use Codex when:
- Need quick edits or simple implementations
- Prototyping or exploring ideas rapidly
- Running many parallel tasks (faster = more throughput)
- Time is critical (10 second startup vs 3-4 minutes)
Claude Code:
- Claude CLI installed and authenticated
- Anthropic API key or Claude subscription
Codex CLI:
- Codex CLI installed (`npm i -g @openai/codex`)
- ChatGPT Plus/Pro/Enterprise subscription
- Codex CLI authenticated
- Linux kernel 5.13+ for sandbox support (automatically bypassed on older kernels)
See docs/MULTI_HARNESS_GUIDE.md and docs/CODEX_SETUP.md for detailed documentation.
| Tool | Purpose |
|---|---|
| `spin(prompt, permission?, shard?, system_prompt?, working_dir?, allowed_tools?, tags?, model?, timeout?, harness?)` | Spawn agent (Claude Code or Codex), return spool_id |
| `unspool(spool_id)` | Get result (auto-detects harness, non-blocking) |
| `respin(session_id, prompt)` | Continue session (auto-detects harness) |

`spin()` parameters:
- `prompt` (required): The task for the agent
- `harness` (optional): `"claude-code"` (default) or `"codex"`
- `working_dir` (optional for Claude, required for Codex): Project directory
- `permission` (optional): `"readonly"`, `"careful"` (default), `"full"`, `"shard"`, `"careful+shard"`
- `model` (optional): Model to use (`"sonnet"`, `"opus"`, `"haiku"` for Claude; `"gpt-5-codex"` for Codex)
- `timeout` (optional): Auto-kill after N seconds
- `tags` (optional): Comma-separated tags for organization
- `shard` (optional): Create isolated git worktree (can also use `permission="shard"`)
- `system_prompt` (optional): Custom system prompt for Claude Code
- `allowed_tools` (optional): Override permission profile with explicit tool list
| Tool | Purpose |
|---|---|
| `spools()` | List all spools |
| `spin_wait(spool_ids?, mode?, timeout?, time?)` | Block until spools complete, or wait for duration |
| `spin_sleep(duration)` | Sleep for a duration (`90m`, `2h`, `30s`, `HH:MM`) |
| `spin_drop(spool_id)` | Cancel by killing process |
| `spool_search(query, field?)` | Search prompts/results |
| `spool_results(status?, since?, limit?)` | Bulk fetch with filters |
| `spool_grep(pattern)` | Regex search results |
| `spool_retry(spool_id)` | Re-run with same params |
| `spool_peek(spool_id, lines?)` | See partial output while running |
| `spool_dashboard()` | Overview of running/complete/needs-attention |
| `spool_stats()` | Get summary statistics |
| `spool_export(spool_ids, format?, output_path?)` | Export to file |
| `shard_status(spool_id)` | Check shard worktree status |
| `shard_merge(spool_id, keep_branch?)` | Merge shard to master |
| `shard_abandon(spool_id, keep_branch?)` | Discard shard |
Spools persist to `~/.spindle/spools/{spool_id}.json`:

```json
{
  "id": "abc12345",
  "status": "complete",
  "prompt": "...",
  "result": "...",
  "session_id": "...",
  "permission": "careful",
  "allowed_tools": "...",
  "tags": ["batch-1"],
  "shard": {
    "worktree_path": "/path/to/worktrees/abc12345-...",
    "branch_name": "shard-abc12345-...",
    "shard_id": "..."
  },
  "pid": 12345,
  "created_at": "2025-11-26T...",
  "completed_at": "2025-11-26T..."
}
```

```bash
spindle install-service  # Install background service (Linux/macOS)
spindle start            # Start via systemd (or background if no service)
spindle reload           # Restart via systemd to pick up code changes
spindle status           # Check if running (hits /health endpoint)
spindle serve --http     # Run MCP server directly
```

For persistent background operation:
```bash
# Install and enable the service (Linux or macOS)
spindle install-service

# Start it
spindle start
```

Linux: Writes a systemd user service to `~/.config/systemd/user/spindle.service`

macOS: Writes a launchd plist to `~/Library/LaunchAgents/com.spindle.server.plist` and loads it immediately

Use `--force` to overwrite an existing service file. Then `spindle reload` restarts the service to pick up code changes.
On Windows, run spindle manually:

```bash
spindle serve --http
```

Or use NSSM to create a Windows service.
In WSL2 with systemd enabled, spindle install-service works like native Linux. If systemd isn't enabled, you'll get instructions to enable it or run manually.
From within Claude Code, call spindle_reload() to restart the server and pick up code changes.
Environment variables:
| Variable | Default | Description |
|---|---|---|
| `SPINDLE_MAX_CONCURRENT` | `15` | Maximum concurrent spools |
Storage location: ~/.spindle/spools/
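Because spools are plain JSON files, ad-hoc inspection outside the MCP tools is straightforward. A sketch that tallies statuses the way `spool_stats()` conceptually does (the directory layout follows the persistence schema above; the aggregation itself is an assumption):

```python
import json
from collections import Counter
from pathlib import Path

def spool_status_counts(spool_dir):
    """Count spools by status from the on-disk JSON records."""
    counts = Counter()
    for path in Path(spool_dir).glob("*.json"):
        record = json.loads(path.read_text())
        counts[record.get("status", "unknown")] += 1
    return dict(counts)
```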
- spin() spawns a detached `claude` CLI process with the given prompt
- The process runs in the background, writing output to temporary files
- A monitor thread polls for completion
- unspool() returns the result once complete (non-blocking check)
- Spool metadata persists to JSON files, surviving server restarts
For shards:
- A git worktree is created with a new branch
- The agent runs inside that worktree
- After completion, merge back with `shard_merge()` or discard with `shard_abandon()`
- Max 15 concurrent spools (configurable via `SPINDLE_MAX_CONCURRENT`)
- 24h auto-cleanup of old spools
- Orphaned spools (dead process) marked as error on restart
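Detecting orphaned spools on restart boils down to checking whether each recorded PID is still alive. A minimal POSIX sketch (signal 0 probes without killing; the status transition shown is assumed from the description above):

```python
import os

def is_alive(pid):
    """True if a process with this PID exists (POSIX: signal 0 only probes)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
    return True

def mark_orphans(spools):
    """Flip 'running' spools whose process died to 'error' (sketch)."""
    for spool in spools:
        if spool["status"] == "running" and not is_alive(spool["pid"]):
            spool["status"] = "error"
    return spools
```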
See CONTRIBUTING.md for development setup and guidelines.
MIT - see LICENSE.