Production-ready multi-agent orchestration built on OpenCode's flexible agent system.
Transform OpenCode from a single-agent TUI into a programmable flow platform with:
- Cost optimization - up to 99% savings via intelligent model routing (DeepSeek, Gemini, local models)
- Parallel execution - Spawn 10+ specialized agents working simultaneously
- Shared memory - Cross-agent coordination via custom MCP tools
- Deploy anywhere - Local, Docker, Kubernetes - runs wherever OpenCode runs
Before using OpenCode Flow, you need:
- Node.js 20+ - Download
- At least one LLM provider API key:
  - Anthropic Claude (recommended)
  - OpenRouter (100+ models, cheaper)
  - Google Gemini
Don't have OpenCode yet? No problem! The setup wizard will install it for you.
```bash
# Install OpenCode Flow
npm install -g opencode-flow

# Run interactive setup wizard
opencode-flow setup

# The wizard will:
# ✅ Install OpenCode CLI (if needed)
# ✅ Start OpenCode server
# ✅ Configure your API keys
# ✅ Validate everything works
# ✅ Run a test agent
```

Prefer to set things up manually?

```bash
# 1. Install OpenCode CLI
npm install -g @opencode/cli
# 2. Start OpenCode server (keep this running in Terminal 1)
opencode serve --port 4096
# 3. In Terminal 2: Install OpenCode Flow
npm install -g opencode-flow
# 4. Configure environment
export ANTHROPIC_API_KEY=sk-ant-...
export OPENCODE_SERVER_URL=http://localhost:4096
# 5. Run your first flow
opencode-flow --agent researcher --task "Analyze microservices architecture"
```

Basic CLI usage:

```bash
# Spawn agents
opencode-flow spawn --name researcher --agent general --model gemini-2.5-flash
opencode-flow spawn --name coder --agent build --model claude-sonnet-4
# Execute task across agents
opencode-flow exec \
--task "Build a REST API with authentication" \
--agents researcher,coder \
--mode sequential
```

The same flow from the TypeScript SDK:

```typescript
import { OpencodeFlow } from 'opencode-flow';

const flow = new OpencodeFlow({
  modelRouter: { mode: 'cost', maxCostPerTask: 0.05 }
});

// Spawn specialized agents
const researcher = await flow.spawn({
  name: 'researcher',
  agent: 'general',
  model: 'gemini-2.5-flash'
});

const coder = await flow.spawn({
  name: 'coder',
  agent: 'build',
  model: 'claude-sonnet-4'
});

// Execute in parallel
const results = await flow.execute({
  task: 'Build REST API with auth',
  agents: ['researcher', 'coder'],
  mode: 'parallel'
});

console.log(`Total cost: $${results.reduce((sum, r) => sum + r.cost, 0)}`);
```

OpenCode is an excellent AI coding agent with:
- ✅ Clean agent system (Build, Plan, custom agents)
- ✅ HTTP server API for programmatic access
- ✅ MCP tool support
- ✅ Multi-provider flexibility
But it lacks:
- ❌ Multi-agent flow orchestration
- ❌ Cost-optimized model routing
- ❌ Cross-agent coordination
- ❌ Production deployment patterns
OpenCode Flow is a lightweight TypeScript wrapper that adds:
- Multi-Agent Spawning - Create specialized agents programmatically
- Model Router - Auto-select optimal models (cost vs quality)
- Shared Memory - Cross-agent coordination via MCP tools
- Production Ready - Docker, Kubernetes, health checks
We're not replacing OpenCode - we're unlocking its full potential.
```
┌───────────────────────────────────────────────────────────────┐
│                     OpenCode Flow CLI/SDK                      │
│      Spawn agents, execute tasks, coordinate via memory        │
└───────────────────────────────────────────────────────────────┘
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                        FlowOrchestrator                        │
│   - Agent spawning           - Model routing                   │
│   - Task distribution        - Result aggregation              │
└───────────────────────────────────────────────────────────────┘
          ▼                     ▼                     ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  Agent Instance  │  │  Agent Instance  │  │  Agent Instance  │
│    researcher    │  │      coder       │  │     reviewer     │
│    gemini-2.5    │  │ claude-sonnet-4  │  │   deepseek-r1    │
└──────────────────┘  └──────────────────┘  └──────────────────┘
          ▼                     ▼                     ▼
┌───────────────────────────────────────────────────────────────┐
│              OpenCode HTTP Server (Port 4096)                  │
│         Session management, messaging, MCP tools               │
└───────────────────────────────────────────────────────────────┘
```
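Read top to bottom, the layers map onto SDK calls shown elsewhere in this README: the CLI/SDK layer is the OpencodeFlow client, the FlowOrchestrator handles spawning and task distribution, and each agent instance is a session on the OpenCode HTTP server. A minimal sketch of that mapping (passing serverUrl to the constructor is an assumption, mirrored from the flow.config.json example later in this README):

```typescript
import { OpencodeFlow } from 'opencode-flow';

// CLI/SDK layer: one client, pointed at the OpenCode HTTP server (bottom layer).
const flow = new OpencodeFlow({ serverUrl: 'http://localhost:4096' });

// FlowOrchestrator layer: each spawn creates one agent instance below it.
await flow.spawn({ name: 'researcher', agent: 'general', model: 'gemini-2.5-flash' });
await flow.spawn({ name: 'reviewer', agent: 'plan', model: 'deepseek-r1' });

// Task distribution and result aggregation across the spawned instances.
const results = await flow.execute({
  task: 'Summarize the architecture of this repository',
  agents: ['researcher', 'reviewer'],
  mode: 'parallel'
});
console.log(results.map(r => r.cost)); // per-agent cost, aggregated by the orchestrator
```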
Automatically select the best model for each task based on cost/quality tradeoffs.
```typescript
const flow = new OpencodeFlow({
  modelRouter: {
    mode: 'cost',           // cost, quality, balanced, speed
    maxCostPerTask: 0.05,   // Budget cap
    fallbackChain: [        // Fallback on errors
      'anthropic/claude-sonnet-4',
      'deepseek/deepseek-r1',
      'google/gemini-2.5-flash'
    ]
  }
});

const results = await flow.execute({
  task: 'Code review',
  agents: ['reviewer'],
  optimize: 'cost' // Auto-selects DeepSeek R1 (85% cheaper)
});
```

Real cost savings:
- Without router: 100 reviews/day × $0.08 = $240/month
- With router (DeepSeek): 100 reviews/day × $0.012 = $36/month
- Savings: $204/month (85%) - see the worked calculation below
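The arithmetic behind those figures assumes 30 days per month and the per-review prices quoted above; the snippet below is purely illustrative and reproduces the same numbers.

```typescript
// Illustrative cost comparison; prices are the per-review figures quoted above.
const reviewsPerDay = 100;
const daysPerMonth = 30;
const perReviewPremium = 0.08;  // Claude-class model, $ per review
const perReviewRouted = 0.012;  // DeepSeek R1 via the router, $ per review

const monthly = (perReview: number) => reviewsPerDay * daysPerMonth * perReview;

console.log(`Without router: $${monthly(perReviewPremium).toFixed(2)}/month`); // $240.00
console.log(`With router:    $${monthly(perReviewRouted).toFixed(2)}/month`);  // $36.00
console.log(`Savings:        $${(monthly(perReviewPremium) - monthly(perReviewRouted)).toFixed(2)}/month (~85%)`);
```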
Parallel - All agents work simultaneously:
```typescript
await flow.executeParallel(
  'Analyze this codebase',
  ['researcher', 'coder', 'reviewer']
);
```

Sequential - Output from one feeds into the next:
```typescript
await flow.executeSequential(
  'Build authentication system',
  ['researcher', 'architect', 'coder', 'tester']
);
```

Hierarchical - Coordinator delegates to workers:
```typescript
await flow.executeHierarchical(
  'Microservices architecture',
  'architect',          // Coordinator
  ['coder1', 'coder2']  // Workers
);
```

Custom MCP tools enable cross-agent data sharing:
```typescript
// Agent 1: Store API design
await flow.memory.set('api-design', {
  endpoints: ['/users', '/auth'],
  database: 'postgresql'
}, 3600); // TTL: 1 hour

// Agent 2: Retrieve and implement
const design = await flow.memory.get('api-design');
await flow.execute({
  task: `Implement these endpoints: ${JSON.stringify(design)}`,
  agents: ['coder']
});
```

Backends:
- File-based (default, good for dev)
- Redis (production, low latency)
- Custom (implement the `MemoryBackend` interface - see the sketch below)
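The exact `MemoryBackend` contract is not shown in this README, so the sketch below assumes a minimal get/set/delete interface with optional TTL; treat the method names and signatures as illustrative rather than the published API.

```typescript
// Hypothetical shape of a custom backend; the real interface may differ.
interface MemoryBackend {
  get(key: string): Promise<unknown>;
  set(key: string, value: unknown, ttlSeconds?: number): Promise<void>;
  delete(key: string): Promise<void>;
}

// Example: a trivial in-process backend with TTL expiry.
class InProcessBackend implements MemoryBackend {
  private store = new Map<string, { value: unknown; expiresAt: number | null }>();

  async get(key: string): Promise<unknown> {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== null && Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily drop expired entries
      return null;
    }
    return entry.value;
  }

  async set(key: string, value: unknown, ttlSeconds?: number): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: ttlSeconds ? Date.now() + ttlSeconds * 1000 : null,
    });
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```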
See agent output token-by-token as it's generated:
```bash
opencode-flow exec \
  --task "Write documentation" \
  --agents coder \
  --stream
```

Or from the SDK:

```typescript
for await (const chunk of flow.executeStream(task, agents)) {
  console.log(chunk.agent, chunk.text);
}
```

Docker:
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["opencode-flow", "serve", "--port", "5000"]
```

```bash
docker run -p 5000:5000 \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  opencode-flow:latest
```

Kubernetes:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flow-code-review
spec:
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
        - name: flow
          image: opencode-flow:latest
          args: ["exec", "--task", "Review PR #123", "--agents", "reviewer"]
          env:
            - name: ANTHROPIC_API_KEY
              valueFrom: { secretKeyRef: { name: llm-keys, key: anthropic } }
```

Example: automated code review

```typescript
const reviewer = await flow.spawn({
  name: 'reviewer',
  agent: 'plan', // Read-only
  model: 'deepseek-r1',
  systemPrompt: 'Review code for security, performance, best practices'
});

const results = await flow.execute({
  task: `Review this PR: ${prDiff}`,
  agents: ['reviewer'],
  optimize: 'cost'
});
```

Cost: $0.012 per review (vs $0.08 with Claude) = 85% savings

Example: multi-phase feature development (research → design → implement → test)

```typescript
// Phase 1: Research
const research = await flow.execute({
  task: 'Research best practices for REST API authentication',
  agents: ['researcher'],
  optimize: 'cost' // Uses Gemini 2.5 Flash
});

// Phase 2: Design
await flow.memory.set('research', research[0].output);
const design = await flow.execute({
  task: 'Design API based on research',
  agents: ['architect']
});

// Phase 3: Implement
const code = await flow.execute({
  task: 'Implement API endpoints',
  agents: ['coder'],
  optimize: 'quality' // Uses Claude Sonnet 4
});

// Phase 4: Test
await flow.execute({
  task: 'Write comprehensive tests',
  agents: ['tester'],
  optimize: 'cost'
});
```

Example: parallel security audit

```typescript
const agents = ['security-scanner', 'code-reviewer', 'dependency-checker'];
const results = await flow.executeParallel(
  'Perform comprehensive security audit',
  agents
);

// Aggregate findings
const allFindings = results.map(r => r.output).flat();
console.log(`Found ${allFindings.length} issues`);
```

Documentation:
- Project Specification - Vision, goals, architecture
- Implementation Design - Technical details, algorithms
- API Reference - Complete API documentation
- Deployment Guide - Docker, Kubernetes, production
Roadmap:
- Project setup
- FlowClient wrapper
- Agent spawning & lifecycle management
- Complete CLI (6 commands)
- Parallel/Sequential/Hierarchical execution
- Interactive setup wizard
- File-based shared memory
- Comprehensive test suite (31 unit tests)
- Port agentic-flow router logic
- Cost tracking & optimization
- Fallback chains
- Provider configurations
- Custom MCP memory tool
- Child session orchestration
- Event-driven coordination
- Docker deployment
- Kubernetes manifests
- Example flows
- Documentation & launch
See Quick Start above for installation instructions.
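If you use the SDK rather than the CLI, the same options can be loaded from the file at startup. A minimal sketch, assuming the OpencodeFlow constructor accepts the same options object as the flow.config.json example below:

```typescript
import { readFileSync } from 'node:fs';
import { OpencodeFlow } from 'opencode-flow';

// Assumption: the constructor takes the same shape as flow.config.json (shown below).
const config = JSON.parse(readFileSync('flow.config.json', 'utf8'));
const flow = new OpencodeFlow(config);
```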
flow.config.json:
```json
{
  "serverUrl": "http://localhost:4096",
  "modelRouter": {
    "mode": "balanced",
    "maxCostPerTask": 0.1,
    "fallbackChain": [
      "anthropic/claude-sonnet-4",
      "deepseek/deepseek-r1"
    ]
  },
  "memory": {
    "backend": "redis",
    "url": "redis://localhost:6379"
  },
  "agents": [
    {
      "name": "researcher",
      "agent": "general",
      "model": "gemini-2.5-flash"
    }
  ]
}
```

Contributions welcome! See CONTRIBUTING.md for guidelines.
MIT License - see LICENSE for details.
Built with:
- OpenCode - Flexible AI coding agent
- Agentic Flow - Model router inspiration
- Model Context Protocol - Tool integration
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Discord: Join our Discord
Deploy multi-agent flows in seconds. Optimize costs automatically. Scale to production.
```bash
opencode-flow exec --task "Your task here" --agents researcher,coder --optimize cost
```