简体中文 • 日本語 • 한국어 • Español • Deutsch • Dansk
The definitive catalog of AI agent patterns. One definition, any framework.
Quick Start • Agent Catalog • Smart Enhancements • Patterns • Playground • Docs • Contributing
50+ battle-tested agent definitions. 11 categories. 8 orchestration patterns. 6 framework adapters. 5 export targets. Zero lock-in.
multi-agent is a framework-agnostic catalog of production-ready AI agent patterns. Define your agents once in YAML, run them on CrewAI, LangGraph, OpenAI Agents SDK, Claude SDK, Google ADK, or smolagents.
"57% of enterprise agent failures are orchestration failures, not model failures." — Anthropic, 2026
Stop reinventing agents. Start composing them.
| | multi-agent | CrewAI | LangGraph | OpenAI SDK | Claude SDK |
|---|---|---|---|---|---|
| Framework-agnostic definitions | Yes | No | No | No | No |
| Export to any AI platform | Yes | No | No | No | No |
| Agent catalog with 50+ roles | Yes | ~10 | ~5 | ~3 | ~5 |
| Pattern library (8 patterns) | Yes | 2 | 3 | 2 | 2 |
| Built-in cost estimation | Yes | No | No | No | No |
| Agent recommendation engine | Yes | No | No | No | No |
| Works with any LLM | Yes | Yes | Yes | OpenAI only | Claude only |
| MCP native | Yes | Partial | Adapter | Yes | Yes |
| Lines of core code | ~600 | 18K | 25K | 8K | 12K |
```bash
pip install multi-agent
```

```bash
multiagent search "code review"
```

```
Found 3 agents matching "code review":

  code/code-reviewer   Review PRs for bugs, style, and security
  code/test-writer     Generate tests for changed code
  code/refactorer      Suggest and apply refactoring improvements

Recommended pattern: supervisor-worker (1 reviewer + N specialists)
Estimated cost: ~$0.03/review (Claude Haiku) to ~$0.25/review (GPT-4o)
```
```python
from multiagent import Catalog, patterns

# Load agents from the catalog
catalog = Catalog()
reviewer = catalog.load("code/code-reviewer")
test_writer = catalog.load("code/test-writer")

# Compose with a pattern
team = patterns.supervisor_worker(
    supervisor=reviewer,
    workers=[test_writer],
    model="claude-sonnet-4-6",  # or any model
)

result = team.run("Review this PR and write missing tests", context={
    "diff": open("changes.diff").read(),
})
```

```python
# CrewAI adapter
from multiagent.adapters import crewai

crew = crewai.from_catalog(["code/code-reviewer", "code/test-writer"])
result = crew.kickoff()
```
```python
# LangGraph adapter
from multiagent.adapters import langgraph

graph = langgraph.from_catalog(["research/deep-researcher", "research/fact-checker"])
result = graph.invoke({"query": "Latest AI agent frameworks"})
```
```python
# OpenAI Agents SDK adapter
from multiagent.adapters import openai_sdk

agent = openai_sdk.from_catalog("code/code-reviewer")
result = agent.run("Review this code")
```

```bash
# Export for Claude Code (.agents/ skills)
multiagent export code/code-reviewer claude-code -o .agents/skills

# Export for Codex / OpenClaw (AGENTS.md format)
multiagent export code/code-reviewer codex

# Export for Google Gemini / ADK
multiagent export code/code-reviewer gemini -o ./adk-agents

# Export for ChatGPT (Custom GPT instructions)
multiagent export code/code-reviewer chatgpt

# Export just the system prompt (works with ANY LLM)
multiagent export code/code-reviewer raw

# Bulk export all agents in a category
multiagent export-all claude-code -o .agents/skills -c code
```

| Target | Format | Works With |
|---|---|---|
| `claude-code` | `.md` skill files | Claude Code, Claude Desktop |
| `codex` | `AGENTS.md` sections | OpenAI Codex, OpenClaw |
| `gemini` | ADK YAML config | Google Gemini, Vertex AI |
| `chatgpt` | System instructions | ChatGPT, Custom GPTs |
| `raw` | Plain system prompt | Any LLM — Ollama, LM Studio, llama.cpp, vLLM, etc. |
Every agent is defined in a simple, readable YAML format:
```yaml
# catalog/code/code-reviewer.yaml
name: code-reviewer
version: "1.0"
description: Reviews code changes for bugs, security issues, and style violations
category: code

system_prompt: |
  You are an expert code reviewer. Analyze the provided code changes and identify:
  1. Bugs and logic errors
  2. Security vulnerabilities (OWASP Top 10)
  3. Performance issues
  4. Style and readability improvements
  Be specific. Reference line numbers. Suggest fixes.

tools:
  - type: mcp
    server: filesystem
  - type: function
    name: search_codebase
    description: Search for related code in the repository

parameters:
  temperature: 0.1
  max_tokens: 4096

cost_profile:
  tokens_per_run: ~2000
  recommended_model: claude-haiku-4-5  # Best cost/quality for reviews
  estimated_cost_usd: 0.003

works_with:
  - code/test-writer  # Generate tests for flagged code
  - code/refactorer   # Apply suggested improvements

recommended_patterns:
  - supervisor-worker  # Reviewer supervises specialist agents
  - sequential         # Review → Test → Refactor pipeline
```

| Category | Agents | Description |
|---|---|---|
| `code/` | `code-reviewer` `code-generator` `test-writer` `refactorer` `debugger` `security-auditor` `documentation-writer` `pr-summarizer` | Software development lifecycle |
| `research/` | `deep-researcher` `web-scraper` `fact-checker` `paper-analyst` `competitive-intel` | Research and analysis |
| `data/` | `data-analyst` `sql-generator` `report-writer` | Data engineering and analysis |
| `devops/` | `ci-cd-agent` `infra-provisioner` `monitoring-agent` `incident-responder` | Infrastructure and operations |
| `content/` | `writer` `editor` `translator` `seo-optimizer` | Content creation pipeline |
| `finance/` | `trading-analyst` `portfolio-optimizer` `financial-reporter` `fraud-detector` `tax-advisor` | Financial analysis and compliance |
| `support/` | `customer-support` `ticket-router` `knowledge-base-builder` `escalation-agent` | Customer service pipeline |
| `legal/` | `contract-reviewer` `legal-researcher` `compliance-checker` `document-drafter` | Legal and compliance |
| `personal/` | `email-assistant` `meeting-scheduler` `note-taker` `task-manager` | Personal productivity |
| `security/` | `vulnerability-scanner` `log-analyzer` `access-reviewer` `incident-analyst` | Security operations |
| `orchestration/` | `task-router` `cost-optimizer` `quality-gate` | Meta-agents for coordination |
Eight battle-tested orchestration patterns, each with runnable examples:
```
┌─────────────┐
│  Supervisor │        ── Pattern 1: Supervisor/Worker
└──────┬──────┘           Central agent delegates to specialists
 ┌─────┼─────┐
 ▼     ▼     ▼
[W1]  [W2]  [W3]

[A] → [B] → [C]        ── Pattern 2: Sequential Pipeline
                          Linear chain of specialized agents

┌──→ [A] ──┐
│          │
├──→ [B] ──┼──→ [Merge] ── Pattern 3: Parallel Fan-Out
│          │               Independent tasks run concurrently
└──→ [C] ──┘

[A] ←──→ [B]           ── Pattern 4: Reflection/Loop
                          Iterative refinement between agents

[A] ──handoff──→ [B] ──handoff──→ [C]  ── Pattern 5: Handoff
                                          Agent transfers full control

┌─[A]─┐                ── Pattern 6: Group Chat
│ [B] │ ← Selector        Shared conversation, dynamic speaker
└─[C]─┘

[A]─┬─[B]              ── Pattern 7: DAG (Directed Acyclic Graph)
    └─[C]──[D]            Conditional branching and merging

┌Worktree1: [A]┐       ── Pattern 8: Split-and-Merge
│Worktree2: [B]│→ Git Merge  Isolated parallel work, merged at end
└Worktree3: [C]┘
```
| Pattern | When to Use | Complexity | Latency | Example |
|---|---|---|---|---|
| Supervisor/Worker | Complex tasks with clear subtasks | Medium | Medium | Code review team |
| Sequential | Step-by-step processing | Low | High | Content pipeline |
| Parallel | Independent tasks | Low | Low | Multi-source research |
| Reflection | Quality-critical output | Medium | Medium | Legal document drafting |
| Handoff | Escalation and routing | Low | Low | Customer support tiers |
| Group Chat | Brainstorming, debate | High | High | Design review |
| DAG | Complex conditional workflows | High | Variable | CI/CD pipeline |
| Split-and-Merge | Large parallel code changes | Medium | Low | Multi-file refactoring |
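To make the trade-offs concrete, the two lowest-complexity patterns reduce to familiar code shapes: a sequential pipeline is function composition over agent steps, and parallel fan-out is a concurrent map followed by a merge. The sketch below uses plain Python functions as stand-ins for LLM-backed agents; it illustrates the control flow only, not the library's `patterns` module.

```python
# Minimal, framework-free illustration of the Sequential and Parallel
# patterns. Plain functions stand in for LLM-backed agents.
from concurrent.futures import ThreadPoolExecutor

def sequential(steps, task):
    """Pattern 2: each agent's output feeds the next."""
    for step in steps:
        task = step(task)
    return task

def parallel(steps, task, merge):
    """Pattern 3: independent agents run concurrently, results merged."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda step: step(task), steps))
    return merge(results)

# Stand-in "agents"
draft = lambda t: f"draft({t})"
edit = lambda t: f"edit({t})"
seo = lambda t: f"seo({t})"

print(sequential([draft, edit, seo], "post"))         # seo(edit(draft(post)))
print(parallel([draft, edit], "post", merge=" + ".join))
```

The same shapes carry over when the steps are real agents: latency for sequential is the sum of step latencies, while parallel fan-out pays only the slowest branch plus the merge, which is why the comparison table rates them High and Low respectively.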
multi-agent provides first-class adapters for the top frameworks:
| Framework | Adapter | Status | Stars |
|---|---|---|---|
| CrewAI | `multiagent.adapters.crewai` | Stable | 44K |
| LangGraph | `multiagent.adapters.langgraph` | Stable | 25K |
| OpenAI Agents SDK | `multiagent.adapters.openai_sdk` | Stable | 21K |
| Claude Agent SDK | `multiagent.adapters.claude_sdk` | Stable | — |
| Google ADK | `multiagent.adapters.google_adk` | Beta | 18K |
| smolagents | `multiagent.adapters.smolagents` | Beta | 26K |
Don't see your framework? Submit an adapter.
Built for the 2026 protocol stack:
| Protocol | Purpose | Support |
|---|---|---|
| MCP (Model Context Protocol) | Agent ↔ Tool | Native |
| A2A (Agent-to-Agent) | Agent ↔ Agent | Native |
| AG-UI (Agent-to-UI) | Agent ↔ Frontend | Planned |
Make any agent smarter with research-backed prompt engineering techniques:
```bash
# Enhance with category-tuned profile
multiagent enhance code/code-reviewer

# Apply all 8 techniques
multiagent enhance code/code-reviewer -p all

# Enhance + export to Claude Code
multiagent enhance code/code-reviewer -p all -t claude-code -o .agents/skills
```

| Enhancement | Effect | Source |
|---|---|---|
| `reasoning` | +20% task completion | OpenAI SWE-bench |
| `error_recovery` | 5-level retry hierarchy | Anthropic engineering |
| `verification` | Self-check before output | Claude Code internal |
| `confidence` | −40–60% hallucination | Academic research |
| `tool_discipline` | Faster, fewer errors | OpenAI GPT-5.4 guide |
| `failure_modes` | Avoids 6 anti-patterns | 120+ leaked prompts study |
| `context_management` | Better long-running tasks | LangChain context engineering |
| `information_priority` | Facts over guessing | Manus AI / Anthropic |
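Conceptually, an enhancement profile layers extra instruction blocks onto the agent's existing system prompt. The sketch below illustrates that idea only; the technique texts are invented placeholders, not the library's actual profiles or implementation.

```python
# Illustrative sketch of prompt enhancement: selected technique blocks are
# appended to the agent's system prompt. Placeholder texts, not real profiles.
TECHNIQUES = {
    "reasoning": "Think step by step before answering.",
    "verification": "Before finalizing, re-check your answer against the task.",
}

def enhance(system_prompt: str, profiles: list[str]) -> str:
    extras = [TECHNIQUES[p] for p in profiles]
    return "\n\n".join([system_prompt, *extras])

prompt = enhance("You are an expert code reviewer.", ["reasoning", "verification"])
print(prompt)
```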
```python
from multiagent import Catalog, enhance_agent

catalog = Catalog()
agent = catalog.load("code/code-reviewer")
smart_agent = enhance_agent(agent, profile="all")  # All 8 techniques applied
```

Auto-generate Mermaid diagrams for agent teams:

```bash
multiagent visualize code/code-reviewer code/test-writer code/security-auditor
```

```mermaid
graph TD
    code_reviewer["Code Reviewer<br/><small>supervisor</small>"]
    test_writer["Test Writer<br/><small>worker</small>"]
    code_reviewer --> test_writer
    security_auditor["Security Auditor<br/><small>worker</small>"]
    code_reviewer --> security_auditor
```
Try different patterns: `--pattern sequential`, `parallel`, `reflection`, `handoff`, `group-chat`
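The diagram produced by `visualize` is plain Mermaid text, so generating one for a supervisor/worker team is simple string building. The sketch below is illustrative, not the CLI's actual code; node labels here keep the agent slugs rather than title-cased names.

```python
# Build Mermaid "graph TD" text for one supervisor delegating to workers.
# Hyphens are mapped to underscores because Mermaid node ids can't contain "-".
def mermaid_supervisor_worker(supervisor: str, workers: list[str]) -> str:
    def node(name: str, role: str) -> str:
        return f'{name.replace("-", "_")}["{name}<br/><small>{role}</small>"]'

    lines = ["graph TD", f"    {node(supervisor, 'supervisor')}"]
    for worker in workers:
        lines.append(f"    {node(worker, 'worker')}")
        lines.append(f"    {supervisor.replace('-', '_')} --> {worker.replace('-', '_')}")
    return "\n".join(lines)

print(mermaid_supervisor_worker("code-reviewer", ["test-writer", "security-auditor"]))
```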
Browse agents, test enhancements, compare costs, and build teams visually — all in the browser:
Open Playground (no backend needed — pure static HTML)
- Agent Catalog — Search and filter all 48 agents
- Playground — Test enhance profiles and export formats live
- Cost Calculator — Compare costs across 13 models with monthly estimates
- Composition Visualizer — Build teams and auto-generate Mermaid diagrams
Every agent in the catalog includes cost profiles. Know what you'll spend before you run:
```python
from multiagent import Catalog, CostEstimator

catalog = Catalog()
team = catalog.load_team(["code/code-reviewer", "code/test-writer", "code/refactorer"])

estimate = CostEstimator.estimate(team, input_tokens=5000)
print(estimate)
# CostEstimate(
#   model="claude-haiku-4-5",  total_usd=0.009, tokens=~8000
#   model="claude-sonnet-4-6", total_usd=0.045, tokens=~8000
#   model="gpt-4o",            total_usd=0.060, tokens=~8000
# )
```

| Example | Pattern | Frameworks | Description |
|---|---|---|---|
| Hello Agents | Single | All | Your first agent in 10 lines |
| Code Review Team | Supervisor/Worker | CrewAI, Claude | Automated PR review pipeline |
| Research Pipeline | Parallel + Sequential | LangGraph | Multi-source research with fact-checking |
| Content Factory | Sequential | CrewAI | Writer → Editor → SEO → Publisher |
| Incident Response | DAG | LangGraph | Automated incident triage and remediation |
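Under the hood, a per-model estimate like the one in the cost section is just token counts multiplied by per-million-token prices. A minimal sketch of that arithmetic, with illustrative placeholder prices (not current list prices):

```python
# Token-based cost arithmetic behind a per-model estimate.
# Prices below are illustrative placeholders, not current list prices.
PRICE_PER_MTOK = {  # (input, output) USD per million tokens
    "claude-haiku-4-5": (0.80, 4.00),
    "gpt-4o": (2.50, 10.00),
}

def estimate_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    price_in, price_out = PRICE_PER_MTOK[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

cost = estimate_usd("claude-haiku-4-5", input_tokens=5000, output_tokens=3000)
print(f"${cost:.4f}")  # $0.0160
```

A team estimate is the same calculation summed over each agent's expected token usage per run.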
```mermaid
graph TB
    subgraph "multi-agent"
        CAT[Agent Catalog<br/>50+ YAML definitions]
        PAT[Pattern Library<br/>8 orchestration patterns]
        REC[Recommendation Engine<br/>Task → Agent matching]
        COST[Cost Estimator<br/>Per-model pricing]
    end
    subgraph "Adapters"
        CR[CrewAI]
        LG[LangGraph]
        OA[OpenAI SDK]
        CL[Claude SDK]
        GA[Google ADK]
        SM[smolagents]
    end
    subgraph "Protocols"
        MCP[MCP - Tools]
        A2A[A2A - Agent Communication]
        AGUI[AG-UI - Frontend]
    end
    CAT --> CR & LG & OA & CL & GA & SM
    PAT --> CR & LG & OA & CL & GA & SM
    REC --> CAT
    COST --> CAT
    CR & LG & OA & CL & GA & SM --> MCP & A2A
```
- Core catalog format (YAML agent definitions)
- 50+ agent definitions across 6 categories
- 8 orchestration patterns with docs
- CrewAI, LangGraph, OpenAI adapters
- Claude SDK, Google ADK adapters
- Cost estimation engine
- CLI tool (`multiagent search`, `multiagent info`, `multiagent compose`)
- Agent evaluation framework (benchmarks per pattern)
- Visual agent composer (web UI)
- Shared team memory integration
- Agent marketplace (community submissions)
- AG-UI protocol support
- Runtime agent governance / permission system
We love contributions! See CONTRIBUTING.md for details.
Ways to contribute:
- Add an agent — Submit a new YAML agent definition to the catalog
- Add a pattern — Document a new orchestration pattern with examples
- Add an adapter — Create a framework adapter
- Improve docs — Better examples, tutorials, translations
- Report bugs — File issues with reproduction steps
MIT License — see LICENSE for details.
If this helps you build better agents, a star would mean a lot.