
multi-agent

The definitive catalog of AI agent patterns. One definition, any framework.


Quick Start · Agent Catalog · Smart Enhancements · Patterns · Playground · Docs · Contributing


50+ battle-tested agent definitions. 11 categories. 8 orchestration patterns. 6 framework adapters. 5 export targets. Zero lock-in.

multi-agent is a framework-agnostic catalog of production-ready AI agent patterns. Define your agents once in YAML, run them on CrewAI, LangGraph, OpenAI Agents SDK, Claude SDK, Google ADK, or smolagents.

"57% of enterprise agent failures are orchestration failures, not model failures." — Anthropic, 2026

Stop reinventing agents. Start composing them.

Why multi-agent?

|  | multi-agent | CrewAI | LangGraph | OpenAI SDK | Claude SDK |
| --- | --- | --- | --- | --- | --- |
| Framework-agnostic definitions | Yes | No | No | No | No |
| Export to any AI platform | Yes | No | No | No | No |
| Agent catalog with 50+ roles | Yes | ~10 | ~5 | ~3 | ~5 |
| Pattern library (8 patterns) | Yes | 2 | 3 | 2 | 2 |
| Built-in cost estimation | Yes | No | No | No | No |
| Agent recommendation engine | Yes | No | No | No | No |
| Works with any LLM | Yes | Yes | Yes | OpenAI only | Claude only |
| MCP native | Yes | Partial | Adapter | Yes | Yes |
| Lines of core code | ~600 | 18K | 25K | 8K | 12K |

Quick Start

pip install multi-agent

1. Browse the catalog

multiagent search "code review"
Found 3 agents matching "code review":

  code/code-reviewer     Review PRs for bugs, style, and security
  code/test-writer       Generate tests for changed code
  code/refactorer        Suggest and apply refactoring improvements

Recommended pattern: supervisor-worker (1 reviewer + N specialists)
Estimated cost: ~$0.03/review (Claude Haiku) to ~$0.25/review (GPT-4o)

2. Use an agent definition

from multiagent import Catalog, patterns

# Load agents from the catalog
catalog = Catalog()
reviewer = catalog.load("code/code-reviewer")
test_writer = catalog.load("code/test-writer")

# Compose with a pattern
team = patterns.supervisor_worker(
    supervisor=reviewer,
    workers=[test_writer],
    model="claude-sonnet-4-6"  # or any model
)

result = team.run("Review this PR and write missing tests", context={
    "diff": open("changes.diff").read()
})

3. Or use with your favorite framework

# CrewAI adapter
from multiagent.adapters import crewai
crew = crewai.from_catalog(["code/code-reviewer", "code/test-writer"])
result = crew.kickoff()

# LangGraph adapter
from multiagent.adapters import langgraph
graph = langgraph.from_catalog(["research/deep-researcher", "research/fact-checker"])
result = graph.invoke({"query": "Latest AI agent frameworks"})

# OpenAI Agents SDK adapter
from multiagent.adapters import openai_sdk
agent = openai_sdk.from_catalog("code/code-reviewer")
result = agent.run("Review this code")

4. Export to any AI platform

# Export for Claude Code (.agents/ skills)
multiagent export code/code-reviewer claude-code -o .agents/skills

# Export for Codex / OpenClaw (AGENTS.md format)
multiagent export code/code-reviewer codex

# Export for Google Gemini / ADK
multiagent export code/code-reviewer gemini -o ./adk-agents

# Export for ChatGPT (Custom GPT instructions)
multiagent export code/code-reviewer chatgpt

# Export just the system prompt (works with ANY LLM)
multiagent export code/code-reviewer raw

# Bulk export all agents in a category
multiagent export-all claude-code -o .agents/skills -c code

| Target | Format | Works With |
| --- | --- | --- |
| claude-code | .md skill files | Claude Code, Claude Desktop |
| codex | AGENTS.md sections | OpenAI Codex, OpenClaw |
| gemini | ADK YAML config | Google Gemini, Vertex AI |
| chatgpt | System instructions | ChatGPT, Custom GPTs |
| raw | Plain system prompt | Any LLM (Ollama, LM Studio, llama.cpp, vLLM, etc.) |
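The raw target is essentially string assembly: take the definition's system prompt and write it out as plain text. A minimal sketch of what such an exporter could look like (the export_raw helper is hypothetical, not the library's actual exporter):

```python
# Minimal sketch of a "raw" exporter: turn an agent definition into a plain
# system-prompt string usable with any LLM. Hypothetical helper, not the
# library's real implementation.

def export_raw(definition: dict) -> str:
    header = f"# {definition['name']} (v{definition['version']})"
    return f"{header}\n\n{definition['system_prompt'].strip()}\n"

prompt = export_raw({
    "name": "code-reviewer",
    "version": "1.0",
    "system_prompt": "You are an expert code reviewer.",
})
print(prompt.splitlines()[0])  # # code-reviewer (v1.0)
```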

Agent Catalog

Every agent is defined in a simple, readable YAML format:

# catalog/code/code-reviewer.yaml
name: code-reviewer
version: "1.0"
description: Reviews code changes for bugs, security issues, and style violations
category: code

system_prompt: |
  You are an expert code reviewer. Analyze the provided code changes and identify:
  1. Bugs and logic errors
  2. Security vulnerabilities (OWASP Top 10)
  3. Performance issues
  4. Style and readability improvements
  
  Be specific. Reference line numbers. Suggest fixes.

tools:
  - type: mcp
    server: filesystem
  - type: function
    name: search_codebase
    description: Search for related code in the repository

parameters:
  temperature: 0.1
  max_tokens: 4096

cost_profile:
  tokens_per_run: ~2000
  recommended_model: claude-haiku-4-5  # Best cost/quality for reviews
  estimated_cost_usd: 0.003

works_with:
  - code/test-writer      # Generate tests for flagged code
  - code/refactorer        # Apply suggested improvements
  
recommended_patterns:
  - supervisor-worker      # Reviewer supervises specialist agents
  - sequential             # Review → Test → Refactor pipeline

Full Catalog

| Category | Agents | Description |
| --- | --- | --- |
| code/ | code-reviewer, code-generator, test-writer, refactorer, debugger, security-auditor, documentation-writer, pr-summarizer | Software development lifecycle |
| research/ | deep-researcher, web-scraper, fact-checker, paper-analyst, competitive-intel | Research and analysis |
| data/ | data-analyst, sql-generator, report-writer | Data engineering and analysis |
| devops/ | ci-cd-agent, infra-provisioner, monitoring-agent, incident-responder | Infrastructure and operations |
| content/ | writer, editor, translator, seo-optimizer | Content creation pipeline |
| finance/ | trading-analyst, portfolio-optimizer, financial-reporter, fraud-detector, tax-advisor | Financial analysis and compliance |
| support/ | customer-support, ticket-router, knowledge-base-builder, escalation-agent | Customer service pipeline |
| legal/ | contract-reviewer, legal-researcher, compliance-checker, document-drafter | Legal and compliance |
| personal/ | email-assistant, meeting-scheduler, note-taker, task-manager | Personal productivity |
| security/ | vulnerability-scanner, log-analyzer, access-reviewer, incident-analyst | Security operations |
| orchestration/ | task-router, cost-optimizer, quality-gate | Meta-agents for coordination |

Patterns

Eight battle-tested orchestration patterns, each with runnable examples:

Pattern Overview

                    ┌─────────────┐
                    │  Supervisor  │ ── Pattern 1: Supervisor/Worker
                    └──────┬──────┘    Central agent delegates to specialists
                     ┌─────┼─────┐
                     ▼     ▼     ▼
                   [W1]  [W2]  [W3]

    [A] → [B] → [C]                ── Pattern 2: Sequential Pipeline
                                       Linear chain of specialized agents

    ┌──→ [A] ──┐
    │          │
    ├──→ [B] ──┼──→ [Merge]        ── Pattern 3: Parallel Fan-Out
    │          │                       Independent tasks run concurrently
    └──→ [C] ──┘

    [A] ←──→ [B]                   ── Pattern 4: Reflection/Loop
                                       Iterative refinement between agents

    [A] ──handoff──→ [B] ──handoff──→ [C]  ── Pattern 5: Handoff
                                              Agent transfers full control

    ┌─[A]─┐                        ── Pattern 6: Group Chat
    │ [B] │ ← Selector                Shared conversation, dynamic speaker
    └─[C]─┘

    [A]─┬─[B]                      ── Pattern 7: DAG (Directed Acyclic Graph)
        └─[C]──[D]                    Conditional branching and merging

    ┌Worktree1: [A]┐               ── Pattern 8: Split-and-Merge
    │Worktree2: [B]│→ Git Merge       Isolated parallel work, merged at end
    └Worktree3: [C]┘

| Pattern | When to Use | Complexity | Latency | Example |
| --- | --- | --- | --- | --- |
| Supervisor/Worker | Complex tasks with clear subtasks | Medium | Medium | Code review team |
| Sequential | Step-by-step processing | Low | High | Content pipeline |
| Parallel | Independent tasks | Low | Low | Multi-source research |
| Reflection | Quality-critical output | Medium | Medium | Legal document drafting |
| Handoff | Escalation and routing | Low | Low | Customer support tiers |
| Group Chat | Brainstorming, debate | High | High | Design review |
| DAG | Complex conditional workflows | High | Variable | CI/CD pipeline |
| Split-and-Merge | Large parallel code changes | Medium | Low | Multi-file refactoring |

Frameworks

multi-agent provides first-class adapters for the top frameworks:

| Framework | Adapter | Status | Stars |
| --- | --- | --- | --- |
| CrewAI | multiagent.adapters.crewai | Stable | 44K |
| LangGraph | multiagent.adapters.langgraph | Stable | 25K |
| OpenAI Agents SDK | multiagent.adapters.openai_sdk | Stable | 21K |
| Claude Agent SDK | multiagent.adapters.claude_sdk | Stable |  |
| Google ADK | multiagent.adapters.google_adk | Beta | 18K |
| smolagents | multiagent.adapters.smolagents | Beta | 26K |

Don't see your framework? Submit an adapter.

Protocol Support

Built for the 2026 protocol stack:

| Protocol | Purpose | Support |
| --- | --- | --- |
| MCP (Model Context Protocol) | Agent ↔ Tool | Native |
| A2A (Agent-to-Agent) | Agent ↔ Agent | Native |
| AG-UI (Agent-to-UI) | Agent ↔ Frontend | Planned |

Smart Enhancements

Make any agent smarter with research-backed prompt engineering techniques:

# Enhance with category-tuned profile
multiagent enhance code/code-reviewer

# Apply all 8 techniques
multiagent enhance code/code-reviewer -p all

# Enhance + export to Claude Code
multiagent enhance code/code-reviewer -p all -t claude-code -o .agents/skills

| Enhancement | Effect | Source |
| --- | --- | --- |
| reasoning | +20% task completion | OpenAI SWE-bench |
| error_recovery | 5-level retry hierarchy | Anthropic engineering |
| verification | Self-check before output | Claude Code internal |
| confidence | 40–60% fewer hallucinations | Academic research |
| tool_discipline | Faster, fewer errors | OpenAI GPT-5.4 guide |
| failure_modes | Avoids 6 anti-patterns | 120+ leaked prompts study |
| context_management | Better long-running tasks | LangChain context engineering |
| information_priority | Facts over guessing | Manus AI / Anthropic |

from multiagent import Catalog, enhance_agent

catalog = Catalog()
agent = catalog.load("code/code-reviewer")
smart_agent = enhance_agent(agent, profile="all")  # All 8 techniques applied

Agent Composition Visualizer

Auto-generate Mermaid diagrams for agent teams:

multiagent visualize code/code-reviewer code/test-writer code/security-auditor
graph TD
    code_reviewer["Code Reviewer<br/><small>supervisor</small>"]
    test_writer["Test Writer<br/><small>worker</small>"]
    code_reviewer --> test_writer
    security_auditor["Security Auditor<br/><small>worker</small>"]
    code_reviewer --> security_auditor

Try different patterns: --pattern sequential, parallel, reflection, handoff, group-chat
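Generating such a diagram is a small string-building exercise. A standalone sketch for the supervisor-worker layout, mirroring the output shape above (the to_mermaid helper is hypothetical, not the real CLI):

```python
# Minimal sketch: emit a Mermaid "graph TD" for one supervisor and N workers,
# mirroring the CLI output shown above. Hypothetical helper, not the real CLI.

def to_mermaid(supervisor: str, workers: list[str]) -> str:
    node = lambda name: name.replace("-", "_")  # Mermaid ids can't contain "-"
    lines = [
        "graph TD",
        f'    {node(supervisor)}["{supervisor}<br/><small>supervisor</small>"]',
    ]
    for w in workers:
        lines.append(f'    {node(w)}["{w}<br/><small>worker</small>"]')
        lines.append(f"    {node(supervisor)} --> {node(w)}")
    return "\n".join(lines)

print(to_mermaid("code-reviewer", ["test-writer", "security-auditor"]))
```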

Interactive Playground

Browse agents, test enhancements, compare costs, and build teams visually — all in the browser:

Open Playground (no backend needed — pure static HTML)

  • Agent Catalog — Search and filter all 48 agents
  • Playground — Test enhance profiles and export formats live
  • Cost Calculator — Compare costs across 13 models with monthly estimates
  • Composition Visualizer — Build teams and auto-generate Mermaid diagrams

Cost Estimation

Every agent in the catalog includes cost profiles. Know what you'll spend before you run:

from multiagent import Catalog, CostEstimator

catalog = Catalog()
team = catalog.load_team(["code/code-reviewer", "code/test-writer", "code/refactorer"])

estimate = CostEstimator.estimate(team, input_tokens=5000)
print(estimate)
# CostEstimate(
#   model="claude-haiku-4-5",   total_usd=0.009,  tokens=~8000
#   model="claude-sonnet-4-6",  total_usd=0.045,  tokens=~8000  
#   model="gpt-4o",             total_usd=0.060,  tokens=~8000
# )
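Under the hood this arithmetic is simple: token counts multiplied by a per-model price per million tokens. A standalone sketch (the model names and prices below are illustrative placeholders, not the library's actual pricing table):

```python
# Minimal sketch of per-model cost estimation. Model names and prices are
# illustrative placeholders (USD per million tokens), not real pricing data.

PRICES = {  # model -> (input price / Mtok, output price / Mtok)
    "model-small": (0.80, 4.00),
    "model-large": (3.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total run cost in USD for one call with the given token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = estimate_cost("model-small", input_tokens=5000, output_tokens=3000)
print(f"${cost:.4f}")  # $0.0160
```

A team estimate is then just the sum of this per agent, using each agent's cost_profile token counts.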

Examples

| Example | Pattern | Frameworks | Description |
| --- | --- | --- | --- |
| Hello Agents | Single | All | Your first agent in 10 lines |
| Code Review Team | Supervisor/Worker | CrewAI, Claude | Automated PR review pipeline |
| Research Pipeline | Parallel + Sequential | LangGraph | Multi-source research with fact-checking |
| Content Factory | Sequential | CrewAI | Writer → Editor → SEO → Publisher |
| Incident Response | DAG | LangGraph | Automated incident triage and remediation |

Architecture

graph TB
    subgraph "multi-agent"
        CAT[Agent Catalog<br/>50+ YAML definitions]
        PAT[Pattern Library<br/>8 orchestration patterns]
        REC[Recommendation Engine<br/>Task → Agent matching]
        COST[Cost Estimator<br/>Per-model pricing]
    end

    subgraph "Adapters"
        CR[CrewAI]
        LG[LangGraph]
        OA[OpenAI SDK]
        CL[Claude SDK]
        GA[Google ADK]
        SM[smolagents]
    end

    subgraph "Protocols"
        MCP[MCP - Tools]
        A2A[A2A - Agent Communication]
        AGUI[AG-UI - Frontend]
    end

    CAT --> CR & LG & OA & CL & GA & SM
    PAT --> CR & LG & OA & CL & GA & SM
    REC --> CAT
    COST --> CAT
    CR & LG & OA & CL & GA & SM --> MCP & A2A

Roadmap

  • Core catalog format (YAML agent definitions)
  • 50+ agent definitions across 11 categories
  • 8 orchestration patterns with docs
  • CrewAI, LangGraph, OpenAI adapters
  • Claude SDK, Google ADK adapters
  • Cost estimation engine
  • CLI tool (multiagent search, multiagent info, multiagent compose)
  • Agent evaluation framework (benchmarks per pattern)
  • Visual agent composer (web UI)
  • Shared team memory integration
  • Agent marketplace (community submissions)
  • AG-UI protocol support
  • Runtime agent governance / permission system

Contributing

We love contributions! See CONTRIBUTING.md for details.

Ways to contribute:

  • Add an agent — Submit a new YAML agent definition to the catalog
  • Add a pattern — Document a new orchestration pattern with examples
  • Add an adapter — Create a framework adapter
  • Improve docs — Better examples, tutorials, translations
  • Report bugs — File issues with reproduction steps

Star History

Star History Chart

License

MIT License — see LICENSE for details.


If this helps you build better agents, a star would mean a lot.
