A Discord-like group chat for your OpenClaw agents.
Connect two or more OpenClaw agents and watch them debate, discuss, and work in parallel via @mentions. You can intervene at any point.
```sh
npm install -g openswarm-cli
openswarm init
openswarm
```

```
you > Research AI agent trends and build a demo

● Master: Let me get my team on this.
  @researcher look into the latest AI agent frameworks
  @coder build a quick demo CLI

● Researcher: thinking...   ● Coder: thinking...

● Researcher: Here's what I found...
● Coder: Here's an implementation...

● Master: Based on the team's findings, here's the full picture...
```
Each agent is a full OpenClaw instance with its own personality (SOUL.md), tools, and skills. Whatever model or provider you've configured in OpenClaw (Gemini, OpenAI, Anthropic, Ollama, or anything else), OpenSwarm uses it automatically.
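As a purely hypothetical sketch (the init wizard generates the real file, and its contents may differ), a specialist's SOUL.md might look like:

```markdown
# Researcher

You are the swarm's researcher. Be concise and cite sources.

When another agent can help, mention them by handle, e.g.
`@coder please prototype this`. Answer directly when you are mentioned.
```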
```sh
npm install -g openswarm-cli
```

Requires Node.js 20+ and OpenClaw installed (`npm install -g openclaw`).
```sh
mkdir my-swarm && cd my-swarm
openswarm init
```

The wizard asks how many agents, their names, colors, and one-sentence roles. No model or API key questions; that's OpenClaw's domain.
Creates:

```
agents/
  master/
    openclaw.json      # Gateway config
    workspace/
      SOUL.md          # Personality + @mention instructions
      AGENTS.md        # Workspace rules
  researcher/
    openclaw.json
    workspace/
      SOUL.md
      AGENTS.md
  coder/
    openclaw.json
    workspace/
      SOUL.md
      AGENTS.md
swarm.config.json      # Workspace paths + labels + colors
```
Each agent needs a model configured via OpenClaw:
```sh
# Option A: Log in with your OpenClaw account
cd agents/master && openclaw login

# Option B: Edit openclaw.json directly
# Add to agents/master/openclaw.json:
#   "agents": { "defaults": { "model": { "primary": "openai/gpt-4o" } } }

# Option C: Use any provider (Gemini, Anthropic, Ollama, etc.)
# See OpenClaw docs for provider configuration
```

```sh
openswarm
```

OpenSwarm spawns an OpenClaw gateway for each agent, waits for them to start, then drops you into a group chat REPL.
You type a message. The master agent reads it and decides which specialists to @mention. Mentioned agents run in parallel, respond, and the master synthesizes everything into a final answer.
```
You
└── Master (streams live to your terminal)
    ├── @researcher (runs in parallel) ──▶ responds
    └── @coder (runs in parallel) ──▶ responds
        └── @researcher (nested!) ──▶ responds

Master receives all results ──▶ final answer
```
- Master streams live: you see tokens as they arrive
- Specialists run in parallel: a status line shows who's thinking/streaming
- Agents can @mention each other: researcher → coder, coder → analyst, etc.
- Depth limit: prevents infinite @mention loops (default: 3 levels)
- Tool visibility: when agents use tools (web search, exec, etc.), you see live spinners
- Lazy connections: agents only connect when first @mentioned
- Max 20 mentions per message: safety valve against runaway chains
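The dispatch loop described above can be sketched in TypeScript. All names and wiring here are illustrative, not OpenSwarm's actual source; only the limits themselves (20 mentions per message, depth 3) come from the feature list.

```typescript
// Sketch of @mention dispatch with the safety limits described above.
const MAX_MENTIONS_PER_MESSAGE = 20; // safety valve against runaway chains
const MAX_MENTION_DEPTH = 3;         // default depth limit

/** Extract unique @mentions from a message, capped at the per-message limit. */
function parseMentions(text: string): string[] {
  const seen = new Set<string>();
  for (const match of text.matchAll(/@([a-z][\w-]*)/gi)) {
    seen.add(match[1].toLowerCase());
    if (seen.size >= MAX_MENTIONS_PER_MESSAGE) break;
  }
  return [...seen];
}

/** Run all mentioned agents in parallel; recursion stops at the depth limit. */
async function dispatch(
  text: string,
  ask: (agent: string, prompt: string) => Promise<string>,
  depth = 0,
): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  if (depth >= MAX_MENTION_DEPTH) return results; // prevent infinite loops
  const mentions = parseMentions(text);
  const replies = await Promise.all(mentions.map((a) => ask(a, text)));
  mentions.forEach((a, i) => results.set(a, replies[i]));
  return results;
}
```

A real implementation would also feed each reply back through `dispatch` so agents can @mention each other, passing `depth + 1` at every hop.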
```sh
openswarm        # Start group chat (spawns agents automatically)
openswarm init   # Create a new swarm
openswarm up     # Spawn agents in background
openswarm down   # Stop background agents
```
| Flag | Short | Default | Description |
|---|---|---|---|
| `--config <path>` | `-c` | `swarm.config.json` | Config file path |
| `--session <id>` | `-s` | – | Resume a saved session |
| `--verbose` | `-v` | `false` | Verbose output |
| Command | What it does |
|---|---|
| `/status` | Connection status for all agents |
| `/sessions` | List saved sessions with timestamps |
| `/export` | Export conversation to Markdown file |
| `/clear` | Clear terminal |
| `/quit` | Exit (also `/exit` or Ctrl+C) |
Every conversation auto-saves to `~/.openswarm/sessions/`.
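As a rough sketch of that auto-save, something like the following could produce the session ids and files. The file naming, `Turn` shape, and id format are assumptions for illustration, not OpenSwarm's actual internals.

```typescript
// Illustrative session persistence: date-stamped id + JSON transcript on disk.
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const SESSION_DIR = path.join(os.homedir(), ".openswarm", "sessions");

/** Build an id like "20260223-abc123": date stamp + random suffix. */
function newSessionId(date = new Date()): string {
  const stamp = date.toISOString().slice(0, 10).replace(/-/g, "");
  const suffix = Math.random().toString(36).slice(2, 8).padEnd(6, "0");
  return `${stamp}-${suffix}`;
}

interface Turn { speaker: string; text: string }

/** Rewrite the whole transcript each turn; returns the file path. */
function saveSession(id: string, turns: Turn[], dir = SESSION_DIR): string {
  fs.mkdirSync(dir, { recursive: true });
  const file = path.join(dir, `openswarm-${id}.json`);
  fs.writeFileSync(file, JSON.stringify(turns, null, 2));
  return file;
}
```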
```sh
# Inside the chat, list past sessions
/sessions

# Resume a session (agents remember the full conversation)
openswarm --session 20260223-abc123

# Export to Markdown
/export
```

Each agent points to an OpenClaw workspace directory:
```json
{
  "agents": {
    "master": {
      "workspace": "./agents/master",
      "label": "Master",
      "color": "indigo"
    },
    "researcher": {
      "workspace": "./agents/researcher",
      "label": "Researcher",
      "color": "green"
    },
    "coder": {
      "workspace": "./agents/coder",
      "label": "Coder",
      "color": "amber"
    }
  },
  "master": "master"
}
```

No `url`, `model`, `token`, or `systemPrompt` needed; OpenClaw handles all of that inside the workspace via `openclaw.json` and `SOUL.md`.
If an agent has `url` instead of `workspace`, it works as a direct API connection without OpenClaw:
```json
{
  "agents": {
    "quick": {
      "url": "https://generativelanguage.googleapis.com/v1beta/openai",
      "model": "gemini-2.5-flash",
      "label": "Quick",
      "color": "cyan"
    }
  },
  "master": "quick"
}
```

You can mix modes: some agents as OpenClaw workspaces, others as direct API.
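For instance, a hybrid swarm could pair an OpenClaw master with a direct-API helper. The values below are illustrative:

```json
{
  "agents": {
    "master": {
      "workspace": "./agents/master",
      "label": "Master",
      "color": "indigo"
    },
    "quick": {
      "url": "http://localhost:11434/v1",
      "model": "llama3",
      "label": "Quick",
      "color": "cyan"
    }
  },
  "master": "master"
}
```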
| Field | Mode | Description |
|---|---|---|
| `workspace` | OpenClaw | Path to OpenClaw workspace dir |
| `url` | Direct API | OpenAI-compatible endpoint |
| `label` | Both | Display name in terminal |
| `color` | Both | Terminal color (indigo, green, amber, cyan, purple, red, blue, pink) |
| `model` | Direct API | Model name |
| `token` | Direct API | Auth token (auto-filled from `.env`) |
| `systemPrompt` | Direct API | Custom system prompt |
| Field | Default | Description |
|---|---|---|
| `master` | required | Which agent coordinates |
| `maxMentionDepth` | `3` | Max depth of recursive @mention chains |
| `timeout` | `120000` | Timeout per agent response in ms |
| `sessionPrefix` | `"openswarm"` | Prefix for session file names |
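Putting the optional top-level fields together (values here just restate the defaults for illustration):

```json
{
  "agents": {
    "master": {
      "workspace": "./agents/master",
      "label": "Master",
      "color": "indigo"
    }
  },
  "master": "master",
  "maxMentionDepth": 3,
  "timeout": 120000,
  "sessionPrefix": "openswarm"
}
```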
For lightweight use without OpenClaw installed, you can point agents directly at any OpenAI-compatible API:
```json
{
  "agents": {
    "master": {
      "url": "https://generativelanguage.googleapis.com/v1beta/openai",
      "label": "Master",
      "color": "indigo",
      "model": "gemini-2.5-flash"
    },
    "researcher": {
      "url": "http://localhost:11434/v1",
      "label": "Researcher",
      "color": "green",
      "model": "llama3"
    }
  },
  "master": "master"
}
```

Add your API key to `.env`:
```sh
echo GOOGLE_API_KEY=AIza... > .env
```

Supported env vars: `GOOGLE_API_KEY`, `OPENAI_API_KEY`, `GROQ_API_KEY`, `ANTHROPIC_API_KEY`, `TOGETHER_API_KEY`, `FIREWORKS_API_KEY`, `DEEPSEEK_API_KEY`, `MISTRAL_API_KEY`, `OPENCLAW_GATEWAY_TOKEN`.
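The `.env`-based token auto-fill could be sketched as below. The parsing rules and the first-match lookup order are assumptions for illustration, not OpenSwarm's actual code.

```typescript
// Illustrative .env parsing and token auto-fill for direct-API agents.
const KNOWN_KEYS = [
  "GOOGLE_API_KEY", "OPENAI_API_KEY", "GROQ_API_KEY", "ANTHROPIC_API_KEY",
  "TOGETHER_API_KEY", "FIREWORKS_API_KEY", "DEEPSEEK_API_KEY",
  "MISTRAL_API_KEY", "OPENCLAW_GATEWAY_TOKEN",
];

/** Parse simple KEY=value lines, ignoring comments and blanks. */
function parseDotEnv(text: string): Record<string, string> {
  const env: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const m = line.match(/^\s*([A-Z][A-Z0-9_]*)\s*=\s*(.*)$/);
    if (m) env[m[1]] = m[2].trim();
  }
  return env;
}

/** Pick the first known key present; agents without an explicit token use it. */
function autoToken(env: Record<string, string>): string | undefined {
  for (const key of KNOWN_KEYS) if (env[key]) return env[key];
  return undefined;
}
```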
```sh
git clone https://github.com/re-marked/openswarm.git
cd openswarm
npm install

# Dev mode (TypeScript, no build step)
npx tsx src/cli.ts

# Build
npm run build

# Type check
npm run type-check
```

Two runtime dependencies: chalk (colors) and ora (spinners). That's it.
MIT