Multi-AI collaboration for complex software projects.
This plugin orchestrates Codex, Gemini, and Claude to work together as a team - discussing requirements, negotiating API contracts, and implementing complete full-stack applications autonomously.
- `/unitor:collab` for multi-AI collaboration on complex tasks
- `/unitor:route` for intelligent single-domain task routing
- `/unitor:config` and `/unitor:status` for team management
- Claude Code
- Node.js 18.18 or later
- Gemini CLI (optional, for frontend collaboration)
- Codex CLI (optional, for backend collaboration)
Note
Unitor works with just Claude. Gemini and Codex are optional but highly recommended for multi-AI collaboration. Without them, all tasks route to Claude.
Add the marketplace in Claude Code:
```
/plugin marketplace add Done-0/unitor
```

Install the plugin:

```
/plugin install unitor@Done-0
```

Reload plugins:

```
/reload-plugins
```

The plugin is now ready. By default, Gemini and Codex are enabled but not required.
If you want Gemini to handle frontend tasks:
```
npm install -g @google/gemini-cli
gemini  # First run for authorization
```

If you want Codex to handle backend tasks:

```
npm install -g @openai/codex
codex login
```

Check provider status:

```
/unitor:status
```

You should see which providers are available. If Gemini or Codex are not installed, they will show as unavailable and tasks will route to Claude instead.
Enable real-time statusline to show AI team activity:
```
/unitor:setup
```

The statusline shows:
- Provider status (enabled/disabled)
- Active collaborations with participants, phase, and discussion preview
- Recent task activity when no active sessions
To disable:
```
claude config unset statusline.command
```

When you run `/unitor:collab`, Claude (the coordinator) analyzes your task and orchestrates AI specialists:
- Coordinator analyzes task - Claude understands what needs to be built
- Defines specialist roles - Creates specific role descriptions for each part
- Routes to providers - Assigns roles to Codex, Gemini, or Claude based on expertise
- Round-table discussion - AIs discuss requirements until all participants contribute and reach understanding (dynamic rounds based on task complexity)
- Autonomous implementation - Each AI implements their part in the order defined by coordinator
- Basic verification - System confirms files were created (coordinator reviews quality)
Example:
```
/unitor:collab "build user authentication with JWT"
```

What happens:
- Claude analyzes: needs auth API, login UI, user database
- Claude defines roles:
- "JWT auth API - implement /login, /register, /refresh with token generation and validation"
- "React login UI - build forms with validation and error handling"
- "User database - design users table with password hashing"
- Codex handles auth API
- Gemini handles login UI
- Codex handles database
- AIs discuss and implement
- System verifies integration
Result: Complete working application with all components integrated.
For single-domain tasks, /unitor:route picks the best specialist:
- Coordinator (Claude) sees each provider's capabilities (tags)
- Analyzes the task requirements
- Directly decides which provider is the best match
- Routes to that provider for execution
Example: "fix button styling" → coordinator sees gemini has frontend-ui, css expertise → routes to Gemini
Routing is based on coordinator's analysis of provider capabilities, not hardcoded keywords.
Orchestrate multiple AIs to collaborate on complex tasks.
Use it when you want:
- Multiple AIs to work together on a full-stack feature
- Backend and frontend implemented in one go
- Real AI discussion and negotiation
- Different perspectives on design, architecture, or content
Basic usage:
```
/unitor:collab "build user authentication: React login form + Express JWT API"
/unitor:collab "review and improve this API design"
```

With custom models:

```
# Specify individual models
/unitor:collab --claude=claude-opus-4-7 --codex=gpt-5.4 "complex architecture task"

# Compact format
/unitor:collab --models=claude:opus-4-7,codex:gpt-5.4,gemini:pro "task description"
```

Without model flags, Unitor uses the default models from your configuration.
Note
Collaboration takes 5-8 minutes. AIs discuss, negotiate, implement, and verify. This is real work, not instant generation.
Route a single-domain task to the best specialist.
Use it when you want:
- A frontend task handled by Gemini
- A backend task handled by Codex
- Quick routing without collaboration overhead
Examples:
```
/unitor:route "fix the login button styling"
/unitor:route "implement user authentication API"
/unitor:route "refactor authentication architecture"
```

Manage your AI team configuration.
View current setup:
```
/unitor:config --show
```

Configure models:

```
/unitor:config --set-model gemini gemini-2.0-flash-exp
/unitor:config --set-model codex gpt-5.4
```

Enable/disable providers:

```
/unitor:config --enable gemini
/unitor:config --disable codex
```

Check provider health and recent tasks.
```
/unitor:status
/unitor:status --json
```

Quick examples:

```
/unitor:collab "build user profile page with API and React UI"
/unitor:route "add search box to navigation"
/unitor:status
```

| Provider | Best For | Default Model |
|---|---|---|
| Claude | Architecture, security, orchestration | claude-sonnet-4-6 |
| Gemini | Frontend UI, CSS, React/Vue | gemini-flash-latest |
| Codex | Backend API, database, Python/Go | gpt-5.4 |
- Real AI collaboration - Calls actual Codex and Gemini CLIs, not simulated
- Autonomous consensus - AIs discuss until all participants contribute (dynamic rounds)
- Universal file detection - Detects all file types (any language, any extension)
- Basic verification - Confirms files created, coordinator reviews quality
- Retry logic - 2 retries with exponential backoff for transient errors
- Timeout protection - 300s default, configurable per provider
- Cost protection - Max 50 provider calls per collaboration
- Graceful degradation - Continues if one AI fails
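The retry, timeout, and cost-cap behavior above could work roughly like this (an illustrative sketch only, not the plugin's actual source; the `callWithRetry` name, delay values, and error messages are assumptions):

```javascript
// Hypothetical sketch of retry with exponential backoff, a per-call timeout,
// and a per-collaboration call cap. Not Unitor's actual implementation.
const MAX_CALLS_PER_COLLAB = 50; // cost protection
let callCount = 0;

async function callWithRetry(providerFn, { retries = 2, timeoutMs = 300_000 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    if (++callCount > MAX_CALLS_PER_COLLAB) {
      throw new Error("cost cap reached: max provider calls exceeded");
    }
    let timer;
    try {
      // Race the provider call against the configurable timeout.
      return await Promise.race([
        providerFn(),
        new Promise((_, reject) => {
          timer = setTimeout(() => reject(new Error("provider timed out")), timeoutMs);
        }),
      ]);
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: caller marks AI unavailable
      const backoff = 1000 * 2 ** attempt; // exponential backoff: 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, backoff));
    } finally {
      clearTimeout(timer);
    }
  }
}
```

A transient failure is retried after a short delay; only after all retries fail does the error propagate, which is what lets the collaboration continue with the remaining AIs.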
For routing (/unitor:route): No. Without them, all tasks route to Claude.
For collaboration (/unitor:collab): Highly recommended. Multi-AI collaboration needs at least 2 different providers. With only Claude, you lose the collaboration benefit.
Unitor spawns real CLI processes:
- Codex: `codex exec "<prompt>"`
- Gemini: `gemini --prompt "<prompt>"`
Each AI receives full conversation history and responds naturally. The system detects consensus by analyzing responses for agreement signals and unresolved questions.
This is not simulated - it's real AI-to-AI communication.
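Conceptually, the consensus check scans each response for agreement signals and unresolved questions; a simplified illustration (the actual signal phrases and logic in Unitor are assumptions here):

```javascript
// Simplified illustration of consensus detection. The real signal phrases
// and analysis in Unitor are not documented here; these lists are made up.
const AGREEMENT_SIGNALS = ["agree", "sounds good", "no objections", "lgtm"];
const OPEN_QUESTION_SIGNALS = ["?", "unclear", "need to decide", "what about"];

function hasConsensus(responses) {
  // Consensus: every participant signals agreement and raises no open questions.
  return responses.every((text) => {
    const t = text.toLowerCase();
    const agrees = AGREEMENT_SIGNALS.some((s) => t.includes(s));
    const openQuestions = OPEN_QUESTION_SIGNALS.some((s) => t.includes(s));
    return agrees && !openQuestions;
  });
}
```

If any participant still has an open question, another discussion round runs, which is why the number of rounds is dynamic.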
Real AI collaboration involves:
- Discussion rounds (30-60s per AI per round)
- File creation (2-5 minutes for complete projects)
- Verification (reading and validating files)
A typical 3-round collaboration takes 5-8 minutes. This is production-grade work.
The system retries twice. If both fail:
- Marks the AI as temporarily unavailable
- Continues with remaining AIs
- Other AIs can still complete their parts
Yes. The collaboration output shows all rounds with full AI responses.
```
npm install -g @google/gemini-cli
gemini  # First run for authorization
```

```
npm install -g @openai/codex
codex login
```

Yes. Unitor uses your local CLI installations and picks up existing authentication and configuration.
Yes:
```
/unitor:config --set-model gemini gemini-2.0-flash-exp
/unitor:config --set-model codex gpt-5.4
```

For routing: Retries, then falls back to Claude.
For collaboration: Retries twice, then marks as unavailable and continues with remaining AIs.
Depends on your provider pricing:
- Codex (OpenAI): ~$0.01-0.05 per collaboration
- Gemini (Google): Often has free tier
- Cost protection: Max 50 provider calls per collaboration
Typical 3-round collaboration: 6-10 API calls total.
MIT