Unitor plugin for Claude Code

Multi-AI collaboration for complex software projects.

This plugin orchestrates Codex, Gemini, and Claude to work together as a team - discussing requirements, negotiating API contracts, and implementing complete full-stack applications autonomously.

What You Get

  • /unitor:collab for multi-AI collaboration on complex tasks
  • /unitor:route for intelligent single-domain task routing
  • /unitor:config and /unitor:status for team management

Requirements

  • Claude Code
  • Node.js 18.18 or later
  • Gemini CLI (optional, for frontend collaboration)
  • Codex CLI (optional, for backend collaboration)

Note

Unitor works with just Claude. Gemini and Codex are optional but highly recommended for multi-AI collaboration. Without them, all tasks route to Claude.

Install

Add the marketplace in Claude Code:

/plugin marketplace add Done-0/unitor

Install the plugin:

/plugin install unitor@Done-0

Reload plugins:

/reload-plugins

The plugin is now ready. By default, Gemini and Codex are enabled but not required.

Setting Up Gemini (Optional)

If you want Gemini to handle frontend tasks:

npm install -g @google/gemini-cli
gemini  # First run for authorization

Setting Up Codex (Optional)

If you want Codex to handle backend tasks:

npm install -g @openai/codex
codex login

Check provider status:

/unitor:status

You should see which providers are available. If Gemini or Codex is not installed, it will show as unavailable and its tasks will route to Claude instead.

Setting Up Statusline (Optional)

Enable real-time statusline to show AI team activity:

/unitor:setup

The statusline shows:

  • Provider status (enabled/disabled)
  • Active collaborations with participants, phase, and discussion preview
  • Recent task activity when no active sessions

To disable:

claude config unset statusline.command

How It Works

Multi-AI Collaboration

When you run /unitor:collab, Claude (the coordinator) analyzes your task and orchestrates AI specialists:

  1. Coordinator analyzes task - Claude understands what needs to be built
  2. Defines specialist roles - Creates specific role descriptions for each part
  3. Routes to providers - Assigns roles to Codex, Gemini, or Claude based on expertise
  4. Round-table discussion - AIs discuss requirements until all participants contribute and reach understanding (dynamic rounds based on task complexity)
  5. Autonomous implementation - Each AI implements its part in the order defined by the coordinator
  6. Basic verification - System confirms files were created (coordinator reviews quality)

Example:

/unitor:collab "build user authentication with JWT"

What happens:

  • Claude analyzes: needs auth API, login UI, user database
  • Claude defines roles:
    • "JWT auth API - implement /login, /register, /refresh with token generation and validation"
    • "React login UI - build forms with validation and error handling"
    • "User database - design users table with password hashing"
  • Codex handles auth API
  • Gemini handles login UI
  • Codex handles database
  • AIs discuss and implement
  • System verifies integration

Result: Complete working application with all components integrated.
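The coordination loop described above can be sketched in a few lines. This is an illustrative Python sketch only; every name in it (`collaborate`, `providers`, the role strings) is hypothetical and does not reflect Unitor's actual implementation.

```python
# Illustrative sketch of the coordinator loop: define roles, run
# round-table discussion until everyone has contributed (dynamic
# rounds), implement in order, then do a basic verification pass.
# All names are hypothetical -- this is not Unitor's real code.

def collaborate(task, providers, max_rounds=3):
    """Discuss, implement, verify; returns True on basic verification."""
    roles = {p: f"{p} handles its part of: {task}" for p in providers}
    history = []
    for rnd in range(1, max_rounds + 1):               # round-table discussion
        for provider, role in roles.items():
            history.append((rnd, provider, role))
        if {p for _, p, _ in history} == set(roles):   # everyone contributed
            break
    outputs = [f"{p}.done" for p in roles]             # stand-in for created files
    return len(outputs) == len(roles)                  # basic verification
```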

Single-Domain Routing

For single-domain tasks, /unitor:route picks the best specialist:

  • Coordinator (Claude) sees each provider's capabilities (tags)
  • Analyzes the task requirements
  • Directly decides which provider is the best match
  • Routes to that provider for execution

Example: "fix button styling" → the coordinator sees that gemini has frontend-ui and css expertise → routes to Gemini

Routing is based on the coordinator's analysis of provider capabilities, not hardcoded keywords.
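A capability-tag match like this could look roughly as follows. The tag names mirror the README's example ("gemini has frontend-ui, css expertise"), but the scoring rule itself is an assumption for illustration, not Unitor's actual algorithm.

```python
# Hypothetical capability-tag routing: score each available provider
# by tag overlap with the task, fall back to Claude when nothing fits.

CAPABILITIES = {
    "claude": {"architecture", "security", "orchestration"},
    "gemini": {"frontend-ui", "css", "react"},
    "codex":  {"backend-api", "database", "python"},
}

def route(task_tags, available):
    """Pick the available provider with the most matching tags."""
    best = max(available, key=lambda p: len(CAPABILITIES[p] & task_tags))
    return best if CAPABILITIES[best] & task_tags else "claude"
```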

Usage

/unitor:collab

Orchestrate multiple AIs to collaborate on complex tasks.

Use it when you want:

  • Multiple AIs to work together on a full-stack feature
  • Backend and frontend implemented in one go
  • Real AI discussion and negotiation
  • Different perspectives on design, architecture, or content

Basic usage:

/unitor:collab "build user authentication: React login form + Express JWT API"
/unitor:collab "review and improve this API design"

With custom models:

# Specify individual models
/unitor:collab --claude=claude-opus-4-7 --codex=gpt-5.4 "complex architecture task"

# Compact format
/unitor:collab --models=claude:opus-4-7,codex:gpt-5.4,gemini:pro "task description"

Without model flags, uses default models from configuration.

Note

Collaboration takes 5-8 minutes. AIs discuss, negotiate, implement, and verify. This is real work, not instant generation.

/unitor:route

Route a single-domain task to the best specialist.

Use it when you want:

  • A frontend task handled by Gemini
  • A backend task handled by Codex
  • Quick routing without collaboration overhead

Examples:

/unitor:route "fix the login button styling"
/unitor:route "implement user authentication API"
/unitor:route "refactor authentication architecture"

/unitor:config

Manage your AI team configuration.

View current setup:

/unitor:config --show

Configure models:

/unitor:config --set-model gemini gemini-2.0-flash-exp
/unitor:config --set-model codex gpt-5.4

Enable/disable providers:

/unitor:config --enable gemini
/unitor:config --disable codex

/unitor:status

Check provider health and recent tasks.

/unitor:status
/unitor:status --json

Typical Flows

Multi-Domain Feature

/unitor:collab "build user profile page with API and React UI"

Single-Domain Task

/unitor:route "add search box to navigation"

Check Team Status

/unitor:status

Supported AI Providers

Provider | Best For                              | Default Model
---------|---------------------------------------|-------------------
Claude   | Architecture, security, orchestration | claude-sonnet-4-6
Gemini   | Frontend UI, CSS, React/Vue           | gemini-flash-latest
Codex    | Backend API, database, Python/Go      | gpt-5.4

Production Features

  • Real AI collaboration - Calls actual Codex and Gemini CLIs, not simulated
  • Autonomous consensus - AIs discuss until all participants contribute (dynamic rounds)
  • Universal file detection - Detects all file types (any language, any extension)
  • Basic verification - Confirms files created, coordinator reviews quality
  • Retry logic - 2 retries with exponential backoff for transient errors
  • Timeout protection - 300s default, configurable per provider
  • Cost protection - Max 50 provider calls per collaboration
  • Graceful degradation - Continues if one AI fails
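The retry behavior listed above (2 retries with exponential backoff on transient errors) can be sketched like this. `call` stands for any zero-argument provider invocation; the function name and error types are illustrative assumptions, not Unitor's API.

```python
import time

# Sketch of retry with exponential backoff: up to `retries` extra
# attempts on transient errors, doubling the delay each time. After
# the final failure the error propagates, and the caller can mark the
# provider as temporarily unavailable.

def call_with_retry(call, retries=2, base_delay=1.0):
    for attempt in range(retries + 1):
        try:
            return call()
        except (TimeoutError, ConnectionError):
            if attempt == retries:
                raise                                # caller handles degradation
            time.sleep(base_delay * 2 ** attempt)    # 1s, then 2s
```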

FAQ

Do I need Gemini and Codex installed?

For routing (/unitor:route): No. Without them, all tasks route to Claude.

For collaboration (/unitor:collab): Highly recommended. Multi-AI collaboration needs at least 2 different providers. With only Claude, you lose the collaboration benefit.

How does multi-AI collaboration work?

Unitor spawns real CLI processes:

  • Codex: codex exec "<prompt>"
  • Gemini: gemini --prompt "<prompt>"

Each AI receives full conversation history and responds naturally. The system detects consensus by analyzing responses for agreement signals and unresolved questions.

This is not simulated - it's real AI-to-AI communication.
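A minimal sketch of spawning those CLIs from Python: the argv shapes come straight from the commands above (`codex exec "<prompt>"`, `gemini --prompt "<prompt>"`), but the wrapper functions themselves are illustrative, not Unitor's code.

```python
import subprocess

# Spawn a real provider CLI process and capture its reply. The full
# conversation history would be folded into `prompt` by the caller.

def build_command(provider, prompt):
    """Return the argv used to invoke a provider CLI."""
    return {
        "codex":  ["codex", "exec", prompt],
        "gemini": ["gemini", "--prompt", prompt],
    }[provider]

def ask_provider(provider, prompt, timeout=300):
    """Run the CLI and return its stdout; raises on timeout."""
    result = subprocess.run(build_command(provider, prompt),
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout
```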

Why does collaboration take several minutes?

Real AI collaboration involves:

  • Discussion rounds (30-60s per AI per round)
  • File creation (2-5 minutes for complete projects)
  • Verification (reading and validating files)

A typical 3-round collaboration takes 5-8 minutes. This is production-grade work.

What if an AI times out?

The system retries twice. If both fail:

  • Marks the AI as temporarily unavailable
  • Continues with remaining AIs
  • Other AIs can still complete their parts

Can I see the conversation history?

Yes. The collaboration output shows all rounds with full AI responses.

How do I install Gemini CLI?

npm install -g @google/gemini-cli
gemini  # First run for authorization

How do I install Codex CLI?

npm install -g @openai/codex
codex login

Will it use my existing CLI config?

Yes. Unitor uses your local CLI installations and picks up existing authentication and configuration.

Can I use different models?

Yes:

/unitor:config --set-model gemini gemini-2.0-flash-exp
/unitor:config --set-model codex gpt-5.4

What happens if a provider fails?

For routing: Retries, then falls back to Claude.

For collaboration: Retries twice, then marks as unavailable and continues with remaining AIs.

How much does collaboration cost?

Depends on your provider pricing:

  • Codex (OpenAI): ~$0.01-0.05 per collaboration
  • Gemini (Google): Often has free tier
  • Cost protection: Max 50 provider calls per collaboration

Typical 3-round collaboration: 6-10 API calls total.
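That estimate follows from simple arithmetic, assuming one discussion call per participant per round plus one implementation call each (an assumption for illustration); the 50-call cap mirrors the cost protection described earlier.

```python
# Back-of-the-envelope call count for one collaboration, capped by
# the cost-protection limit. Illustrative only.

def estimated_calls(rounds, participants, cap=50):
    return min(rounds * participants + participants, cap)
```

With 3 rounds and 2 participants this gives 8 calls, consistent with the 6-10 range above.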

License

MIT
