A sophisticated multi-agent system designed for Claude Code to manage complex development workflows, particularly optimized for GPU acceleration projects like OpenFOAM.
- Clone this repository or copy the agent files to your project:

  ```bash
  # If in an existing project, copy the agents directory
  cp -r agents/ your-project/
  cp -r .claude/ your-project/
  ```

- Run the setup script:

  ```bash
  python setup_agents.py
  ```

- Test the installation:

  ```bash
  python setup_agents.py --test
  ```

Start an interactive session, launch a workflow, or check status:

```bash
python orchestrator.py --interactive
python orchestrator.py --workflow "Implement GPU acceleration for k-epsilon model"
python orchestrator.py --status
```

The system consists of four specialized agents, each with distinct responsibilities:
```
┌──────────────┐
│   Manager    │ ← Coordinates workflow
└──────┬───────┘
       │
    ┌──┴──────┬─────────┬─────────┐
    ▼         ▼         ▼         ▼
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│Planner│ │ Actor │ │Testor │ │Logger │
└───────┘ └───────┘ └───────┘ └───────┘
```
**Manager**
- Purpose: Workflow coordination and task delegation
- Responsibilities:
- Decompose high-level tasks into subtasks
- Assign tasks to appropriate agents
- Track progress and manage state
- Coordinate multi-agent workflows
- Output: `doc/manager/todos.md`, `workflow_status.json`
**Planner**
- Purpose: Create detailed plans and strategies
- Responsibilities:
- Analyze requirements and constraints
- Create step-by-step implementation plans
- Design test strategies
- Identify dependencies and risks
- Output: `doc/planner/current_plan.md`, `test_strategy.md`
**Actor**
- Purpose: Strict execution of detailed plans
- Responsibilities:
- Validate plan executability
- Execute commands exactly as specified
- Implement code according to specifications
- Document all actions taken
- Output: `doc/actor/execution_details.md`, `implementation_log.md`
**Testor**
- Purpose: Rigorous testing and validation
- Responsibilities:
- Execute test plans
- Run performance benchmarks
- Compare results (CPU vs GPU)
- Generate test reports
- Output: `doc/testor/test_report.md`, `performance_metrics.json`
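The division of labor above amounts to the Manager handing typed tasks to the other agents. Below is a minimal sketch of that hand-off; the `Task` dataclass, field names, and `delegate()` helper are illustrative assumptions, not the actual `agents/` API — only the `execute()` method name comes from this README.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    description: str
    assigned_to: str          # "planner", "actor", "testor", or "logger"
    status: str = "pending"   # pending -> in_progress -> done/failed
    outputs: list = field(default_factory=list)

def delegate(task: Task, registry: dict) -> Task:
    """Hand a task to the agent named in assigned_to and record the result."""
    agent = registry[task.assigned_to]   # registry maps agent name -> agent object
    task.status = "in_progress"
    result = agent.execute(task)         # each agent exposes execute() per this README
    task.outputs.append(result)
    task.status = "done"
    return task
```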
Use these commands in Claude Code or via the orchestrator:
```
/manager start-workflow "description" [priority=high] [category=gpu]
/manager status
/manager assign <task_id> to <agent>

/planner create-plan "description"
/planner optimize-plan <plan_id>
/planner test-strategy "target"

/actor execute <plan_id>
/actor validate <plan_id>

/testor run <test_id>
/testor benchmark <target>
```
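Under the hood, `command_router.py` maps slash commands like these to agent actions. The following is a rough sketch of how such routing could work with regular expressions; the pattern table and handler names are assumptions, not the real `CommandRouter` internals.

```python
import re

# Illustrative pattern table: slash command -> (agent, action)
ROUTES = [
    (re.compile(r'^/manager\s+start-workflow\s+"(?P<desc>[^"]+)"'), ("manager", "start_workflow")),
    (re.compile(r'^/planner\s+create-plan\s+"(?P<desc>[^"]+)"'), ("planner", "create_plan")),
    (re.compile(r'^/actor\s+execute\s+(?P<plan_id>\S+)'), ("actor", "execute_plan")),
    (re.compile(r'^/testor\s+run\s+(?P<test_id>\S+)'), ("testor", "run_test")),
]

def route(command: str):
    """Return (agent, action, args) for the first matching pattern."""
    for pattern, (agent, action) in ROUTES:
        match = pattern.match(command)
        if match:
            return agent, action, match.groupdict()
    raise ValueError(f"Unrecognized command: {command}")

print(route('/actor execute plan_001'))  # ('actor', 'execute_plan', {'plan_id': 'plan_001'})
```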
```
project/
├── agents/                  # Agent implementations
│   ├── __init__.py
│   ├── base.py              # Base agent class
│   ├── state_manager.py     # State persistence
│   ├── command_router.py    # Command routing
│   ├── logger.py            # Logging utility
│   ├── manager/             # Manager agent
│   ├── planner/             # Planner agent
│   ├── actor/               # Actor agent
│   └── testor/              # Testor agent
├── .claude/                 # Claude Code configuration
│   ├── commands/            # Slash command definitions
│   └── config.json          # System configuration
├── .agent-state/            # Runtime state (gitignored)
│   ├── current_workflow.json
│   ├── task_queue.json
│   └── session_logs/
├── doc/                     # Agent documentation
│   ├── manager/             # Manager outputs
│   ├── planner/             # Planner outputs
│   ├── actor/               # Actor outputs
│   └── testor/              # Testor outputs
├── orchestrator.py          # Main orchestrator
├── setup_agents.py          # Setup script
└── README.md                # This file
```
Here's a complete workflow for implementing GPU acceleration:

```
/manager "Implement GPU-accelerated k-epsilon turbulence model" priority=high
```

1. Manager
   - Decomposes the task into subtasks
   - Creates a TODO list in `doc/manager/todos.md`
   - Assigns the planning task to the Planner
2. Planner
   - Analyzes requirements
   - Creates implementation steps
   - Generates a test strategy
   - Outputs to `doc/planner/current_plan.md`
3. Manager
   - Reviews plan completeness
   - Assigns execution to the Actor
   - Updates workflow status
4. Actor
   - Validates that the plan is executable
   - Executes commands step by step
   - Documents actions in `doc/actor/execution_details.md`
5. Testor
   - Runs the test plan from the Planner
   - Compares CPU vs GPU results
   - Generates a report in `doc/testor/test_report.md`
6. Manager
   - Reviews the test results
   - Updates workflow status
   - Documents completion
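Each step leaves an artifact under `doc/`, so a quick way to verify a run completed is to check that the expected outputs exist. A small self-contained check (the four paths come from this README):

```python
from pathlib import Path

# Expected artifacts after a complete workflow run.
EXPECTED = [
    "doc/manager/todos.md",
    "doc/planner/current_plan.md",
    "doc/actor/execution_details.md",
    "doc/testor/test_report.md",
]

for doc in EXPECTED:
    status = "found" if Path(doc).exists() else "missing"
    print(f"{status:>7}: {doc}")
```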
Edit `.claude/config.json` to customize:
```json
{
  "agents": {
    "manager": {
      "auto_decompose": true,
      "max_subtasks": 10
    },
    "actor": {
      "safe_mode": true,
      "require_validation": true
    },
    "testor": {
      "default_iterations": 3,
      "performance_tracking": true
    }
  }
}
```

The system maintains state across sessions:
- Current workflow status
- Task queue
- Agent execution history
- Performance metrics
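Because the state lives as plain JSON under `.agent-state/`, it is easy to inspect outside the orchestrator. A minimal sketch follows; the file paths come from the project layout above, but the key names inside the files are assumptions.

```python
import json
from pathlib import Path

STATE_DIR = Path(".agent-state")

def load_state(name: str) -> dict:
    """Read one of the JSON state files, returning {} if it doesn't exist yet."""
    path = STATE_DIR / name
    if not path.exists():
        return {}
    return json.loads(path.read_text())

workflow = load_state("current_workflow.json")
queue = load_state("task_queue.json")
# "tasks" is an assumed key; adjust to the actual file contents.
print(f"Workflow keys: {sorted(workflow)}; queued tasks: {len(queue.get('tasks', []))}")
```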
Error handling and recovery features include (a retry sketch follows this list):
- Automatic task retry on failure
- Graceful degradation
- Comprehensive error logging
- Rollback capabilities
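A retry-on-failure policy like the one described could look roughly like this; it is a sketch, and the agents' actual retry logic may differ.

```python
import logging
import time

def run_with_retry(fn, *args, max_attempts=3, delay_s=1.0):
    """Call fn, retrying on exception up to max_attempts with a fixed delay."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args)
        except Exception:
            logging.exception("Attempt %d/%d failed", attempt, max_attempts)
            if attempt == max_attempts:
                raise                # give up after the last attempt
            time.sleep(delay_s)      # brief pause before retrying
```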
Performance monitoring covers (see the timing sketch after this list):
- Execution time metrics
- Resource usage monitoring
- Agent performance statistics
- Workflow optimization suggestions
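Execution-time metrics can be collected with something as simple as a context manager. This is a sketch, not the project's actual instrumentation; the `"actor.execute"` label is illustrative.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str, metrics: dict):
    """Record the wall-clock duration of a block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics[label] = time.perf_counter() - start

metrics = {}
with timed("actor.execute", metrics):
    sum(range(1_000_000))            # stand-in for real agent work
print(metrics)
```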
If something goes wrong, check these common issues first:

- Import errors:

  ```bash
  # Ensure the Python path includes the project root
  export PYTHONPATH=$PYTHONPATH:$(pwd)
  ```

- Permission errors:

  ```bash
  # Make scripts executable
  chmod +x orchestrator.py setup_agents.py
  ```

- State corruption:

  ```bash
  # Reset system state
  python setup_agents.py --reset
  ```
Enable detailed logging in `.claude/config.json`:

```json
"workflow": {
  "logging_level": "DEBUG"
}
```

- Commit the agent system:
  ```bash
  git add agents/ .claude/ orchestrator.py setup_agents.py
  git commit -m "Add multi-agent workflow system"
  git push
  ```

- On a new machine:

  ```bash
  git clone <repository>
  cd <project>
  python setup_agents.py
  ```

For projects using Docker (like OpenFOAM):
```
# Agents automatically use Docker commands
/actor execute docker_build_plan
```

The system tracks:
- Workflow completion rates
- Agent execution counts
- Task success/failure rates
- Performance benchmarks
- Time-to-completion metrics
Access metrics:
```bash
python orchestrator.py --status | jq '.metrics'
```

To add a new agent:

- Create a new agent class inheriting from `BaseAgent`
- Implement the required methods: `execute()`, `validate_task()`
- Add the agent to the orchestrator agent registry
- Create command patterns in `command_router.py`
- Add slash command documentation

Other extension points:

- Custom workflow templates in `agents/*/templates/`
- New command patterns in `CommandRouter`
- Additional state tracking in `StateManager`
- Enhanced logging in `AgentLogger`
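Putting the extension points together, a new agent might look like the sketch below. `BaseAgent`, `execute()`, `validate_task()`, and the `agents/base.py` location come from this README; everything else (the task dictionary shape and return types) is an assumption about the actual base class.

```python
from agents.base import BaseAgent  # base class shipped with the system

class ReviewerAgent(BaseAgent):
    """Hypothetical agent that reviews Actor output before testing."""

    def validate_task(self, task: dict) -> bool:
        # Only accept tasks explicitly addressed to this agent.
        return task.get("assigned_to") == "reviewer"

    def execute(self, task: dict) -> dict:
        # Real logic would read doc/actor/execution_details.md and comment on it.
        findings = ["no blocking issues found"]
        return {"task_id": task.get("task_id"), "findings": findings}
```

After defining the class, register it in the orchestrator's agent registry and add matching patterns to `command_router.py` so it can be reached via slash commands.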
This multi-agent system is designed for use with Claude Code and is provided as-is for educational and development purposes.
Designed specifically for Claude Code to enhance development workflow management, particularly for complex projects like GPU acceleration of computational fluid dynamics software.
For issues or questions:
- Check the troubleshooting section
- Review agent logs in `.agent-state/session_logs/`
- Run the system test: `python setup_agents.py --test`
- Create an issue with detailed logs and configuration