ORBIT is an intelligent multi-agent orchestration framework built using the Actor Model pattern with Thespian. It enables dynamic routing of user queries to specialized AI agents based on intent detection, leveraging Large Language Models (LLMs) for intelligent response generation.
- Features
- Architecture Overview
- Code Flow
- Prerequisites
- Installation
- Running the Application
- Project Structure
- Building Custom Agents
- Model Adapters
- Services
- Message Types
- Chain of Responsibility Pattern
- Contributing
- License
- Contact
- Multi-Agent Architecture: Modular agent system with specialized agents for different tasks
- Intent Detection: Automatic routing of queries to appropriate agents using AI
- Multiple LLM Support: Supports Ollama (Llama), GitHub Copilot, OpenAI, and Claude models
- Actor Model: Built on Thespian for concurrent, distributed agent communication
- Chain of Responsibility: Flexible message handling and validation pipeline
- Repository Analysis: Ingest and analyze GitHub repositories for context-aware responses
- MCP Integration: Model Context Protocol support for external tool integration
┌─────────────────────────────────────────────────────────────────┐
│ User Query │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ OrchestratorAgent │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Chain of Responsibility Validators │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ LLMResponse │──│ Query │──│ Intent │ │ │
│ │ │ Validator │ │ Validator │ │ Validator │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────────┴───────────────────┐
│ QueryValidator routes │
│ QueryMessage to IntentAgent │
▼ │
┌───────────────────────────────────┐ │
│ IntentAgent │ │
│ (Analyzes query & determines │ │
│ which agent should handle) │ │
└───────────────────────────────────┘ │
│ │
│ Returns IntentAgentMessage │
│ (contains target agent name) │
▼ │
┌─────────────────────────────────────────────────────────────────┐
│ OrchestratorAgent │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ IntentAgentMessageValidator identifies sender, │ │
│ │ creates the target agent & forwards the query │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────┼───────────────┐
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ OrbitAgent │ │Troubleshooting│ │ CustomAgent │
│ │ │ Agent │ │ (Yours) │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
│ LLM Processing │
└───────────────┼───────────────┘
│
▼ Returns LLMMessage
┌─────────────────────────────────────────────────────────────────┐
│ OrchestratorAgent │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ LLMResponseValidator extracts response & sends to user │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────┐
│ User │
│ (Response) │
└──────────────┘
- Initializes the Thespian ActorSystem with capabilities from `capabilities.json`
- Creates the `OrchestratorAgent` as the main entry point
- Registers specialized agents in the `AgentRegistry`
- Prompts the user for a query and wraps it in a `QueryMessage`
QueryMessage → OrchestratorAgent → MessageTypeResolver (Chain of Responsibility)
- `QueryMessageValidator` routes the `QueryMessage` to the `IntentAgent`
- `IntentAgent` uses an LLM to analyze the query and determine the appropriate agent
- Returns an `IntentAgentMessage` with the target agent name
- `IntentAgentMessageValidator` receives the intent response
- Looks up the target agent in the `AgentRegistry`
- Creates and forwards the query to the specialized agent
- The target agent (e.g., `OrbitAgent`, `TroubleshootingAgent`) processes the query
- May fetch repository data using `Repo2TextService`
- Generates a response using the configured LLM
- Returns an `LLMMessage` to the orchestrator
- `LLMResponseValidator` extracts the final response
- The response is sent back to the original sender (the user)
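The flow above can be sketched as plain Python, with Thespian stripped out. This is a simplified illustration only: the class names mirror ORBIT's real ones, but `detect_intent` stands in for the `IntentAgent`'s LLM call and the dict stands in for the `AgentRegistry`.

```python
# Simplified sketch of ORBIT's routing flow, without Thespian actors.
# Names mirror the real classes but this is illustrative, not the actual code.

class QueryMessage:
    def __init__(self, query: str):
        self.query = query

class IntentAgentMessage:
    def __init__(self, query: str, target_agent: str):
        self.query = query
        self.target_agent = target_agent

def detect_intent(message: QueryMessage) -> IntentAgentMessage:
    # Stand-in for the IntentAgent's LLM call: naive keyword routing.
    target = "TroubleshootingAgent" if "error" in message.query.lower() else "OrbitAgent"
    return IntentAgentMessage(message.query, target)

# Stand-in for the AgentRegistry: name -> handler
AGENT_REGISTRY = {
    "OrbitAgent": lambda q: f"[OrbitAgent] answering: {q}",
    "TroubleshootingAgent": lambda q: f"[TroubleshootingAgent] diagnosing: {q}",
}

def orchestrate(user_query: str) -> str:
    intent = detect_intent(QueryMessage(user_query))       # intent detection
    handler = AGENT_REGISTRY[intent.target_agent]          # registry lookup
    return handler(intent.query)                           # forward to agent

print(orchestrate("I get an error when starting Ollama"))
# → [TroubleshootingAgent] diagnosing: I get an error when starting Ollama
```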
- Python 3.13+
- Conda (Miniconda or Anaconda)
- Git
- Ollama (for local LLM - optional)
- GitHub Personal Access Token (for repository analysis)
```bash
# Create a new conda environment with Python 3.13
conda create -n actorenv python=3.13

# Activate the environment
conda activate actorenv
```

Verify the installation:

```bash
python --version
# Should output: Python 3.13.x
```

```bash
# Navigate to the project directory
cd /path/to/orbit

# Install all required packages
pip install -r requirements.txt
```

Key Dependencies:
| Package | Purpose |
|---|---|
| `thespian` | Actor model framework for agent communication |
| `openai` | OpenAI API client |
| `anthropic` | Claude API client |
| `ghcopilot` | GitHub Copilot integration |
| `gitingest` | Repository-to-text conversion |
| `loguru` | Logging framework |
| `tiktoken` | Token counting for LLM context management |
| `python-dotenv` | Environment variable management |
| `mcp` | Model Context Protocol SDK |
Create a `.env` file in the project root:

```bash
touch .env
```

Add the following configuration:

```bash
# GitHub Personal Access Token (Required for repository analysis)
PAT_TOKEN=your_github_personal_access_token

# OpenAI API Key (Optional - if using OpenAI models)
OPENAI_API_KEY=your_openai_api_key

# Anthropic API Key (Optional - if using Claude models)
ANTHROPIC_API_KEY=your_anthropic_api_key

# Default timeout for operations
DEFAULT_TIMEOUT=300
```

To generate a GitHub PAT:

- Go to GitHub → Settings → Developer Settings → Personal Access Tokens
- Generate a new token with the `repo` scope
- Copy and paste it into the `.env` file
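ORBIT presumably loads these values via `python-dotenv` (listed in the dependencies). For illustration, here is a stdlib-only sketch of roughly what `load_dotenv()` does; it is deliberately simplified (it overwrites existing values and skips quoting/multiline support, which the real library handles).

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Stdlib-only sketch of what python-dotenv's load_dotenv() does:
    parse KEY=VALUE lines, skip comments and blanks, export the rest.
    (Simplified: overwrites existing values, no quoting/multiline support.)"""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demo against a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# comment\nPAT_TOKEN=abc123\nDEFAULT_TIMEOUT=300\n")
    env_path = fh.name

load_env_file(env_path)
print(os.environ["DEFAULT_TIMEOUT"])  # → 300
```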
If using Ollama for local LLM inference:

```bash
# Install Ollama (macOS)
brew install ollama

# Start Ollama server
ollama serve

# Pull the Llama model (in a new terminal)
ollama pull llama3
```

The default configuration expects Ollama to be running at `http://127.0.0.1:11434`.
```bash
# Ensure the conda environment is active
conda activate actorenv

# Run the application
python start.py
```

Expected output:
___ ____ ____ ___ _____ _ _
/ _ \| _ \| __ )_ _|_ _| || |
| | | | |_) | _ \| | | | | || |_
| |_| | _ <| |_) | | | | |__ _|
\___/|_| \_\____/___| |_| |_|
Welcome...
To ORBIT
Press any key to continue . . .
Enter your query when prompted, and ORBIT will route it to the appropriate agent.
orbit/
├── start.py # Application entry point
├── start_mcp_server.py # MCP server for external integration
├── capabilities.json # ActorSystem configuration
├── requirements.txt # Python dependencies
├── .env # Environment variables (create this)
│
├── src/
│ ├── actor_system/ # Actor system initialization
│ │ └── __init__.py
│ │
│ ├── agent_registry/ # Agent registration and discovery
│ │ ├── __init__.py
│ │ └── register.py
│ │
│ ├── agents/ # Agent implementations
│ │ ├── intentAgent/ # Intent detection agent
│ │ │ ├── __init__.py
│ │ │ └── intentAgentGuidelines.md
│ │ ├── orbitAgent/ # Framework assistance agent
│ │ │ ├── __init__.py
│ │ │ └── orbitAgentInstructions.md
│ │ ├── troubleshootingAgent/ # Technical troubleshooting agent
│ │ │ ├── __init__.py
│ │ │ ├── troubleshootingGuidelines.md
│ │ │ └── repo_details.json
│ │ └── mcpToolsAgent/ # MCP tools integration agent
│ │ ├── __init__.py
│ │ └── mcpToolsAgentGuidelines.md
│ │
│ ├── orchestrator/ # Main orchestration logic
│ │ ├── __init__.py
│ │ ├── messageTypeResolver.py
│ │ ├── queryMessageValidator.py
│ │ ├── intentAgentMessageValidator.py
│ │ ├── llmResponseValidator.py
│ │ └── actorMessageValidator.py
│ │
│ ├── chain/ # Chain of Responsibility pattern
│ │ └── baseHandler.py
│ │
│ ├── messages/ # Message type definitions
│ │ ├── query.py
│ │ ├── intent_agent_message.py
│ │ └── llm_message.py
│ │
│ ├── model/ # LLM model adapters
│ │ ├── model_interface.py
│ │ ├── model_adapter.py
│ │ ├── llama_model.py
│ │ ├── copilot_model.py
│ │ ├── open_ai.py
│ │ └── claude.py
│ │
│ └── services/ # External service integrations
│ ├── service_interface.py
│ ├── file/
│ ├── repo2Text/
│ ├── mcp_client/
│ └── singleton/
│
└── temp/ # Temporary files
```bash
mkdir -p src/agents/myCustomAgent
touch src/agents/myCustomAgent/__init__.py
touch src/agents/myCustomAgent/guidelines.md
```

Create your agent in `src/agents/myCustomAgent/__init__.py`:
```python
from pathlib import Path

from loguru import logger
from thespian.actors import Actor

from src.messages.intent_agent_message import IntentAgentMessage
from src.messages.llm_message import LLMMessage
from src.model.llama_model import LlamaModel
from src.model.model_adapter import ModelAdapter
from src.services.file import FileService


class MyCustomAgent(Actor):
    """
    Custom agent for handling specific domain queries.
    """

    def __init__(self):
        super().__init__()
        self.model = ModelAdapter(LlamaModel())
        self.agent_name = "MyCustomAgent"

    def receiveMessage(self, message, sender):
        """
        Handle incoming messages from the orchestrator.

        Args:
            message: The incoming message (usually IntentAgentMessage)
            sender: The address of the sending actor
        """
        if isinstance(message, IntentAgentMessage):
            query = message.query
            logger.info(f"[{self.agent_name}] Received query: {query}")

            # Load agent-specific instructions
            file_path = Path(__file__).parent
            instructions = FileService().read_file(file_path / "guidelines.md")

            # Add any custom context or data processing here
            complete_prompt = f"User Query: {query}"

            # Generate a response using the LLM
            response_text = self.model.generate(
                prompt=complete_prompt,
                instruction=instructions
            )

            # Wrap and send the response back
            self.send(sender, LLMMessage(response_text))
        else:
            self.send(sender, f"Unknown message type for {self.agent_name}")
```

Create `src/agents/myCustomAgent/guidelines.md`:
```markdown
# MyCustomAgent Guidelines

You are a specialized agent for [your domain].

## Your Responsibilities:
- Handle queries related to [specific topic]
- Provide detailed and accurate responses
- Reference relevant documentation when available

## Response Format:
- Be concise but thorough
- Use code examples when applicable
- Structure your response with clear sections
```

In `src/agent_registry/__init__.py`, register your agent:
```python
from src.agent_registry.register import AgentRegistry
from src.agents.myCustomAgent import MyCustomAgent


def register_agents():
    agent_registry = AgentRegistry()
    # ... existing registrations ...
    agent_registry.register_agent(
        "MyCustomAgent",
        MyCustomAgent,
        description="Agent specialized in [your domain description]."
    )
```

The `IntentAgent` will automatically consider your new agent based on its description in the registry. Ensure the description clearly indicates when to use your agent.
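The registry itself lives in `src/agent_registry/register.py`. As a rough mental model (an assumption about its shape, not the actual implementation), it maps an agent name to its actor class plus the description the `IntentAgent` routes on:

```python
class AgentRegistry:
    """Plausible minimal sketch of ORBIT's agent registry: maps a name to
    an actor class plus the description the IntentAgent uses for routing.
    The real implementation in src/agent_registry/register.py may differ."""

    def __init__(self):
        self._agents = {}

    def register_agent(self, name, agent_class, description=""):
        self._agents[name] = {"class": agent_class, "description": description}

    def get_agent(self, name):
        return self._agents[name]["class"]

    def describe_agents(self) -> str:
        # A summary like this could feed the IntentAgent's routing prompt.
        return "\n".join(
            f"- {name}: {info['description']}" for name, info in self._agents.items()
        )


class MyCustomAgent:  # placeholder for the real Actor subclass
    pass


registry = AgentRegistry()
registry.register_agent("MyCustomAgent", MyCustomAgent,
                        description="Handles [your domain] queries.")
print(registry.describe_agents())
```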
The framework supports multiple LLM backends through the adapter pattern:
```python
from src.model.llama_model import LlamaModel
from src.model.model_adapter import ModelAdapter

# Default: llama3 on localhost:11434
model = ModelAdapter(LlamaModel())

# Custom model and URL
model = ModelAdapter(LlamaModel(
    model_name="codellama",
    model_url="http://192.168.1.100:11434"
))
```

```python
from src.model.copilot_model import CopilotModel
from src.model.model_adapter import ModelAdapter

model = ModelAdapter(CopilotModel("gpt-4o"))
```

To add a new backend, implement the `ModelInterface`:
```python
from src.model.model_interface import ModelInterface


class MyCustomModel(ModelInterface):
    def __init__(self, api_key: str):
        self.api_key = api_key

    def generate(self, prompt: str, instruction: str) -> str:
        """Single-turn generation"""
        # Your implementation here
        pass

    def chat(self, prompt: str, instruction: str) -> str:
        """Multi-turn conversation"""
        # Your implementation here
        pass
```

File I/O operations:
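A handy way to exercise agents offline is a model that implements the interface without calling any LLM. To keep this snippet self-contained, a stand-in ABC replaces the real `src.model.model_interface.ModelInterface` import (an assumption about its shape based on the skeleton above):

```python
from abc import ABC, abstractmethod


class ModelInterface(ABC):
    """Stand-in for src.model.model_interface.ModelInterface."""

    @abstractmethod
    def generate(self, prompt: str, instruction: str) -> str: ...

    @abstractmethod
    def chat(self, prompt: str, instruction: str) -> str: ...


class EchoModel(ModelInterface):
    """A no-op model: useful for testing agent plumbing without an LLM."""

    def generate(self, prompt: str, instruction: str) -> str:
        return f"echo({instruction!r}): {prompt}"

    def chat(self, prompt: str, instruction: str) -> str:
        return self.generate(prompt, instruction)


model = EchoModel()
print(model.generate("hello", "be brief"))
# → echo('be brief'): hello
```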
```python
from src.services.file import FileService

fs = FileService()
content = fs.read_file("path/to/file.txt")
data = fs.read_json_file("path/to/config.json")
fs.write_file("path/to/output.txt", "content")
```

Convert GitHub repositories to text for LLM context:
```python
from src.services.repo2Text import Repo2TextService

service = Repo2TextService()
result = service.call_service(
    "https://github.com/user/repo",
    {"max_file_size": 5 * 1024 * 1024}
)
# Returns: {"summary": ..., "structure": ..., "content": ...}
```

Connect to external MCP servers:
```python
from src.services.mcp_client import MCPClientService

service = MCPClientService()
await service.connect_stdio_server(
    name="github",
    command="npx",
    args=["-y", "@modelcontextprotocol/server-github"],
    env={"GITHUB_PERSONAL_ACCESS_TOKEN": "token"}
)
```

| Message Type | Purpose | Flow |
|---|---|---|
| `QueryMessage` | Wraps the user's initial query | User → Orchestrator → IntentAgent |
| `IntentAgentMessage` | Contains detected intent and target agent | IntentAgent → Orchestrator |
| `LLMMessage` | Wraps the LLM response | SpecializedAgent → Orchestrator → User |
The orchestrator uses the Chain of Responsibility pattern for message handling:
LLMResponseValidator → QueryMessageValidator → ActorMessageValidator → IntentAgentMessageValidator
Each validator checks if it can handle the message type:
- `LLMResponseValidator`: Extracts the final response from `LLMMessage`
- `QueryMessageValidator`: Routes `QueryMessage` to `IntentAgent`
- `ActorMessageValidator`: Handles Thespian system messages
- `IntentAgentMessageValidator`: Creates and routes to specialized agents
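The pattern behind this pipeline can be shown with a self-contained sketch: each handler either processes the message or defers to the next in the chain. This is a generic illustration of Chain of Responsibility, not ORBIT's actual `BaseHandler` code; the string-prefix "messages" are stand-ins for the real message classes.

```python
class BaseHandler:
    """Each handler either processes the message or defers to the next one."""

    def __init__(self):
        self._next = None

    def set_next(self, handler):
        self._next = handler
        return handler  # allows chaining: a.set_next(b).set_next(c)

    def handle(self, message):
        if self._next is not None:
            return self._next.handle(message)
        return f"unhandled: {message!r}"


class LLMResponseValidator(BaseHandler):
    def handle(self, message):
        if isinstance(message, str) and message.startswith("llm:"):
            return f"final response: {message[4:]}"
        return super().handle(message)  # not ours; pass along the chain


class QueryMessageValidator(BaseHandler):
    def handle(self, message):
        if isinstance(message, str) and message.startswith("query:"):
            return f"routed to IntentAgent: {message[6:]}"
        return super().handle(message)


# Build the chain in the same order the orchestrator uses
head = LLMResponseValidator()
head.set_next(QueryMessageValidator())

print(head.handle("query:hello"))  # → routed to IntentAgent: hello
print(head.handle("llm:done"))     # → final response: done
```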
```python
from src.chain.baseHandler import BaseHandler
from src.messages.my_message import MyMessage


class MyMessageValidator(BaseHandler):
    def handle(self, context):
        message, orchestrator_self, sender = context
        if isinstance(message, MyMessage):
            # Handle your custom message
            return context
        # Pass to the next handler in the chain
        return super().handle(context)
```

Register it in `messageTypeResolver.py`:

```python
my_validator = MyMessageValidator()
# Add to the chain: .set_next(my_validator)
```

We welcome contributions to ORBIT! This section provides guidelines for contributing to the project.
By participating in this project, you agree to maintain a respectful and inclusive environment. We expect all contributors to:
- Be respectful and considerate in all interactions
- Welcome newcomers and help them get started
- Focus on constructive feedback and collaboration
- Accept responsibility for mistakes and learn from them
Before creating an issue, please:
- Search existing issues to avoid duplicates
- Use the issue templates when available
- Provide detailed information including:
- Clear description of the problem or suggestion
- Steps to reproduce (for bugs)
- Expected vs actual behavior
- Environment details (OS, Python version, etc.)
- Relevant logs or error messages
Issue Labels:
| Label | Description |
|---|---|
| `bug` | Something isn't working |
| `enhancement` | New feature or improvement |
| `documentation` | Documentation updates |
| `good first issue` | Good for newcomers |
| `help wanted` | Extra attention needed |
| `agent` | Related to agent development |
| `model` | Related to LLM integrations |
```bash
# Fork and clone the repository
git clone https://github.com/YOUR_USERNAME/orbit.git
cd orbit

# Create development environment
conda create -n orbit-dev python=3.13
conda activate orbit-dev

# Install dependencies
pip install -r requirements.txt

# Install development dependencies
pip install pytest pytest-cov black isort flake8 mypy

# Set up pre-commit hooks (optional)
pip install pre-commit
pre-commit install
```

- Create a branch for your work:

  ```bash
  git checkout -b feature/your-feature-name
  # or
  git checkout -b fix/issue-number-description
  ```

- Make your changes following our coding standards (see below)
- Test your changes:

  ```bash
  # Run tests
  pytest tests/

  # Run with coverage
  pytest --cov=src tests/
  ```

- Format your code:

  ```bash
  # Format with black
  black src/ tests/

  # Sort imports
  isort src/ tests/

  # Check linting
  flake8 src/ tests/
  ```

- Commit your changes:

  ```bash
  git add .
  git commit -m "type: brief description of changes"
  ```
Follow the conventional commits specification:
```
type(scope): subject

body (optional)

footer (optional)
```
Types:
| Type | Description |
|---|---|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation changes |
| `style` | Code style changes (formatting, etc.) |
| `refactor` | Code refactoring |
| `test` | Adding or updating tests |
| `chore` | Maintenance tasks |
| `agent` | Agent-related changes |
| `model` | Model adapter changes |
Examples:
```
feat(agent): add MCPToolsAgent for external tool integration
fix(orchestrator): resolve message routing deadlock
docs: update README with MCP integration guide
refactor(model): extract common LLM interface methods
```
- Update documentation if your changes affect it
- Ensure all tests pass and add new tests for new functionality
- Create a Pull Request with:
  - A clear title following the commit convention
  - A description of the changes and their motivation
  - References to related issues (e.g., "Fixes #123")
  - Screenshots/recordings for UI changes
- PR Review Checklist:
  - Code follows project style guidelines
  - Self-review completed
  - Comments added for complex logic
  - Documentation updated
  - Tests added/updated
  - No breaking changes (or documented if necessary)
- Address review feedback promptly and respectfully
- Follow PEP 8 guidelines
- Use type hints for function signatures
- Maximum line length: 100 characters
- Use docstrings for all public functions and classes
```python
from typing import Dict, Optional

from loguru import logger
from thespian.actors import Actor

from src.model.llama_model import LlamaModel
from src.model.model_adapter import ModelAdapter


class ExampleAgent(Actor):
    """
    A well-documented agent class.

    Attributes:
        model: The LLM model adapter
        agent_name: Unique identifier for this agent
    """

    def __init__(self, model_name: str = "llama3") -> None:
        """
        Initialize the agent.

        Args:
            model_name: Name of the LLM model to use
        """
        super().__init__()
        self.model = ModelAdapter(LlamaModel(model_name))
        self.agent_name = "ExampleAgent"

    def process_query(self, query: str, context: Optional[Dict] = None) -> str:
        """
        Process a user query.

        Args:
            query: The user's input query
            context: Optional additional context

        Returns:
            The generated response string

        Raises:
            ValueError: If query is empty
        """
        if not query:
            raise ValueError("Query cannot be empty")
        logger.info(f"[{self.agent_name}] Processing query: {query}")
        return self.model.generate(query, self._get_instructions())
```

When creating new agents:
- Inherit from the `Actor` base class
- Implement the `receiveMessage` method
- Create a guidelines file (`.md`) for LLM instructions
- Register in the `agent_registry` with a clear description
- Handle errors gracefully and log appropriately
- Return an `LLMMessage` for responses
```python
# Template for new agents
from loguru import logger
from thespian.actors import Actor

from src.messages.intent_agent_message import IntentAgentMessage
from src.messages.llm_message import LLMMessage


class NewAgent(Actor):
    def __init__(self):
        super().__init__()
        self.agent_name = "NewAgent"

    def receiveMessage(self, message, sender):
        try:
            if isinstance(message, IntentAgentMessage):
                response = self._process(message.query)
                self.send(sender, LLMMessage(response))
            else:
                logger.warning(f"[{self.agent_name}] Unknown message type")
        except Exception as e:
            logger.error(f"[{self.agent_name}] Error: {e}")
            self.send(sender, LLMMessage(f"Error processing request: {e}"))
```

- Place agents in `src/agents/<agent_name>/`
- Place services in `src/services/<service_name>/`
- Place message types in `src/messages/`
- Place model adapters in `src/model/`
- Keep tests mirroring the source structure in `tests/`
- Fix issues in existing functionality
- Improve error handling
- Resolve edge cases
- New agents for specialized domains
- Additional LLM model integrations
- New services and utilities
- MCP server/tool integrations
- Improve existing documentation
- Add examples and tutorials
- Translate documentation
- Create video guides
- Add unit tests
- Add integration tests
- Improve test coverage
- Performance testing
- Design improvements
- Accessibility enhancements
- User experience optimizations
- Questions: Open a Discussion
- Bugs: Open an Issue
- Security: Email security concerns privately (do not open public issues)
Contributors will be recognized in:
- The project's CONTRIBUTORS.md file
- Release notes for significant contributions
- Special acknowledgment for first-time contributors
Kartikeya Sharma
- GitHub: @R2D2-fwks
- Project Link: https://github.com/R2D2-fwks/orbit
- Thespian - Actor model framework
- Anthropic - Claude AI models
- OpenAI - GPT models
- Ollama - Local LLM runtime
- Model Context Protocol - MCP specification
Made with ❤️ by the ORBIT team