# AI-Powered Multi-Model Orchestration Engine
Cortan Orchestrator is an emerging AI orchestration platform designed to coordinate multiple AI models and services. Built with modern C++20, it provides a foundation for AI model management, workflow coordination, and real-time processing capabilities.
This is an active development project establishing a solid architectural foundation with production-ready core components. Currently ~70% complete with fully implemented Event Bus and HTTP Client systems.
- Event Bus System: Complete async event processing with priority queues
- HTTP Client: Enterprise-grade with SSL/TLS, SNI, timeouts, thread safety
- Multi-Model Coordination: AI model management framework (25% complete)
- High-Performance Core: C++20 coroutines and async I/O foundation (20% complete)
- Network Integration: WebSocket support and connection pooling (60% complete)
- Modular Architecture: Terminal interface and core services (15-20% complete)
- Real-time Monitoring: Performance metrics and logging (basic setup)
- Security-First: Input validation and security management (75% complete)
- Testing Framework: Unit testing setup with Google Test
- Benchmarking: Performance analysis with Google Benchmark
```
Cortan Orchestrator (~70% Complete)
├── Core Engine (100%)           # Event system, workflow management
│   ├── Event Bus                # Complete async event processing
│   ├── Thread Pool              # Functional task execution
│   └── [TODO] Others            # Memory pool, logger, config, etc.
├── AI Layer (25%)               # Model management, conversation handling
│   ├── Model Manager            # Functional model selection
│   ├── Input Validator          # Functional input checking
│   └── [TODO] Others            # Conversation, context, security, etc.
├── Network Layer (60%)          # HTTP/WebSocket clients, connection pooling
│   ├── HTTP Client              # Complete SSL/TLS implementation
│   └── [TODO] Others            # WebSocket, connection pooling
├── Terminal Interface (15%)     # Interactive shell, command processing
│   └── [TODO] All               # Command processor, shell, completion, etc.
└── Security Layer (75%)         # Input validation, access control
    ├── Basic Framework          # Security manager setup
    └── [TODO] Advanced          # Rate limiting, audit logging
```
| Component | Status | Progress | Ready for Use |
|---|---|---|---|
| Event Bus | ✅ Complete | 100% | Yes - Production ready |
| HTTP Client | ✅ Complete | 100% | Yes - Enterprise grade |
| Thread Pool | ✅ Complete | 100% | Yes - Functional |
| Model Manager | ✅ Complete | 90% | Yes - Basic functionality |
| Input Validator | ✅ Complete | 70% | Yes - Basic validation |
| Terminal Interface | 🚧 Skeleton | 15% | No - TODO placeholders |
| Core Services | 🔄 Partial | 20% | Limited - Mostly TODOs |
| AI Orchestration | 🚧 Skeleton | 25% | No - Framework only |
| Network Layer | 🔄 Partial | 60% | HTTP only |
| Security Layer | 🔄 Basic | 75% | Framework - Limited features |
**Working now:**

- Event-driven architecture with complete async processing
- HTTP/HTTPS communication with SSL/TLS, SNI, and timeout handling
- Multi-threaded task execution via the thread pool
- Basic AI model management for Ollama integration
- Input validation for AI model interactions

**Not yet implemented:**

- WebSocket real-time communication
- Connection pooling for performance
- Complete terminal interface
- Full AI orchestration capabilities
- Advanced security features
- macOS: 12.0+ (tested on macOS 14 Sonoma)
- CMake: 3.20+
- C++ Compiler: Apple Clang 14+ or GCC 11+
- Conan: 2.0+ (recommended for dependencies)
```bash
# Clone the repository
git clone <repository-url>
cd cortan

# Build with Conan (recommended)
./cmake/build.sh

# Or build with system packages
./cmake/build.sh --no-conan
```

```bash
# Run the orchestrator
./build/cortan

# Run with a specific command
./build/cortan "process-model llama3:8b"

# Debug mode
./cmake/build.sh --debug
```
```bash
./build/cortan
```

```
cortan/
├── cmake/               # Build configuration
│   ├── CMakeLists.txt   # CMake build files
│   ├── conanfile.py     # Conan package management
│   └── build.sh         # Build automation script
├── src/                 # Source code
│   ├── main.cpp         # Application entry point
│   ├── core/            # Core orchestration components
│   ├── ai/              # AI model management
│   ├── network/         # Network communication
│   └── terminal/        # User interface
├── include/cortan/      # Public API headers
├── tests/               # Unit tests
├── benchmarks/          # Performance benchmarks
├── config/              # Configuration files
└── scripts/             # Utility scripts
```
| Option | Description | Default |
|---|---|---|
| `--no-conan` | Use system packages instead of Conan | false |
| `--debug` | Build in Debug mode | false |
| `--no-ai` | Disable AI orchestration features | false |
| `--no-tests` | Don't build unit tests | false |
| `--no-benchmarks` | Don't build benchmarks | false |
- nlohmann_json/3.11.3 - JSON processing
- spdlog/1.12.0 - High-performance logging
- boost/1.82.0 - ASIO networking
- libcurl/8.4.0 - HTTP client
- openssl/3.1.3 - SSL/TLS support
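A Conan 2.x recipe pinning these versions might look like the sketch below; the actual `cmake/conanfile.py` in the repository may differ in settings and generators:

```python
# Hypothetical sketch of a Conan 2.x recipe for the pinned dependencies above.
from conan import ConanFile


class CortanConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "CMakeDeps", "CMakeToolchain"

    def requirements(self):
        self.requires("nlohmann_json/3.11.3")  # JSON processing
        self.requires("spdlog/1.12.0")         # logging
        self.requires("boost/1.82.0")          # ASIO networking
        self.requires("libcurl/8.4.0")         # HTTP client
        self.requires("openssl/3.1.3")         # SSL/TLS
```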
```bash
brew install cmake nlohmann-json spdlog curl boost
```

The project follows modern C++ best practices:
- C++20 features with coroutines
- RAII resource management
- Exception safety
- Async programming patterns
```bash
# Build with tests
./cmake/build.sh --tests

# Run test suite
cd build && make test_quick
```

```bash
# Build with benchmarks
./cmake/build.sh --benchmarks

# Run performance tests
cd build && make perf_check
```

```cpp
#include <cortan/core/event_system.hpp>

cortan::core::EventBus bus;
bus.subscribe("ai.request", handler);
```

```cpp
#include <cortan/ai/model_manager.hpp>

cortan::ai::ModelManager manager;
manager.addModel(std::make_unique<OllamaModel>("llama3"));
```

```cpp
#include <cortan/network/http_client.hpp>

cortan::network::HttpClient client;
auto response = client.get("https://api.example.com");
```

- Input validation for all AI model interactions
- Rate limiting and access control
- Secure communication with TLS/SSL
- Audit logging for all operations
- Concurrent Processing: Multi-threaded architecture
- Memory Efficient: Custom allocators and pooling
- Network Optimized: Connection pooling and keep-alive
- Async I/O: Non-blocking operations with Boost.ASIO
We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Write tests for new functionality
- Ensure all tests pass
- Submit a pull request
```bash
# Clone and setup
git clone <repository-url>
cd cortan

# Install development dependencies
./cmake/build.sh --debug --tests --benchmarks

# Run development server
./build/cortan --dev-mode
```

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

Copyright (C) 2025 Space Labs AI
Copyright (C) 2025 Rishab Nuguru
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
If you use Cortan Orchestrator in your research or project, please cite it as follows:
```bibtex
@software{nuguru_cortan_orchestrator_2025,
  author    = {Nuguru, Rishab},
  title     = {Cortan Orchestrator: AI-Powered Multi-Model Orchestration Engine},
  year      = {2025},
  publisher = {Space Labs AI},
  url       = {https://github.com/rishabnuguru/cortan-orchestrator},
  license   = {AGPL-3.0},
  abstract  = {Cortan Orchestrator is a prototype implementation of a modern AI orchestration platform built with C++20, featuring coroutine-based architecture, async I/O, and modular design for coordinating multiple AI models and services.}
}
```

Nuguru, R. (2025). Cortan Orchestrator: AI-Powered Multi-Model Orchestration Engine [Computer software]. Space Labs AI. https://github.com/rishabnuguru/cortan-orchestrator
A CITATION.cff file is included in the repository root for easy citation import into citation management tools.
- Author: Rishab Nuguru
- Company: Space Labs AI
- Built with modern C++20 features and best practices
- Inspired by production AI orchestration systems
- Thanks to the open-source community for amazing libraries
Space Labs AI
- Author: Rishab Nuguru
- Email: spacelabsai@gmail.com
- Website: [company website]
- ✅ Complete: Event Bus system with async processing and priority queues
- ✅ Complete: Enterprise-grade HTTP Client with SSL/TLS, SNI, timeouts
- ✅ Complete: Thread pool implementation for concurrent task execution
- ✅ Complete: Basic AI model management and input validation
- 🔄 Partial: Security framework and modular architecture foundation
- 🔄 Setup: Testing framework and benchmarking infrastructure
- 🔄 Setup: macOS optimization with Apple Clang and Conan dependency management
- 📋 TODO: Multi-model orchestration, WebSocket support, terminal interface
- WebSocket real-time communication implementation
- Connection pooling and performance optimization
- Complete terminal interface development
- Full AI orchestration capabilities
- Advanced security and monitoring features
- WebSocket Implementation - Real-time bidirectional communication
- Connection Pooling - HTTP client performance optimization
- Terminal Interface Completion - User interaction and command processing
- AI Orchestration Enhancement - Multi-model coordination capabilities
- Core Services Implementation - Memory management, logging, configuration
Made with ❤️ by Space Labs AI