A Multi-Agent General Intelligence (MAGI) system built on a microservices architecture for workflow orchestration, RAG (Retrieval-Augmented Generation), and GitOps automation.
| Service | Port | Technology | Description |
|---|---|---|---|
| API Gateway | 8000 | Node.js/Express | Authentication, rate limiting, reverse proxy |
| Auth Service | 8001 | Node.js/Express | JWT authentication with JWKS |
| RAG Ingestion | 8002 | Python/FastAPI | Source registration, GitHub/Confluence connectors |
| RAG Indexer | 8003 | Python/FastAPI | Text chunking, embeddings, Qdrant indexing |
| RAG Query | 8004 | Python/FastAPI | Vector search, reranking, LLM synthesis |
| Workflow Orchestrator | 8005 | Python/FastAPI | LangGraph workflows, SSE streaming |
| MAGI Agents | 8006 | Python/FastAPI | Multi-agent consensus (AutoGen) |
| GitOps PR Automation | 8007 | Python/FastAPI | GitHub App PR creation and merge automation |
| Workflow UI | 3001 | React/Vite | ReactFlow workflow editor |
| Component | Port(s) | Description |
|---|---|---|
| PostgreSQL | 5432 | Primary database for RAG metadata |
| Redis | 6379 | Celery broker, caching |
| Qdrant | 6333, 6334 | Vector database for embeddings |
| MinIO | 9000, 9001 | S3-compatible object storage |
| Jaeger | 16686, 14268, 4317 | Distributed tracing |
| OpenTelemetry Collector | 4319, 8888, 8889 | Trace/metrics collection |
| Prometheus | 9090 | Metrics storage and querying |
| Grafana | 3000 | Observability dashboards |
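The service ports in the table above can be swept with a quick local health check. This sketch assumes each service exposes a `/health` endpoint, which is a common FastAPI/Express convention but not confirmed by this README; adjust the path to each service's actual route.

```python
# Sketch: sweep the application services from the table above.
# The /health route is an assumption; check each service's routes.
import urllib.request

SERVICES = {
    "api-gateway": 8000,
    "auth-service": 8001,
    "rag-ingestion": 8002,
    "rag-indexer": 8003,
    "rag-query": 8004,
    "workflow-orchestrator": 8005,
    "magi-agents": 8006,
    "gitops-pr-automation": 8007,
}

def health_urls(host: str = "localhost") -> list[str]:
    """Build the health-check URL for every service."""
    return [f"http://{host}:{port}/health" for port in SERVICES.values()]

def check_all(timeout: float = 2.0) -> dict[str, bool]:
    """Return name -> reachable for every service (False on any error)."""
    results = {}
    for name, port in SERVICES.items():
        try:
            with urllib.request.urlopen(
                f"http://localhost:{port}/health", timeout=timeout
            ) as resp:
                results[name] = resp.status == 200
        except OSError:
            results[name] = False
    return results
```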
- Docker & Docker Compose
- OpenAI API key (required)
- GitHub App credentials (for GitOps automation)
- Optional: Anthropic, Cohere API keys for additional LLM providers
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd MAGI-Agent
   ```

2. Configure environment variables

   ```bash
   cp .env.example .env
   # Edit .env with your API keys and configuration
   ```

   Required variables:

   - `OPENAI_API_KEY`: OpenAI API key for LLM operations
   - `POSTGRES_PASSWORD`: Database password
   - `JWT_SECRET`: Secure random string for JWT signing
   - `GITHUB_APP_ID`, `GITHUB_INSTALLATION_ID`, `GITHUB_PRIVATE_KEY_PATH`: For GitOps automation
   - `GIT_ORG`: GitHub organization name

3. Start all services

   ```bash
   docker-compose up -d
   ```

4. Verify services are running

   ```bash
   docker-compose ps
   ```

5. Access the applications

   - Workflow UI: http://localhost:3001
   - API Gateway: http://localhost:8000
   - Grafana: http://localhost:3000 (admin/admin)
   - Prometheus: http://localhost:9090
   - Jaeger UI: http://localhost:16686
   - MinIO Console: http://localhost:9001
Ingestion → Indexing → Query

- RAG Ingestion (`POST /api/sources`): Register GitHub repos or Confluence spaces
- RAG Indexer: Automatic chunking and embedding generation
- RAG Query (`POST /api/query`): Semantic search with LLM synthesis
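The indexer's chunking step can be sketched as a simple sliding window. Chunk size and overlap here are illustrative defaults; the actual indexer may split on tokens or sentences rather than characters.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Consecutive chunks share `overlap` characters so that content
    near a boundary appears in both neighboring chunks.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```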
Three specialized agents (Balthazar, Melchior, Casper) debate and reach consensus on complex decisions using AutoGen framework.
Endpoint: `POST /api/consensus`
```json
{
  "question": "Should we implement feature X?",
  "context": "Additional context...",
  "max_rounds": 5
}
```

LangGraph-based workflow execution with SSE streaming for real-time updates.
Endpoints:

- `POST /api/workflows/execute`: Start workflow
- `GET /api/workflows/stream/{workflow_id}`: SSE stream
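Consuming the SSE stream amounts to reading `data:` lines delimited by blank lines. A dependency-free parsing sketch; the JSON event shape shown in the test is an assumption, not the orchestrator's documented schema:

```python
import json
from typing import Iterator

def parse_sse(lines: Iterator[str]) -> Iterator[dict]:
    """Yield one JSON payload per SSE event.

    An event is one or more 'data:' lines; a blank line ends the event.
    """
    buffer: list[str] = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            yield json.loads("\n".join(buffer))
            buffer = []
    if buffer:  # stream ended without a trailing blank line
        yield json.loads("\n".join(buffer))
```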
Automated pull request creation, status tracking, and merge operations.
Endpoints:

- `POST /api/gitops/pr/create`: Create PR
- `GET /api/gitops/pr/status/{repo_name}/{pr_number}`: Check PR status
- `POST /api/gitops/pr/merge`: Auto-merge PR after checks pass
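The auto-merge endpoint gates on check results; the decision logic can be sketched as below. The field names (`required`, `conclusion`) and status values are assumptions for illustration, not the service's actual schema.

```python
def can_merge(checks: list[dict]) -> bool:
    """True only when every required check has concluded successfully.

    Checks are treated as required unless explicitly marked otherwise;
    an empty check list does not permit a merge.
    """
    required = [c for c in checks if c.get("required", True)]
    return bool(required) and all(c.get("conclusion") == "success" for c in required)
```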
Each service can be developed independently:

```bash
cd services/<service-name>
cp .env.example .env
# Edit .env with required configuration

# For Python services:
pip install -r requirements.txt
python src/main.py

# For Node.js services:
npm install
npm start
```

All services are instrumented with OpenTelemetry for distributed tracing and metrics:
- Traces exported to Jaeger via OTLP Collector
- Metrics scraped by Prometheus
- Dashboards available in Grafana
For services using PostgreSQL (RAG Ingestion, RAG Indexer):

```bash
# Migrations are handled via SQLAlchemy models
# Tables are created automatically on first run
```

Access Grafana at http://localhost:3000 (admin/admin)
Pre-configured data sources:
- Prometheus (metrics)
- Jaeger (traces)
All services expose `/metrics` endpoints:
- API Gateway: Request rates, latencies, rate limit hits
- RAG Services: Query performance, indexing throughput
- MAGI Agents: Consensus rounds, agent response times
Distributed traces show request flow across services:
- API Gateway → Auth validation
- API Gateway → Downstream service
- Service → Database/Vector DB
- Service → External APIs (OpenAI, GitHub)
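Propagating a trace across those hops relies on the W3C `traceparent` header (version, 32-hex-char trace ID, 16-hex-char span ID, flags). A stdlib-only sketch of generating and parsing one; in the actual services OpenTelemetry handles this automatically:

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)    # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # 01 = sampled

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header into its four fields."""
    version, trace_id, span_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "span_id": span_id, "flags": flags}
```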
- Secrets: All secrets configured via environment variables only
- JWT Authentication: Auth service issues tokens validated by API Gateway
- Rate Limiting: Configured on API Gateway (100 req/min default)
- GitHub App: Uses private key authentication for PR automation
- CORS: Configured for frontend origin
Each service exposes OpenAPI (Swagger) docs at `/docs`:
- API Gateway: http://localhost:8000/docs
- RAG Ingestion: http://localhost:8002/docs
- RAG Indexer: http://localhost:8003/docs
- RAG Query: http://localhost:8004/docs
- Workflow Orchestrator: http://localhost:8005/docs
- MAGI Agents: http://localhost:8006/docs
- GitOps PR Automation: http://localhost:8007/docs
```bash
# Check service logs
docker-compose logs <service-name>

# Restart specific service
docker-compose restart <service-name>
```

Ensure PostgreSQL is healthy:

```bash
docker-compose ps postgres
docker-compose logs postgres
```

Check Qdrant health:

```bash
curl http://localhost:6333/healthz
docker-compose logs qdrant
```

Verify GitHub App credentials in `.env`:

- `GITHUB_APP_ID`: Numeric app ID
- `GITHUB_INSTALLATION_ID`: Installation ID for your org
- `GITHUB_PRIVATE_KEY_PATH`: Path to `.pem` file (mount as volume)
See individual service PRDs in `prompts/workflows/prd/` for detailed specifications.
[Your License Here]