Intelligent multi-model router for self-hosted LLMs
Octoroute is a smart HTTP API that sits between your applications and your homelab's fleet of local LLMs. It automatically routes requests to the optimal model (8B, 30B, or 120B) based on task complexity, reducing compute costs while maintaining quality.
Think of it as a load balancer, but instead of distributing requests evenly, it sends simple queries to small models and complex reasoning tasks to larger ones.
Running multiple LLM sizes on your homelab is powerful, but routing requests manually is tedious:
- Manual routing is error-prone: You always use the 120B model "just in case," wasting compute.
- Simple heuristics aren't enough: "Short prompts → small model" misses nuance.
- LangChain is Python-only: You want native Rust performance and type safety.
Octoroute solves this with:
- ✅ Intelligent routing - Rule-based + LLM-powered decision making
- ✅ Zero-cost rules - Fast pattern matching for obvious cases (<1ms)
- ✅ Homelab-first - Built for local Ollama, LM Studio, llama.cpp deployments
- ✅ Rust native - Type-safe, async, low overhead
- ✅ Observable - Track every routing decision with structured logs

- At least one local LLM endpoint (Ollama, LM Studio, llama.cpp, etc.)
- Optional: Multiple model sizes (8B, 30B, 120B) for intelligent routing
- Optional: Rust 1.90+ (only needed if building from source)
Option 1: Pre-built binaries (fastest)
Download from GitHub Releases:
```bash
# Linux x86_64
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-x86_64.tar.gz
tar -xzf octoroute-linux-x86_64.tar.gz

# Linux ARM64 (Raspberry Pi, etc.)
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-linux-aarch64.tar.gz
tar -xzf octoroute-linux-aarch64.tar.gz

# macOS Apple Silicon
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-aarch64.tar.gz
tar -xzf octoroute-macos-aarch64.tar.gz

# macOS Intel
curl -LO https://github.com/slb350/octoroute/releases/latest/download/octoroute-macos-x86_64.tar.gz
tar -xzf octoroute-macos-x86_64.tar.gz

# Run
./octoroute
```

Option 2: Cargo install (requires Rust)

```bash
cargo install octoroute
```

Option 3: Build from source
```bash
git clone https://github.com/slb350/octoroute.git
cd octoroute
cargo build --release
./target/release/octoroute
```

Generate a starter config file:

```bash
# Print template to stdout
octoroute config

# Write template to file
octoroute config -o config.toml
```

Or create a `config.toml` manually:

```toml
[server]
host = "0.0.0.0"
port = 3000
[[models.fast]]
name = "qwen3-8b-instruct"
base_url = "http://localhost:11434/v1" # Ollama
max_tokens = 4096
temperature = 0.7
weight = 1.0
priority = 1
[[models.balanced]]
name = "qwen3-30b-instruct"
base_url = "http://localhost:1234/v1" # LM Studio
max_tokens = 8192
temperature = 0.7
weight = 1.0
priority = 1
[[models.deep]]
name = "gpt-oss-120b"
base_url = "http://localhost:8080/v1" # llama.cpp
max_tokens = 16384
temperature = 0.7
weight = 1.0
priority = 1
[routing]
strategy = "hybrid" # rule, llm, hybrid
router_tier = "balanced" # fast, balanced, deep (default: balanced)
```

Send a chat request:
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain quantum computing in simple terms",
    "importance": "normal",
    "task_type": "question_answer"
  }'
```

Response:
```json
{
  "content": "Quantum computing is...",
  "model_tier": "balanced",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule"
}
```

Drop-in replacement for OpenAI clients. Use Octoroute with any OpenAI-compatible SDK, framework, or tool - no code changes required.
```bash
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Python (OpenAI SDK):
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed"  # Octoroute doesn't require auth
)

response = client.chat.completions.create(
    model="auto",  # Let Octoroute pick the best model
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)
```

LangChain:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:3000/v1",
    api_key="not-needed",
    model="auto"
)
response = llm.invoke("What is the meaning of life?")
```

TypeScript/JavaScript:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'not-needed',
});

const response = await client.chat.completions.create({
  model: 'auto',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

The `model` field controls routing:

| Value | Behavior |
|---|---|
| `auto` | Intelligent routing - Octoroute analyzes the request and picks the best tier |
| `fast` | Route directly to Fast tier (8B models) |
| `balanced` | Route directly to Balanced tier (30B models) |
| `deep` | Route directly to Deep tier (120B models) |
| `qwen3-8b` | Bypass routing - use specific endpoint by name |

Full SSE streaming support - works with any streaming-capable client:
```python
stream = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | Chat completions (streaming & non-streaming) |
| `/v1/models` | GET | List available models and tiers |

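For a quick check of what the router exposes, the standard OpenAI SDK's model listing works against `/v1/models` (a small sketch reusing the client configuration shown earlier):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

# /v1/models lists the tiers and model names Octoroute currently exposes.
for model in client.models.list():
    print(model.id)
```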
See API Reference for complete documentation.
Octoroute supports three routing strategies:
Pattern matching on request metadata:
- Casual chat + <256 tokens → 8B model
- Deep analysis or high importance → 120B model
- Everything else → 30B model
Latency: <1ms (no LLM overhead)
Uses a 30B "router brain" to analyze the request and choose the optimal model.
Latency: ~100-500ms (router invocation)
Try rules first (fast path), fall back to LLM for ambiguous cases.
Latency: <1ms for rule matches, ~100-500ms for LLM fallback
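To make the hybrid flow concrete, here is an illustrative Python sketch of the decision logic described above. It is not Octoroute's internal code; the helper names and the exact rule set are assumptions based on the rules listed for the rule-based strategy:

```python
def route_by_rules(task_type, importance, token_count):
    """Fast path: cheap pattern matching on request metadata (<1ms)."""
    if task_type == "casual_chat" and token_count < 256:
        return "fast"       # simple chat -> 8B
    if task_type == "deep_analysis" or importance == "high":
        return "deep"       # heavy reasoning or high importance -> 120B
    return None             # no obvious match -> treated as ambiguous here


def route_hybrid(task_type, importance, token_count, ask_router_llm):
    """Try rules first; fall back to the router LLM only for ambiguous requests."""
    tier = route_by_rules(task_type, importance, token_count)
    if tier is None:
        # ask_router_llm is a stand-in for invoking the router-tier model
        # (~100-500ms); it should return "fast", "balanced", or "deep".
        tier = ask_router_llm(task_type, importance, token_count)
    return tier
```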
Octoroute provides three levels of observability to help you understand routing decisions and system performance:
Built-in structured logging via `tracing`:

```bash
# Set log level via environment variable
RUST_LOG=info cargo run

# Available levels: trace, debug, info, warn, error
RUST_LOG=octoroute=debug cargo run
```

What you get:
- Request metadata (prompt length, importance, task type)
- Routing decisions (which strategy was used, which model was selected)
- Health check status updates
- Error traces with full context
Metrics are always enabled and available at the `/metrics` endpoint:

```bash
# Build and run
cargo build --release
./target/release/octoroute

# Metrics endpoint available at http://localhost:3000/metrics
```

Available metrics:

- `octoroute_requests_total{tier, strategy}` - Request counts by tier and routing strategy
- `octoroute_routing_duration_ms{strategy}` - Routing decision latency histogram
- `octoroute_model_invocations_total{tier}` - Model invocations by tier
- Plus 3 health/observability metrics (see Observability Guide)

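For a quick sanity check without a full Prometheus setup, you can fetch the endpoint directly and filter for the series listed above. This is a minimal sketch; it assumes Octoroute is running locally on the default port and relies on the standard Prometheus text exposition format:

```python
import urllib.request

# Fetch the Prometheus text exposition from the running Octoroute instance.
with urllib.request.urlopen("http://localhost:3000/metrics") as resp:
    body = resp.read().decode("utf-8")

# Print only Octoroute's own series, skipping comments and other collectors.
for line in body.splitlines():
    if line.startswith("octoroute_"):
        print(line)
```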
Prometheus scraping config:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'octoroute'
    static_configs:
      - targets: ['localhost:3000']
    metrics_path: '/metrics'
    scrape_interval: 15s
```

Why Direct Prometheus? We use the `prometheus` crate directly for simplicity and homelab-friendliness:
- Works with existing Prometheus/Grafana setups out of the box
- No intermediate abstraction layers - just Prometheus
- Mature, stable crate with broad ecosystem support
```
┌──────────────────────────────────────────────────┐
│ Client Applications │
│ (OpenAI SDK, LangChain, CLI, curl, etc.) │
└─────────────────────┬────────────────────────────┘
│
┌──────────────┴──────────────┐
│ │
▼ ▼
/v1/chat/completions /chat (legacy)
(OpenAI-compatible) (Native API)
│ │
└──────────────┬──────────────┘
▼
┌──────────────────────────────────────────────────┐
│ Octoroute API (Axum + Tokio) │
│ ┌────────────────────────────────────────────┐ │
│ │ Router (Rule/LLM/Hybrid) │ │
│ └────────────────────┬───────────────────────┘ │
│ │ │
│ ▼ Model Selection │
│ ┌────────────────────────────────────────────┐ │
│ │ open-agent-sdk Client │ │
│ │ (streaming or buffered responses) │ │
│ └────────────────────┬───────────────────────┘ │
└───────────────────────┼──────────────────────────┘
│
▼
┌──────────────────────────────────────────────────┐
│ Local Model Servers │
│ 8B (Ollama) | 30B (LM Studio) | 120B (llama) │
└──────────────────────────────────────────────────┘
```
Built on:
- open-agent-sdk: Rust SDK for local LLM orchestration
- Axum: Ergonomic web framework
- Tokio: Async runtime
Comprehensive documentation is available in the /docs directory:
- Architecture Guide - System design, routing strategies, data flow, and technical decisions
- API Reference - Complete HTTP API documentation with request/response schemas and examples
- Configuration Guide - Detailed configuration reference with examples for different deployment scenarios
- Observability Guide - Logging, Prometheus metrics, Grafana dashboards, and monitoring setup
- Development Guide - Testing, benchmarking, code quality, and contributing guidelines
- Deployment Guide - Homelab deployment with systemd, Docker, reverse proxy, and security hardening
POST `/chat` - Submit a chat request for intelligent routing.
Request:

```
{
  "message": "Your question or task",
  "importance": "low" | "normal" | "high",
  "task_type": "casual_chat" | "code" | "creative_writing" | "deep_analysis" | "document_summary" | "question_answer"
}
```

Response:
```
{
  "content": "Generated text",
  "model_tier": "fast" | "balanced" | "deep",
  "model_name": "qwen3-30b-instruct",
  "routing_strategy": "rule" | "llm"
}
```

GET `/health` - Health check endpoint with system status.
Response: `200 OK` with JSON body:

```json
{
  "status": "OK",
  "health_tracking_status": "operational",
  "metrics_recording_status": "operational",
  "background_task_status": "operational",
  "background_task_failures": 0
}
```
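For scripts or a simple liveness probe, the endpoint can be checked directly. This is a minimal sketch using only the fields shown in the response above:

```python
import json
import urllib.request

# Query the health endpoint; urlopen raises on non-2xx responses.
with urllib.request.urlopen("http://localhost:3000/health") as resp:
    health = json.load(resp)

print(health["status"], "-", health["background_task_failures"], "background task failures")
```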
GET `/models` - List available models and their status.

Response:

```json
{
  "models": [
    {
      "name": "qwen3-8b-instruct",
      "tier": "fast",
      "endpoint": "http://localhost:11434/v1",
      "healthy": true,
      "last_check_seconds_ago": 2,
      "consecutive_failures": 0
    }
  ]
}
```

See Configuration Guide for full configuration options:
- Server settings: Host, port, timeouts
- Model endpoints: Names, URLs, token limits
- Routing strategy: Rule, LLM, or hybrid
- Router tier: Which model makes routing decisions
- Observability: Log level, metrics
Understanding the difference between router tier and target tier is crucial for LLM and Hybrid strategies:
- Router Tier (`router_tier`): Which model tier (fast/balanced/deep) makes the routing decision
  - Used by LLM and Hybrid strategies only
  - Analyzes the request and decides which target tier should handle it
  - Default: `balanced` (good balance of speed and accuracy)
  - Example: A Balanced tier model decides whether to route to Fast, Balanced, or Deep
- Target Tier: Which model tier actually processes the user's request
  - Determined by the routing decision
  - Can be Fast (8B), Balanced (30B), or Deep (120B)
  - The model that generates the final response to the user

Example Flow:

```
User Request → Router Tier (balanced/30B) analyzes request
             → Decides: "This is simple, use Fast tier"
             → Target Tier (fast/8B) processes request
             → Response to user
```
Why separate them?
- Faster routing: Use Fast tier (8B) for routing decisions to minimize overhead
- More accurate routing: Use Balanced tier (30B) for better routing decisions
- Don't waste resources: Use Deep tier (120B) for processing, not routing
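One way to observe this split in practice is via the legacy `/chat` response, which reports the tier that ultimately handled the request and the strategy that chose it (a sketch using `requests`; the prompt is illustrative):

```python
import requests

resp = requests.post("http://localhost:3000/chat", json={
    "message": "Compare Raft and Paxos for a geo-replicated store",
    "importance": "high",
    "task_type": "deep_analysis",
}).json()

# model_tier is the target tier that generated the answer;
# routing_strategy tells you whether a rule or the router LLM decided.
print(resp["model_tier"], resp["routing_strategy"], resp["model_name"])
```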
```bash
# Install Rust 1.90+ (required for Edition 2024)
rustup toolchain install 1.90
rustup default 1.90
rustup component add rustfmt clippy

# Install development tools
cargo install just cargo-nextest
```

```bash
# Development build
cargo build

# Release build (optimized, includes Prometheus metrics)
cargo build --release
```

```bash
# Run all tests
cargo test

# Run with nextest (faster)
cargo nextest run

# Run integration tests
cargo test --test '*'
```

```bash
# Format code
cargo fmt

# Lint with clippy
cargo clippy --all-targets --all-features -- -D warnings
```

Quick Command Reference (using justfile):
| Command | Description |
|---|---|
| `just check` | Run clippy and format checks |
| `just test` | Run all tests |
| `just bench` | Run benchmarks |
| `just watch` | Auto-rebuild on file changes |
| `just ci` | Complete CI check (clippy + format + tests) |

See just --list for all 20+ available commands.
```bash
# With cargo (uses config.toml by default)
cargo run

# Or use release binary
./target/release/octoroute

# With custom config file
octoroute --config /path/to/custom-config.toml

# With environment variables
RUST_LOG=debug cargo run
```

```bash
# Start server (default: looks for config.toml)
octoroute

# Start server with custom config
octoroute --config custom.toml

# Generate config template to stdout
octoroute config

# Write config template to file
octoroute config -o config.toml

# Show version
octoroute --version

# Show help
octoroute --help
```

Features implemented:
- ✅ OpenAI-compatible API (`/v1/chat/completions`, `/v1/models`) with SSE streaming
- ✅ Legacy API with `/chat`, `/health`, `/models`, `/metrics` endpoints
- ✅ Multi-tier model selection (fast/balanced/deep)
- ✅ Rule-based + LLM-based hybrid routing
- ✅ Priority-based routing with weighted distribution
- ✅ Health checking with automatic endpoint recovery
- ✅ Retry logic with request-scoped exclusion
- ✅ Timeout enforcement (global + per-tier overrides)
- ✅ Prometheus metrics
- ✅ Performance benchmarks (Criterion)
- ✅ CI/CD pipeline (GitHub Actions)
- ✅ Comprehensive config validation
- ✅ Development tooling (justfile with 20+ recipes)
- ✅ CLI with config generation (`octoroute config` and `--config` flag)
- ✅ Comprehensive test coverage (348+ tests across 51 integration test files)
- ✅ Zero clippy warnings
- ✅ Zero tech debt
Route simple commands to 8B, complex reasoning to 120B:
```python
import requests

def ask_llm(message, importance="normal"):
    response = requests.post("http://localhost:3000/chat", json={
        "message": message,
        "importance": importance
    })
    return response.json()["content"]

# Uses 8B model (fast)
ask_llm("What's the weather like?")

# Uses 120B model (intelligent routing)
ask_llm("Design a distributed consensus algorithm", importance="high")
```

Share your LLM fleet with family/friends, automatically balancing load:
- Bob's casual question → 8B
- Alice's code review → 30B
- Charlie's essay writing → 120B
Integrate with IDE/scripts to route tasks intelligently:
```bash
# Quick code explanation (8B)
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"Explain this function"}'

# Deep code review (120B)
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"Review for security issues", "importance":"high"}'
```

Routing latency (tested on M2 Mac):

| Strategy | Latency | Notes |
|---|---|---|
| Rule-based | <1ms | Pure CPU, no LLM |
| LLM-based | ~250ms | With 30B router model |
| Hybrid | <1ms (rule hit) | Best of both worlds |
Throughput: Limited by model inference, not routing overhead.
Contributions welcome! Please see Development Guide for guidelines.
Areas for contribution:
- Additional routing strategies (e.g., RL-based, tool-based)
- Caching layer for repeated prompts
- Web UI for routing visualization
- More comprehensive benchmarks
- Function calling / tool use support
Q: Why not just use LangChain's router chains?
A: LangChain is Python-only and has significant overhead. Octoroute is Rust-native, type-safe, and designed specifically for local/self-hosted LLMs with minimal latency.
Q: Can Octoroute route to cloud APIs instead of local models?
A: Technically yes (they're OpenAI-compatible), but Octoroute is optimized for local deployments. Cloud APIs already handle routing internally.
Q: Which model servers and models are supported?
A: Any OpenAI-compatible endpoint (Ollama, LM Studio, llama.cpp, vLLM, etc.). Tested with Qwen, Llama, Mistral families.
Q: Does Octoroute support streaming?
A: Yes! The OpenAI-compatible endpoint (`/v1/chat/completions`) supports full SSE streaming. Set `stream: true` in your request and receive tokens as they're generated. The legacy `/chat` endpoint returns buffered responses only.
Q: How does LLM-based routing work?
A: A 30B model analyzes your prompt + metadata and outputs one of: FAST, BALANCED, DEEP. This decision is then used to route the actual request.
Q: What observability does Octoroute provide?
A: Octoroute provides two observability levels:
- Structured logs (always enabled): Use `RUST_LOG=info` to see routing decisions and health status
- Metrics (always enabled): Prometheus metrics exposed at the `/metrics` endpoint

For homelab deployments, we recommend Prometheus + Grafana for metrics visualization.
Q: Is the unauthenticated `/metrics` endpoint a security risk?
A: The `/metrics` endpoint is unauthenticated by design for simplicity in homelab deployments. It exposes operational metrics like request counts and routing latency.
Security recommendations:
- Homelab: Ensure Octoroute is only accessible on trusted networks (not exposed to the internet)
- Production: Use a reverse proxy (nginx, Caddy) to add authentication:

  ```nginx
  location /metrics {
      auth_basic "Metrics";
      auth_basic_user_file /etc/nginx/.htpasswd;
      proxy_pass http://octoroute:3000/metrics;
  }
  ```
- Alternative: Use firewall rules to restrict `/metrics` to the Prometheus server IP only

The metrics endpoint does NOT expose:
- User messages or content
- API keys or credentials
- Individual request details (only aggregates)
For internet-exposed deployments, always use authentication or IP restrictions.
Q: Why use the `prometheus` crate directly instead of OpenTelemetry?
A: We chose the direct `prometheus` crate (v0.14) for simplicity and homelab-friendliness:
- Simplicity: No intermediate abstraction layers - just Prometheus
- Homelab-friendly: Works with existing Prometheus/Grafana setups out of the box, no OTEL collector required
- Stability: Mature, actively maintained library
The /metrics endpoint works with your existing Prometheus scraper without any additional infrastructure.
MIT License - see LICENSE for details.
- Built on top of open-agent-sdk-rust
- Inspired by LangChain's router chains
- Thanks to the Rust, Tokio, and Axum communities
Made with 🦑 for homelab enthusiasts
Route smarter, compute less.