Radium is an open-source, unified HTTP API gateway for accessing multiple AI model providers. Built in Rust as a resource-efficient, high-performance LLM proxy, Radium acts as a central orchestrator for every request and response. It delivers uniform endpoints for text and chat completions, intelligent fallback logic, and complete observability, turning client-to-LLM interactions into a seamless, well-managed experience.
- High Performance: Leverages Rust's speed and memory safety for low-latency, high-throughput proxying.
- Multi-Provider Support: Seamlessly connects to OpenAI, Anthropic, AWS Bedrock, Cohere, and more.
- Flexible Integration: Minimal configuration required for various LLM backends.
- Built-in Monitoring: Prometheus metrics and comprehensive observability.
- Developer-Friendly: Simple setup, clear documentation, and extensible design.
- Fallback Support: Automatic failover between providers for reliability.
- CORS Support: Configurable Cross-Origin Resource Sharing.
- Structured Logging: Configurable logging with rotation and timestamps.
- Docker Ready: Container support with multi-platform builds.
- Scalable Architecture: Connection pooling and request timeout handling.
- Open Source: Licensed under Apache 2.0.
| Provider | Key | Configuration Section | Status |
|---|---|---|---|
| OpenAI | `openai` | `[openai]` | ✅ |
| Anthropic | `anthropic` | `[anthropic]` | ⏳ |
| Azure OpenAI | `azure` | `[azure_openai]` | ⏳ |
| AWS Bedrock | `bedrock` | `[bedrock]` | ⏳ |
| Cohere | `cohere` | `[cohere]` | ⏳ |
| Google Vertex AI | `vertex` | `[vertex]` | ⏳ |
- Rust: Ensure you have Rust installed (version 1.93 or later). Install via rustup
- Git: Required to clone the repository
- API Keys: Valid API keys for your chosen LLM providers
- Optional: Docker for containerized deployment
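A quick way to confirm the prerequisites are available before building:

```sh
# Verify toolchain and helper tools are installed
rustc --version && cargo --version   # Rust toolchain, 1.93 or later
git --version                        # needed to clone the repository
docker --version                     # optional, only for containerized deployment
```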
- Clone the repository:

  ```sh
  git clone https://github.com/riipandi/radium.git && cd radium
  ```

- Build the project:

  ```sh
  # Using cargo directly
  cargo build --release

  # Or using just (recommended)
  just build
  ```

- Set up configuration:

  ```sh
  # Copy example configuration
  cp config.toml.example config.toml

  # Edit with your API keys and settings
  nano config.toml
  ```

Create your config.toml file based on config.toml.example.
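The authoritative schema lives in config.toml.example; the sketch below only illustrates the general shape of the file, and the individual key names (`port`, `api_key`) are assumptions rather than Radium's documented options. The `[openai]` section name comes from the provider table above.

```sh
# Rough sketch only -- prefer copying config.toml.example and editing it.
# The [openai] section name matches the provider table; key names are assumptions.
cat > config.toml <<'EOF'
[server]
port = 8000              # assumed key; the examples in this README use port 8000

[openai]
api_key = "sk-..."       # assumed key name; substitute your real OpenAI API key
EOF
```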
Start the server:

```sh
# Using cargo
cargo run -- serve

# Using just (with auto-reload for development)
just dev

# Using the built binary
./target/release/radium serve

# With a custom config path
./target/release/radium serve -config /path/to/config.toml
```

Radium provides OpenAI-compatible API endpoints:
- `POST /v1/chat/completions` - Chat completions with conversation context
- `POST /v1/text/completions` - Simple text completions
- `GET /metrics` - Prometheus metrics for monitoring
All examples in this README assume the API is served at `http://localhost:8000`.
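Once the server is running, a quick smoke test against the endpoints above might look like the following; the `/healthz` endpoint and the Bearer token header are taken from the benchmark examples later in this README, and the model name depends on the provider you configured:

```sh
# Liveness check (endpoint used by the benchmark section below)
curl -s http://localhost:8000/healthz

# OpenAI-compatible chat completion routed through the gateway
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}'
```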
Here are example benchmarks using the bombardier HTTP benchmarking tool:

Test Configuration:

- Concurrent connections: 125
- Number of requests: 100,000
- Target endpoint: `GET /healthz`
- Environment: local development server

```sh
bombardier -c 125 -n 100000 -m GET http://localhost:8000/healthz
```

Example Results:
```
Statistics        Avg      Stdev        Max
  Reqs/sec     35160.01    8333.65   46008.85
  Latency        3.55ms     2.04ms    44.74ms
  HTTP codes:
    1xx - 0, 2xx - 100000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    16.64MB/s
```
```sh
# Light load - 50 concurrent connections, 1000 requests
bombardier -n 1000 -c 50 http://localhost:8000/healthz

# Medium load - 100 concurrent connections, 2500 requests with 10s duration
bombardier -d 10s -n 2500 -c 100 http://localhost:8000/healthz

# Heavy load - 500 concurrent connections, 10000 requests with 10s duration
bombardier -d 10s -n 10000 -c 500 http://localhost:8000/healthz

# Sustained load test - 30 seconds duration
bombardier -c 100 -d 30s http://localhost:8000/healthz
```

For testing the actual LLM proxy endpoints:
```sh
# Test chat completions endpoint (requires valid API key)
bombardier -n 100 -c 10 -m POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -b '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}' \
  http://localhost:8000/v1/chat/completions
```

Performance Notes:
- Performance of the actual LLM endpoints depends on upstream provider latency
- Connection pooling and keep-alive significantly improve throughput
- Memory usage remains stable under high concurrent load
For detailed documentation, see:
- HTTP Transport Documentation - Complete API reference
- OpenAPI Specification - Machine-readable API spec
- Example Requests - Sample requests using httl
Radium provides comprehensive monitoring through Prometheus metrics at the `/metrics` endpoint, including:
- Request counts by provider, model, and status
- Request latency histograms
- Token usage statistics
- Error rates and types
- Connection pool statistics
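To see exactly which metrics your build exposes, scrape the endpoint directly; the metric names themselves are defined by Radium, so inspect the `# HELP` lines before wiring up dashboards or alerts:

```sh
# Fetch the raw Prometheus exposition text
curl -s http://localhost:8000/metrics | head -n 20

# List only the metric names and descriptions
curl -s http://localhost:8000/metrics | grep '^# HELP'
```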
Radium includes full Docker support with multi-platform builds:

```sh
# Build Docker image
just docker-build

# Run with Docker
just docker-run serve

# Using Docker Compose
just compose-up
```

We welcome contributions to make Radium even better!
- Read our Contributing Guidelines before getting started
- Fork the repository and create a feature branch
- Submit a pull request with a clear title and description
- Join the discussion on GitHub Issues
Join the flow. Amplify your AI-powered applications with Radium!
Radium is licensed under the Apache License 2.0. See the LICENSE file for more information.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you shall be licensed under the Apache License 2.0, without any additional terms or conditions.
Copyrights in this project are retained by their contributors.
Psst! If you like my work, you can support me via GitHub Sponsors.