The fastest way to build AI applications that never go down.
Bifrost is a high-performance AI gateway that connects you to 8+ providers (OpenAI, Anthropic, Bedrock, and more) through a single API. Get automatic failover, load balancing, and zero-downtime deployments in under 30 seconds.
Just launched: Native MCP (Model Context Protocol) support for seamless tool integration
Performance: Adds only 11 µs of latency while handling 5,000+ RPS
Reliability: 100% uptime with automatic provider failover
Go from zero to production-ready AI gateway in under a minute. Here's how:
What You Need
- Any AI provider API key (OpenAI, Anthropic, Bedrock, etc.)
- Docker OR Go 1.23+ installed
- 30 seconds of your time
For detailed setup guides with multiple providers, advanced configuration, and language examples, see the Quick Start Documentation
Step 1: Start Bifrost (choose one)
# Docker (easiest - zero config needed!)
docker pull maximhq/bifrost
docker run -p 8080:8080 maximhq/bifrost
# Or install the Go binary (make sure your Go bin directory is on your PATH)
go install github.com/maximhq/bifrost/transports/bifrost-http@latest
bifrost-http -port 8080
Step 2: Open the built-in web interface
# Configure visually - no config files needed!
# macOS:
open http://localhost:8080
# Linux:
xdg-open http://localhost:8080
# Windows:
start http://localhost:8080
# Or simply open http://localhost:8080 manually in your browser
Step 3: Add your provider via the web UI or API
# Via Web UI: Just click "Add Provider" and enter your OpenAI API key
# Or via API:
curl -X POST http://localhost:8080/providers \
-H "Content-Type: application/json" \
-d '{
"provider": "openai",
"keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]
}'
# Make sure OPENAI_API_KEY is set in the environment Bifrost runs in, or pass it through to Docker with -e (docker run -e OPENAI_API_KEY -p 8080:8080 maximhq/bifrost).
Step 4: Test that it works
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4o-mini",
"messages": [
{"role": "user", "content": "Hello from Bifrost! π"}
]
}'
Boom! You're done!
Your AI gateway is now running with a beautiful web interface. You can:
- Configure everything visually - No more JSON files!
- Monitor requests in real time - See logs, analytics, and metrics
- Add providers and MCP clients on the fly - Scale and fail over without restarts
- Drop into existing code - Zero changes to your OpenAI/Anthropic apps
Want more? See our Complete Setup Guide for multi-provider configuration, failover strategies, and production deployment.
Key features of Bifrost:
- Built-in Web UI: Visual configuration, real-time monitoring, and analytics dashboard - no config files needed
- Zero-Config Startup & Easy Integration: Start immediately with dynamic provider configuration, or integrate existing SDKs by simply updating the base_url - one line of code to get running
- Multi-Provider Support: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
- Fallback Mechanisms: Automatically retry failed requests with alternative models or providers
- Dynamic Key Management: Rotate and manage API keys efficiently with weighted distribution
- Connection Pooling: Optimize network resources for better performance
- Concurrency Control: Manage rate limits and parallel requests effectively
- Flexible Transports: Multiple transports for easy integration into your infrastructure
- Plugin-First Architecture: No callback hell; add or build custom plugins with ease
- MCP Integration: Built-in Model Context Protocol (MCP) support for external tool integration and execution
- Custom Configuration: Granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
- Built-in Observability: Native Prometheus metrics out of the box - no wrappers, no sidecars, just drop it in and scrape
- SDK Support: Bifrost is available as a Go package, so you can use it directly in your own applications
Bifrost is built with a modular architecture:
bifrost/
├── core/                 # Core functionality and shared components
│   ├── providers/        # Provider-specific implementations
│   ├── schemas/          # Interfaces and structs used in Bifrost
│   └── bifrost.go        # Main Bifrost implementation
│
├── docs/                 # Documentation for Bifrost's configuration and contribution guides
│   └── ...
│
├── tests/                # All test setups related to /core and /transports
│   └── ...
│
├── transports/           # Interface layers (HTTP, gRPC, etc.)
│   ├── bifrost-http/     # HTTP transport implementation
│   └── ...
│
└── plugins/              # Plugin implementations
    ├── maxim/
    └── ...
The system uses a provider-agnostic approach with well-defined interfaces, making it easy to extend to new AI providers. All interfaces are defined in core/schemas/ and can be used as a reference for contributions.
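As a rough illustration of the idea (a simplified sketch only; the actual interfaces in core/schemas/ differ in names and detail), a provider abstraction in Go might look like this:

package schemas // illustrative package name, not the real core/schemas

import "context"

// Message, ChatRequest, and ChatResponse are simplified stand-ins for the
// request/response structs a gateway would define.
type Message struct {
	Role    string
	Content string
}

type ChatRequest struct {
	Model    string
	Messages []Message
}

type ChatResponse struct {
	Content string
}

// Provider is the kind of interface each upstream (OpenAI, Anthropic,
// Bedrock, ...) would implement, so the core can route, retry, and fail
// over without caring which vendor handles the call.
type Provider interface {
	Name() string
	ChatCompletion(ctx context.Context, req ChatRequest) (*ChatResponse, error)
}

In this model, adding a new provider amounts to writing one more implementation of that contract under core/providers/.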
There are three ways to use Bifrost - choose the one that fits your needs:
1. Go package: For direct integration into your Go applications. Provides maximum performance and control.
Quick example:
go get github.com/maximhq/bifrost/core
2. HTTP transport (Docker): For language-agnostic integration and microservices architectures.
Quick example:
docker pull maximhq/bifrost
docker run -p 8080:8080 \
-v $(pwd)/config.json:/app/config/config.json \
-e OPENAI_API_KEY \
maximhq/bifrost
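Once the container is up, any language that speaks HTTP can call it. As a minimal sketch in Go using only the standard library (it assumes Bifrost is listening on localhost:8080 with an OpenAI key configured, as in the quick start above):

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same request as the curl test in the quick start.
	body := []byte(`{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "Hello from Bifrost!"}]}`)

	resp, err := http.Post("http://localhost:8080/v1/chat/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}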
3. Drop-in replacement: Replace existing OpenAI/Anthropic endpoints without changing your application code.
Quick example:
- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"
Bifrost adds virtually zero overhead to your AI requests. In our sustained 5,000 RPS benchmark (see full methodology in docs/benchmarks.md), the gateway added only 11 µs of overhead per request; that's less than 0.001% of a typical GPT-4o response time.
Translation: Your users won't notice Bifrost is there, but you'll sleep better knowing your AI never goes down.
Metric | t3.medium | t3.xlarge | Δ |
---|---|---|---|
Added latency (Bifrost overhead) | 59 µs | 11 µs | -81% |
Success rate @ 5,000 RPS | 100% | 100% | No failed requests |
Avg. queue wait time | 47 µs | 1.67 µs | -96% |
Avg. request latency (incl. provider) | 2.12 s | 1.61 s | -24% |
- Perfect Success Rate: 100% of requests succeeded on both instance types, even at 5,000 RPS.
- Tiny Total Overhead: less than 15 µs of added latency per request on average.
- Efficient Queue Management: just 1.67 µs average wait time in the t3.xlarge test.
- Fast Key Selection: ~10 ns to pick the right weighted API key (see the sketch after this list for the general technique).
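To make "weighted API key" concrete, here is a conceptual sketch in Go of picking a key in proportion to its weight; it illustrates the general technique, not Bifrost's actual implementation:

package main

import (
	"fmt"
	"math/rand"
)

type apiKey struct {
	Value  string
	Weight float64
}

// pickWeighted returns a key with probability proportional to its weight.
func pickWeighted(keys []apiKey) apiKey {
	total := 0.0
	for _, k := range keys {
		total += k.Weight
	}
	r := rand.Float64() * total
	for _, k := range keys {
		r -= k.Weight
		if r <= 0 {
			return k
		}
	}
	return keys[len(keys)-1] // guard against floating-point rounding
}

func main() {
	keys := []apiKey{
		{Value: "key-a", Weight: 0.7},
		{Value: "key-b", Weight: 0.3},
	}
	fmt.Println("selected:", pickWeighted(keys).Value)
}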
Bifrost is deliberately configurable so you can dial the speed vs. memory trade-off:
Config Knob | Effect |
---|---|
initial_pool_size | How many objects are pre-allocated. Higher = faster, more memory |
buffer_size & concurrency | Queue depth and max parallel workers (can be set per provider) |
Retry / Timeout | Tune aggressiveness for each provider to meet your SLOs |
Choose higher settings (like the t3.xlarge profile above) for raw speed, lower ones (like the t3.medium profile) for a smaller memory footprint, or find the sweet spot for your workload.
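To see why a larger initial_pool_size trades memory for speed, here is a generic Go sketch of a pre-allocated object pool; it shows the mechanism conceptually and is not Bifrost's internal pool implementation:

package main

import "fmt"

type request struct {
	payload []byte
}

// newPool pre-allocates size request objects so the hot path can borrow one
// without touching the allocator. A bigger pool costs more memory up front
// but avoids allocations (and GC pressure) under load.
func newPool(size int) chan *request {
	p := make(chan *request, size)
	for i := 0; i < size; i++ {
		p <- &request{payload: make([]byte, 0, 4096)}
	}
	return p
}

func main() {
	pool := newPool(1024) // analogous to a higher initial_pool_size

	r := <-pool               // borrow an object for a request
	r.payload = r.payload[:0] // reset before use
	// ... handle the request ...
	pool <- r // return it for reuse

	fmt.Println("pooled objects available:", len(pool))
}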
Need more numbers? Dive into the full benchmark report for breakdowns of every internal stage (JSON marshalling, HTTP call, parsing, etc.), hardware sizing guides and tuning tips.
Everything you need to master Bifrost, from 30-second setup to production-scale deployments.
I want to get started (2 minutes)
- Documentation Hub - Your complete roadmap to Bifrost
- Go Package Setup - Direct integration into your Go app
- HTTP API Setup - Language-agnostic service deployment
- Drop-in Replacement - Replace OpenAI/Anthropic with zero code changes
I want to understand what Bifrost can do
- Multi-Provider Support - Connect to 8+ AI providers with one API
- Fallback & Reliability - Never lose a request with automatic failover
- MCP Tool Integration - Give your AI external capabilities
- Plugin Ecosystem - Extend Bifrost with custom middleware
- Key Management - Rotate API keys without downtime
- Networking - Proxies, timeouts, and connection tuning
I want to deploy this to production
- System Architecture - Understand how Bifrost works internally
- Performance Tuning - Squeeze out every microsecond
- Production Deployment - Scale to millions of requests
- Complete API Reference - Every endpoint, parameter, and response
- Error Handling - Troubleshoot like a pro
I'm migrating from another tool
- Migration Guides - Step-by-step migration from OpenAI, Anthropic, LiteLLM
- Real-World Examples - Production-ready code samples
- Common Questions - Solutions to frequent issues
Join our Discord for:
- Quick setup assistance and troubleshooting
- Best practices and configuration tips
- Community discussions and support
- Real-time help with integrations
See our Contributing Guide for detailed information on how to contribute to Bifrost. We welcome contributions of all kinds, whether it's bug fixes, features, documentation improvements, or new ideas. Feel free to open an issue, and once it's assigned, submit a Pull Request.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Built with ❤️ by Maxim