Unified access to the world's leading AI models through one elegant interface
Rodex AI is a powerful, OpenAI-compatible API gateway that unifies multiple AI providers under a single, consistent interface. Built for developers who demand flexibility without complexity.
```javascript
// One API, Multiple Providers
const response = await fetch('https://api-rodex-cli.vercel.app/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer Rodex',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'rodex', // Auto-selects the fastest model
    messages: [{ role: 'user', content: 'Hello, AI!' }]
  })
});
```
- **Drop-in compatible** — use your existing OpenAI-compatible tools and libraries without any code changes.
- **Multi-provider** — access Groq, XAI (Grok), Gemini, OpenRouter, and HuggingFace from one endpoint.
- **Auto-routing** — let Rodex automatically choose the fastest available model for your request.
- **Developer-focused** — built specifically for software development tasks, with best practices baked in.
- **Custom instructions** — add personalized context to any request without modifying your prompts.
- **Simple auth** — one token (`Rodex`) works across every provider.
- Node.js 18 or higher
- At least one AI provider API key
```bash
# Clone the repository
git clone https://github.com/zen69coder/rodex-api-endpoint.git
cd rodex-api-endpoint

# Install dependencies
npm install
# or use pnpm/yarn
pnpm install

# Set up environment variables
cp .env.example .env.local
```
Edit `.env.local` and add your API keys:
```bash
# Add at least one provider API key
GROQ_API_KEY=gsk_your_groq_api_key_here
XAI_API_KEY=xai-your_xai_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
OPENROUTER_API_KEY=your_openrouter_api_key_here
HUGGINGFACE_API_KEY=your_huggingface_api_key_here

# Optional: Your deployment URL
NEXT_PUBLIC_SITE_URL=http://localhost:3000
```
```bash
npm run dev
```
Open http://localhost:3000 and start building! 🎉
All requests require the following header:
```
Authorization: Bearer Rodex
```
Create a chat completion with any supported model.
Request Example:
```json
{
  "model": "rodex-llama-3.3-70b-versatile",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful coding assistant"
    },
    {
      "role": "user",
      "content": "Write a TypeScript function to debounce user input"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 2000,
  "custom_instructions": "Focus on type safety and include JSDoc comments"
}
```
Response Example:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "rodex-llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's a type-safe debounce function..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 256,
    "total_tokens": 298
  }
}
```
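The useful fields can be pulled out with a small helper. This is a sketch based on the response shape shown above; the `parseCompletion` function is illustrative, not part of the gateway.

```javascript
// Extract the assistant's reply and token usage from an
// OpenAI-style chat completion response.
function parseCompletion(response) {
  const choice = response.choices && response.choices[0];
  if (!choice) {
    throw new Error('Response contains no choices');
  }
  return {
    content: choice.message.content,
    finishReason: choice.finish_reason,
    totalTokens: response.usage ? response.usage.total_tokens : null,
  };
}

// Usage with the example response above:
const parsed = parseCompletion({
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: "Here's a type-safe debounce function..." },
      finish_reason: 'stop',
    },
  ],
  usage: { prompt_tokens: 42, completion_tokens: 256, total_tokens: 298 },
});
console.log(parsed.totalTokens); // 298
```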
Retrieve all available models based on your configured providers.
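A minimal sketch of consuming this endpoint. It assumes the models route returns an OpenAI-style list (`{ "object": "list", "data": [{ "id": ... }] }`), which is the usual shape for OpenAI-compatible gateways; verify against the live response.

```javascript
// Pull the model ids out of an OpenAI-style /models payload.
function extractModelIds(payload) {
  return (payload.data || []).map((model) => model.id);
}

// Fetch the live list (requires network access to the gateway):
async function listModels(baseUrl) {
  const res = await fetch(`${baseUrl}/api/v1/models`, {
    headers: { Authorization: 'Bearer Rodex' },
  });
  return extractModelIds(await res.json());
}

// Offline usage with a sample payload:
console.log(extractModelIds({ object: 'list', data: [{ id: 'rodex' }] })); // ['rodex']
```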
Check real-time provider availability and system health.
| Model | Description |
|---|---|
| `rodex` | Automatically selects the fastest available model |
| Model | Description |
|---|---|
| `rodex-llama-3.3-70b-versatile` | Latest Llama 3.3 70B - Most versatile |
| `rodex-llama-3.1-70b-versatile` | Llama 3.1 70B - Production ready |
| `rodex-llama-3.1-8b-instant` | Llama 3.1 8B - Lightning fast |
| `rodex-mixtral-8x7b-32768` | Mixtral MoE - Long context |
| `rodex-gemma2-9b-it` | Gemma 2 9B - Efficient |
| Model | Description |
|---|---|
| `rodex-grok-beta` | Grok's latest model |
| `rodex-grok-vision-beta` | Grok with vision capabilities |
| Model | Description |
|---|---|
| `rodex-gemini-2.0-flash-exp` | Latest Gemini 2.0 Flash |
| `rodex-gemini-1.5-pro` | Gemini 1.5 Pro - Most capable |
| `rodex-gemini-1.5-flash` | Gemini 1.5 Flash - Fast |
| `rodex-gemini-1.5-flash-8b` | Gemini 1.5 Flash 8B - Compact |
| Model | Description |
|---|---|
| `rodex-anthropic/claude-3.5-sonnet` | Claude 3.5 Sonnet |
| `rodex-openai/gpt-4-turbo` | GPT-4 Turbo |
| `rodex-{provider}/{model}` | Any OpenRouter model |
| Model | Description |
|---|---|
| `rodex-meta-llama/Meta-Llama-3-8B-Instruct` | Llama 3 8B Instruct |
| `rodex-{any-huggingface-model}` | Other HuggingFace models |
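Since every model name carries the `rodex-` prefix, a small client-side helper can normalize names before sending a request. The `normalizeModel` function below is a hypothetical convenience, not part of the gateway:

```javascript
// Ensure a model name carries the required 'rodex-' prefix.
// The bare 'rodex' alias is the auto-routing model and is left as-is.
function normalizeModel(name) {
  if (name === 'rodex' || name.startsWith('rodex-')) return name;
  return `rodex-${name}`;
}

console.log(normalizeModel('llama-3.3-70b-versatile')); // 'rodex-llama-3.3-70b-versatile'
console.log(normalizeModel('rodex')); // 'rodex'
```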
```bash
curl https://api-rodex-cli.vercel.app/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer Rodex" \
  -d '{
    "model": "rodex",
    "messages": [
      {
        "role": "user",
        "content": "Create a REST API with authentication"
      }
    ],
    "custom_instructions": "Use Express.js, TypeScript, and JWT"
  }'
```
```python
from openai import OpenAI

# Initialize client
client = OpenAI(
    api_key="Rodex",
    base_url="https://api-rodex-cli.vercel.app/api/v1"
)

# Create completion
response = client.chat.completions.create(
    model="rodex-grok-beta",
    messages=[
        {
            "role": "user",
            "content": "Explain the difference between async/await and promises"
        }
    ]
)

print(response.choices[0].message.content)
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'Rodex',
  baseURL: 'https://api-rodex-cli.vercel.app/api/v1'
});

async function generateCode() {
  const completion = await client.chat.completions.create({
    model: 'rodex',
    messages: [
      {
        role: 'user',
        content: 'Create a React component with TypeScript'
      }
    ],
    custom_instructions: 'Use functional components and hooks'
  });

  console.log(completion.choices[0].message.content);
}

generateCode();
```
```javascript
async function chat(message) {
  const response = await fetch(
    'https://api-rodex-cli.vercel.app/api/v1/chat/completions',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer Rodex'
      },
      body: JSON.stringify({
        model: 'rodex-gemini-2.0-flash-exp',
        messages: [
          { role: 'user', content: message }
        ]
      })
    }
  );

  const data = await response.json();
  return data.choices[0].message.content;
}

// Usage
const answer = await chat('Write a binary search algorithm in Python');
console.log(answer);
```
Use `"model": "rodex"` to automatically route to the fastest model:
```json
{
  "model": "rodex",
  "messages": [
    {
      "role": "user",
      "content": "Generate a SQL query"
    }
  ]
}
```
Currently defaults to Groq's Llama 3.3 70B for optimal performance.
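Because `rodex` routes to whichever provider is currently fastest, a caller may still want an explicit fallback chain for when the auto-routed request fails. This retry pattern is a client-side sketch, not a gateway feature; `send` stands in for whatever function performs the HTTP request:

```javascript
// Try the auto-routing alias first, then fall back to explicit models.
// `send` is any async function that performs the request and throws on failure.
async function completeWithFallback(
  send,
  message,
  models = ['rodex', 'rodex-llama-3.1-8b-instant']
) {
  let lastError;
  for (const model of models) {
    try {
      return await send({ model, messages: [{ role: 'user', content: message }] });
    } catch (err) {
      lastError = err; // remember the failure and try the next model
    }
  }
  throw lastError;
}
```

The fallback list here reuses models from the tables above; tailor it to whichever providers you have configured.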
Inject context into any request without modifying prompts:
```json
{
  "model": "rodex-grok-beta",
  "messages": [
    {
      "role": "user",
      "content": "Build a user authentication system"
    }
  ],
  "custom_instructions": "Use TypeScript, Express, PostgreSQL, and follow SOLID principles. Include error handling and input validation."
}
```
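Since `custom_instructions` travels alongside the standard fields, a thin wrapper can attach the same project context to every request without touching individual call sites. The `withProjectContext` helper below is a sketch, not part of the gateway:

```javascript
// Build a request body that always carries the project's custom_instructions.
function withProjectContext(body, instructions) {
  return { ...body, custom_instructions: instructions };
}

const body = withProjectContext(
  {
    model: 'rodex-grok-beta',
    messages: [{ role: 'user', content: 'Build a user authentication system' }]
  },
  'Use TypeScript, Express, PostgreSQL, and follow SOLID principles.'
);
console.log(body.custom_instructions);
```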
Rodex is optimized for developers:
- ✅ Code generation with best practices
- ✅ Architecture and design patterns
- ✅ Debugging and performance optimization
- ✅ Technical documentation
- ✅ Code reviews and refactoring suggestions
Click the button below to deploy your own instance:
Required Environment Variables:
```
GROQ_API_KEY
XAI_API_KEY
GEMINI_API_KEY
OPENROUTER_API_KEY
HUGGINGFACE_API_KEY
NEXT_PUBLIC_SITE_URL
```
```bash
# Build for production
npm run build

# Start production server
npm start
```
| Variable | Required | Description |
|---|---|---|
| `GROQ_API_KEY` | Optional* | Groq API key for ultra-fast inference |
| `XAI_API_KEY` | Optional* | XAI API key for Grok models |
| `GEMINI_API_KEY` | Optional* | Google Gemini API key |
| `OPENROUTER_API_KEY` | Optional* | OpenRouter API key (access 100+ models) |
| `HUGGINGFACE_API_KEY` | Optional* | HuggingFace API key |
| `NEXT_PUBLIC_SITE_URL` | Optional | Your deployment URL |
Note: At least one provider API key is required.
| Provider | Get Key |
|---|---|
| Groq | console.groq.com |
| XAI | console.x.ai |
| Gemini | makersuite.google.com |
| OpenRouter | openrouter.ai/keys |
| HuggingFace | huggingface.co/settings/tokens |
```
rodex-api-endpoint/
├── app/
│   ├── api/v1/
│   │   ├── chat/completions/route.ts   # Main completion endpoint
│   │   ├── models/route.ts             # Available models
│   │   └── status/route.ts             # Health check
│   ├── docs/page.tsx                   # API documentation
│   ├── page.tsx                        # Status dashboard
│   ├── layout.tsx                      # Root layout
│   └── globals.css                     # Global styles
├── lib/
│   ├── providers/
│   │   ├── base.ts                     # Base provider interface
│   │   ├── groq.ts                     # Groq implementation
│   │   ├── xai.ts                      # XAI implementation
│   │   ├── gemini.ts                   # Gemini implementation
│   │   ├── openrouter.ts               # OpenRouter implementation
│   │   └── huggingface.ts              # HuggingFace implementation
│   ├── provider-factory.ts             # Provider selection logic
│   ├── rodex-instructions.ts           # System instructions
│   └── types.ts                        # TypeScript definitions
├── public/
│   └── llm.txt                         # LLM discoverability
├── .env.example                        # Environment template
└── README.md                           # You are here!
```
**Authorization required**

Solution: ensure you're including the authorization header:

```
Authorization: Bearer Rodex
```

**Model must use 'rodex-' prefix**

Solution: all model names must start with `rodex-`:

- ✅ `rodex-llama-3.3-70b-versatile`
- ✅ `rodex`
- ❌ `llama-3.3-70b-versatile`

**Provider not configured**

Solution:

- Check that you've added the API key to `.env.local`
- Verify the key is valid and active
- Restart your development server
- Check `/api/v1/status` for provider status

**Models not showing up**

Solution:

- Visit `/api/v1/status` to see configured providers
- Add missing API keys to enable more providers
- Ensure environment variables are properly set
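A quick local preflight can mirror what `/api/v1/status` reports by checking which of the documented provider keys are set. The helper below is a sketch that only inspects environment variables (names taken from the configuration table above); it does not validate the keys against the providers:

```javascript
// Map provider names to the environment variables documented above.
const PROVIDER_KEYS = {
  groq: 'GROQ_API_KEY',
  xai: 'XAI_API_KEY',
  gemini: 'GEMINI_API_KEY',
  openrouter: 'OPENROUTER_API_KEY',
  huggingface: 'HUGGINGFACE_API_KEY',
};

// Return the providers whose API key is present in the given env object.
function configuredProviders(env) {
  return Object.entries(PROVIDER_KEYS)
    .filter(([, key]) => Boolean(env[key]))
    .map(([provider]) => provider);
}

// Usage: check the current process environment.
console.log('Configured providers:', configuredProviders(process.env));
```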
Contributions are welcome! Here's how you can help:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
```bash
# Install dependencies
npm install

# Run development server
npm run dev

# Type checking
npm run type-check

# Linting
npm run lint

# Build for production
npm run build
```
This project is licensed under the MIT License - see the LICENSE file for details.
Built with amazing technologies:
- Next.js - The React Framework
- shadcn/ui - Beautiful UI components
- Groq - Ultra-fast AI inference
- XAI - Grok models
- Google Gemini - Advanced AI models
- OpenRouter - Access to 100+ models
- HuggingFace - Open-source AI community
If you find Rodex AI useful, please consider:
- ⭐ Starring the repository
- 🐛 Reporting bugs
- 💡 Suggesting new features
- 🔗 Sharing with other developers
Made with ❤️ by @likhonsheikh