
🚀 Rodex AI

OpenAI-Compatible API Gateway for Multiple AI Providers

Unified access to the world's leading AI models through one elegant interface

Next.js TypeScript License

Live Demo · Documentation · Contact · Report Bug


✨ What is Rodex AI?

Rodex AI is a powerful, OpenAI-compatible API gateway that unifies multiple AI providers under a single, consistent interface. Built for developers who demand flexibility without complexity.

// One API, Multiple Providers
const response = await fetch('https://api-rodex-cli.vercel.app/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer Rodex',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'rodex', // Auto-selects fastest model
    messages: [{ role: 'user', content: 'Hello, AI!' }]
  })
});

🎯 Key Features

🔄 Drop-in OpenAI Replacement

Use your existing OpenAI-compatible tools and libraries without any code changes.

🚀 Multi-Provider Support

Access Groq, XAI (Grok), Gemini, OpenRouter, and HuggingFace from one endpoint.

⚡ Smart Model Selection

Let Rodex automatically choose the fastest available model for your request.

🎯 Engineering-Optimized

Built specifically for software development tasks with best practices baked in.

🔧 Custom Instructions

Add personalized context to any request without modifying your prompts.

🔐 Simple Authentication

One token (Bearer Rodex) for all providers and models.


🚀 Quick Start

Prerequisites

  • Node.js 18 or higher
  • At least one AI provider API key

Installation

# Clone the repository
git clone https://github.com/zen69coder/rodex-api-endpoint.git
cd rodex-api-endpoint

# Install dependencies (npm, pnpm, or yarn all work)
npm install

# Set up environment variables
cp .env.example .env.local

Configuration

Edit .env.local and add your API keys:

# Add at least one provider API key
GROQ_API_KEY=gsk_your_groq_api_key_here
XAI_API_KEY=xai-your_xai_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
OPENROUTER_API_KEY=your_openrouter_api_key_here
HUGGINGFACE_API_KEY=your_huggingface_api_key_here

# Optional: Your deployment URL
NEXT_PUBLIC_SITE_URL=http://localhost:3000

Launch

npm run dev

Open http://localhost:3000 and start building! 🎉


📖 API Reference

Authentication

All requests require the following header:

Authorization: Bearer Rodex

Endpoints

POST /api/v1/chat/completions

Create a chat completion with any supported model.

Request Example:

{
  "model": "rodex-llama-3.3-70b-versatile",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful coding assistant"
    },
    {
      "role": "user",
      "content": "Write a TypeScript function to debounce user input"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 2000,
  "custom_instructions": "Focus on type safety and include JSDoc comments"
}

Response Example:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "rodex-llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's a type-safe debounce function..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 256,
    "total_tokens": 298
  }
}

GET /api/v1/models

Retrieve all available models based on your configured providers.

GET /api/v1/status

Check real-time provider availability and system health.
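The two read-only endpoints above can be called with plain `fetch`. A minimal TypeScript sketch — the response shapes here are assumptions based on the OpenAI-compatible convention, so verify them against your own deployment:

```typescript
const BASE_URL = 'https://api-rodex-cli.vercel.app/api/v1';
const HEADERS = { Authorization: 'Bearer Rodex' };

// GET /models — OpenAI-compatible APIs typically return
// { object: "list", data: [{ id, ... }] }.
async function listModels(): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/models`, { headers: HEADERS });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const body = await res.json();
  return (body.data ?? []).map((m: { id: string }) => m.id);
}

// GET /status — the payload shape is deployment-specific, so return it as-is.
async function checkStatus(): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/status`, { headers: HEADERS });
  if (!res.ok) throw new Error(`GET /status failed: ${res.status}`);
  return res.json();
}
```

Calling `checkStatus()` before pinning a specific model is a cheap way to see which providers are currently live.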


🤖 Supported Models

🎯 Auto-Select (Recommended)

| Model | Description |
|---|---|
| `rodex` | Automatically selects the fastest available model |

⚡ Groq (Ultra-Fast Inference)

| Model | Description |
|---|---|
| `rodex-llama-3.3-70b-versatile` | Latest Llama 3.3 70B - Most versatile |
| `rodex-llama-3.1-70b-versatile` | Llama 3.1 70B - Production ready |
| `rodex-llama-3.1-8b-instant` | Llama 3.1 8B - Lightning fast |
| `rodex-mixtral-8x7b-32768` | Mixtral MoE - Long context |
| `rodex-gemma2-9b-it` | Gemma 2 9B - Efficient |

🔮 XAI Grok

| Model | Description |
|---|---|
| `rodex-grok-beta` | Grok's latest model |
| `rodex-grok-vision-beta` | Grok with vision capabilities |

💎 Google Gemini

| Model | Description |
|---|---|
| `rodex-gemini-2.0-flash-exp` | Latest Gemini 2.0 Flash |
| `rodex-gemini-1.5-pro` | Gemini 1.5 Pro - Most capable |
| `rodex-gemini-1.5-flash` | Gemini 1.5 Flash - Fast |
| `rodex-gemini-1.5-flash-8b` | Gemini 1.5 Flash 8B - Compact |

🌐 OpenRouter (100+ Models)

| Model | Description |
|---|---|
| `rodex-anthropic/claude-3.5-sonnet` | Claude 3.5 Sonnet |
| `rodex-openai/gpt-4-turbo` | GPT-4 Turbo |
| `rodex-{provider}/{model}` | Any OpenRouter model |

🤗 HuggingFace

| Model | Description |
|---|---|
| `rodex-meta-llama/Meta-Llama-3-8B-Instruct` | Llama 3 8B Instruct |
| `rodex-{any-huggingface-model}` | Other HuggingFace models |
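Across all providers the naming rule is the same: take the upstream model ID and add a `rodex-` prefix (or pass the bare `rodex` alias for auto-selection). A tiny helper makes the convention concrete — `toRodexModel` is purely illustrative, not part of the API:

```typescript
// Illustrative only: maps an upstream model ID to the gateway's naming scheme.
function toRodexModel(upstreamId: string): string {
  return upstreamId.startsWith('rodex-') ? upstreamId : `rodex-${upstreamId}`;
}

console.log(toRodexModel('anthropic/claude-3.5-sonnet'));
// → rodex-anthropic/claude-3.5-sonnet
console.log(toRodexModel('meta-llama/Meta-Llama-3-8B-Instruct'));
// → rodex-meta-llama/Meta-Llama-3-8B-Instruct
```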

💻 Usage Examples

cURL

curl https://api-rodex-cli.vercel.app/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer Rodex" \
  -d '{
    "model": "rodex",
    "messages": [
      {
        "role": "user",
        "content": "Create a REST API with authentication"
      }
    ],
    "custom_instructions": "Use Express.js, TypeScript, and JWT"
  }'

Python (OpenAI SDK)

from openai import OpenAI

# Initialize client
client = OpenAI(
    api_key="Rodex",
    base_url="https://api-rodex-cli.vercel.app/api/v1"
)

# Create completion
response = client.chat.completions.create(
    model="rodex-grok-beta",
    messages=[
        {
            "role": "user",
            "content": "Explain the difference between async/await and promises"
        }
    ]
)

print(response.choices[0].message.content)

Node.js (OpenAI SDK)

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'Rodex',
  baseURL: 'https://api-rodex-cli.vercel.app/api/v1'
});

async function generateCode() {
  const completion = await client.chat.completions.create({
    model: 'rodex',
    messages: [
      {
        role: 'user',
        content: 'Create a React component with TypeScript'
      }
    ],
    custom_instructions: 'Use functional components and hooks'
  });

  console.log(completion.choices[0].message.content);
}

generateCode();

JavaScript (Fetch API)

async function chat(message) {
  const response = await fetch(
    'https://api-rodex-cli.vercel.app/api/v1/chat/completions',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer Rodex'
      },
      body: JSON.stringify({
        model: 'rodex-gemini-2.0-flash-exp',
        messages: [
          { role: 'user', content: message }
        ]
      })
    }
  );

  const data = await response.json();
  return data.choices[0].message.content;
}

// Usage
const answer = await chat('Write a binary search algorithm in Python');
console.log(answer);

🎨 Advanced Features

Smart Model Selection

Use "model": "rodex" to automatically route to the fastest model:

{
  "model": "rodex",
  "messages": [
    {
      "role": "user",
      "content": "Generate a SQL query"
    }
  ]
}

Currently defaults to Groq's Llama 3.3 70B for optimal performance.

Custom Instructions

Inject context into any request without modifying prompts:

{
  "model": "rodex-grok-beta",
  "messages": [
    {
      "role": "user",
      "content": "Build a user authentication system"
    }
  ],
  "custom_instructions": "Use TypeScript, Express, PostgreSQL, and follow SOLID principles. Include error handling and input validation."
}

Engineering-Focused Responses

Rodex is optimized for developers:

  • ✅ Code generation with best practices
  • ✅ Architecture and design patterns
  • ✅ Debugging and performance optimization
  • ✅ Technical documentation
  • ✅ Code reviews and refactoring suggestions

🌐 Deployment

Deploy to Vercel (Recommended)

Click the button below to deploy your own instance:

Deploy with Vercel

Environment Variables (at least one provider key is required):

  • GROQ_API_KEY
  • XAI_API_KEY
  • GEMINI_API_KEY
  • OPENROUTER_API_KEY
  • HUGGINGFACE_API_KEY
  • NEXT_PUBLIC_SITE_URL

Manual Deployment

# Build for production
npm run build

# Start production server
npm start

🔧 Configuration

Environment Variables

| Variable | Required | Description |
|---|---|---|
| `GROQ_API_KEY` | Optional* | Groq API key for ultra-fast inference |
| `XAI_API_KEY` | Optional* | XAI API key for Grok models |
| `GEMINI_API_KEY` | Optional* | Google Gemini API key |
| `OPENROUTER_API_KEY` | Optional* | OpenRouter API key (access 100+ models) |
| `HUGGINGFACE_API_KEY` | Optional* | HuggingFace API key |
| `NEXT_PUBLIC_SITE_URL` | Optional | Your deployment URL |

Note: At least one provider API key is required.

Getting API Keys

| Provider | Get Key |
|---|---|
| Groq | console.groq.com |
| XAI | console.x.ai |
| Gemini | makersuite.google.com |
| OpenRouter | openrouter.ai/keys |
| HuggingFace | huggingface.co/settings/tokens |

📁 Project Structure

rodex-api-endpoint/
├── app/
│   ├── api/v1/
│   │   ├── chat/completions/route.ts    # Main completion endpoint
│   │   ├── models/route.ts              # Available models
│   │   └── status/route.ts              # Health check
│   ├── docs/page.tsx                    # API documentation
│   ├── page.tsx                         # Status dashboard
│   ├── layout.tsx                       # Root layout
│   └── globals.css                      # Global styles
├── lib/
│   ├── providers/
│   │   ├── base.ts                      # Base provider interface
│   │   ├── groq.ts                      # Groq implementation
│   │   ├── xai.ts                       # XAI implementation
│   │   ├── gemini.ts                    # Gemini implementation
│   │   ├── openrouter.ts                # OpenRouter implementation
│   │   └── huggingface.ts               # HuggingFace implementation
│   ├── provider-factory.ts              # Provider selection logic
│   ├── rodex-instructions.ts            # System instructions
│   └── types.ts                         # TypeScript definitions
├── public/
│   └── llm.txt                          # LLM discoverability
├── .env.example                         # Environment template
└── README.md                            # You are here!

🐛 Troubleshooting

Authorization required

Solution: Ensure you're including the authorization header:

Authorization: Bearer Rodex

Model must use 'rodex-' prefix

Solution: Use the bare rodex alias or prefix the upstream model ID with rodex-:

  • ✅ rodex-llama-3.3-70b-versatile
  • ✅ rodex
  • ❌ llama-3.3-70b-versatile

Provider not configured

Solution:

  1. Check that you've added the API key to .env.local
  2. Verify the key is valid and active
  3. Restart your development server
  4. Check /api/v1/status for provider status

Models not showing up

Solution:

  1. Visit /api/v1/status to see configured providers
  2. Add missing API keys to enable more providers
  3. Ensure environment variables are properly set

🤝 Contributing

Contributions are welcome! Here's how you can help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Setup

# Install dependencies
npm install

# Run development server
npm run dev

# Type checking
npm run type-check

# Linting
npm run lint

# Build for production
npm run build

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments

Built with amazing technologies: Next.js, TypeScript, and Vercel.
📞 Support & Contact

Need help? Have questions?

Reach out on Telegram or open a GitHub issue.


⭐ Show Your Support

If you find Rodex AI useful, please consider:

  • Starring the repository
  • 🐛 Reporting bugs
  • 💡 Suggesting new features
  • 🔗 Sharing with other developers

Made with ❤️ by @likhonsheikh

⬆ Back to Top
