
AI API Proxy Backend

A secure, production-ready Node.js/Express backend that acts as a proxy for AI services (OpenRouter, OpenAI, Anthropic). Designed to be consumed by multiple frontend applications with built-in authentication, rate limiting, and usage tracking.

🎯 Demo App

Live Demo: Flutter app using this integration

See this API proxy in action with a real Flutter application!

☕ Support

If you find this project helpful, consider supporting me:

Buy Me a Coffee

🚀 Features

  • Multi-Provider Support: OpenRouter, OpenAI, Anthropic (easily extensible)
  • Secure API Key Management: Keep your AI API keys safe on the server
  • Authentication: API key-based authentication for clients
  • Rate Limiting: Configurable rate limits per client/IP
  • CORS Support: Configurable allowed origins
  • Request Validation: Comprehensive input validation
  • Logging: Winston-based logging with rotation
  • Error Handling: Centralized error handling
  • Health Checks: Built-in health and status endpoints
  • Docker Support: Ready for containerized deployment
  • Production Ready: Security best practices with Helmet, compression, etc.

📋 Prerequisites

  • Node.js >= 18.0.0
  • npm >= 9.0.0
  • OpenRouter API key (or other AI provider keys)

🛠️ Installation

1. Clone or Copy Repository

git clone <your-repo-url>
cd ai-api-proxy-backend

2. Install Dependencies

npm install

3. Configure Environment

# Copy example environment file
cp .env.example .env

# Edit .env with your configuration
nano .env

4. Generate Client API Keys

Generate secure API keys for your clients:

node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

Add the generated keys to .env:

CLIENT_API_KEYS=generated_key_1,generated_key_2,generated_key_3
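
To mint several keys at once, a short one-off Node script works too. This is a sketch using only the built-in crypto module; the file name and argument handling are illustrative:

// generate-keys.js - print N random client keys, ready to paste into .env
const { randomBytes } = require('crypto');

const count = Number(process.argv[2] || 3); // how many keys to generate
const keys = Array.from({ length: count }, () => randomBytes(32).toString('hex'));

console.log(`CLIENT_API_KEYS=${keys.join(',')}`);

Run it with node generate-keys.js 5 and copy the printed line into .env.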

⚙️ Configuration

Edit the .env file with your settings:

# Server
PORT=3000
NODE_ENV=production

# OpenRouter (Required)
OPENROUTER_API_KEY=your_openrouter_key_here
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1

# Client Authentication (Required)
CLIENT_API_KEYS=key1,key2,key3

# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100

# CORS
ALLOWED_ORIGINS=https://yourdomain.com,https://app.yourdomain.com

# Logging
LOG_LEVEL=info

# App Info
APP_NAME=AI API Proxy
APP_REFERER=https://yourdomain.com
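
Note that RATE_LIMIT_WINDOW_MS=900000 is 15 minutes, so the defaults above allow 100 requests per client per 15-minute window. CLIENT_API_KEYS and ALLOWED_ORIGINS are plain comma-separated lists; a minimal sketch of how a Node config layer would split them (illustrative only, not this project's actual source):

// config-sketch.js - illustrative parsing of the comma-separated .env values
require('dotenv').config(); // assumes the dotenv package, common in Express apps

const splitList = (value) =>
  (value || '').split(',').map((item) => item.trim()).filter(Boolean);

const config = {
  port: Number(process.env.PORT || 3000),
  clientApiKeys: splitList(process.env.CLIENT_API_KEYS),
  allowedOrigins: splitList(process.env.ALLOWED_ORIGINS),
  rateLimitWindowMs: Number(process.env.RATE_LIMIT_WINDOW_MS || 900000), // 15 min
  rateLimitMaxRequests: Number(process.env.RATE_LIMIT_MAX_REQUESTS || 100),
};

module.exports = config;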

🚀 Running the Application

Development Mode

npm run dev

Production Mode

npm start

Using Docker

# Build and run with Docker Compose
docker-compose up -d

# Or build and run manually
docker build -t ai-api-proxy .
docker run -p 3000:3000 --env-file .env ai-api-proxy

📑 API Endpoints

Public Endpoints

Health Check

GET /health

Response:

{
  "status": "healthy",
  "timestamp": "2025-01-01T00:00:00.000Z",
  "uptime": 123.45,
  "environment": "production"
}

API Info

GET /api/info

Returns API documentation and available endpoints.

Detailed Status

GET /api/status

Returns detailed service status and configuration.

Protected Endpoints (Require API Key)

All protected endpoints require authentication via:

  • Header: X-API-Key: your_client_api_key
  • OR Header: Authorization: Bearer your_client_api_key

Simple Text Generation

POST /api/ai/generate

Request body:

{
  "prompt": "What is artificial intelligence?",
  "model": "google/gemini-2.0-flash-exp:free",
  "temperature": 0.7,
  "max_tokens": 500
}

Response:

{
  "success": true,
  "text": "Artificial intelligence (AI) is...",
  "model": "google/gemini-2.0-flash-exp:free",
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 50,
    "total_tokens": 60
  }
}

Chat Completion (Advanced)

POST /api/ai/completion

Request body:

{
  "provider": "openrouter",
  "model": "google/gemini-2.0-flash-exp:free",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 500
}
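
No response example is documented for this endpoint here. A call from Node 18+ (built-in fetch) might look like the sketch below; it logs the raw JSON so you can inspect the actual response shape:

// completion-example.mjs - multi-turn chat through the proxy (Node 18+)
const response = await fetch('https://your-proxy-domain.com/api/ai/completion', {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_client_api_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    provider: 'openrouter',
    model: 'google/gemini-2.0-flash-exp:free',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' },
    ],
    temperature: 0.7,
    max_tokens: 500,
  }),
});

if (!response.ok) throw new Error(`Completion failed: ${response.status}`);
console.log(await response.json()); // inspect the shape; it is not documented above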

Generate Embeddings

POST /api/ai/embedding

Request body:

{
  "text": "Text to generate embeddings for",
  "provider": "openrouter",
  "model": "text-embedding-ada-002"
}
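
Embeddings are usually compared with cosine similarity. The sketch below requests two embeddings and scores them; it assumes the response carries the vector in an embedding field, which is a guess on our part - verify the real shape against /api/info:

// embedding-similarity.mjs - compare two texts via the proxy (Node 18+)
const BASE_URL = 'https://your-proxy-domain.com';
const API_KEY = 'your_client_api_key';

async function embed(text) {
  const res = await fetch(`${BASE_URL}/api/ai/embedding`, {
    method: 'POST',
    headers: { 'X-API-Key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ text, provider: 'openrouter', model: 'text-embedding-ada-002' }),
  });
  if (!res.ok) throw new Error(`Embedding failed: ${res.status}`);
  const data = await res.json();
  return data.embedding; // assumed field name - check /api/info for the real shape
}

// Cosine similarity: dot(a, b) / (|a| * |b|)
function cosine(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const [a, b] = await Promise.all([
  embed('A cat sat on the mat.'),
  embed('A feline rested on a rug.'),
]);
console.log('cosine similarity:', cosine(a, b));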

Get Available Models

GET /api/ai/models?provider=openrouter
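
This is a plain authenticated GET with the provider as a query parameter; for example, from Node 18+:

// list-models.mjs - list the models a provider exposes through the proxy
const res = await fetch(
  'https://your-proxy-domain.com/api/ai/models?provider=openrouter',
  { headers: { 'X-API-Key': 'your_client_api_key' } },
);
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
console.log(await res.json());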

🔐 Authentication

This proxy uses API key authentication. Clients must include their API key in every request:

# Using curl
curl -X POST http://localhost:3000/api/ai/generate \
  -H "X-API-Key: your_client_api_key" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello world"}'

# Or with Authorization header
curl -X POST http://localhost:3000/api/ai/generate \
  -H "Authorization: Bearer your_client_api_key" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello world"}'

📱 Frontend Integration

Flutter/Dart Example

import 'dart:convert';
import 'package:http/http.dart' as http;

class AIProxyService {
  static const String baseUrl = 'https://your-proxy-domain.com';
  static const String apiKey = 'your_client_api_key';
  
  final http.Client client;

  AIProxyService(this.client);

  Future<String> generateText(String prompt) async {
    final response = await client.post(
      Uri.parse('$baseUrl/api/ai/generate'),
      headers: {
        'X-API-Key': apiKey,
        'Content-Type': 'application/json',
      },
      body: jsonEncode({
        'prompt': prompt,
        'model': 'google/gemini-2.0-flash-exp:free',
        'temperature': 0.7,
        'max_tokens': 500,
      }),
    );

    if (response.statusCode == 200) {
      final data = jsonDecode(response.body);
      return data['text'];
    } else {
      throw Exception('Failed to generate text');
    }
  }
}

JavaScript/TypeScript Example

class AIProxyClient {
  constructor(baseUrl, apiKey) {
    this.baseUrl = baseUrl;
    this.apiKey = apiKey;
  }

  async generateText(prompt) {
    const response = await fetch(`${this.baseUrl}/api/ai/generate`, {
      method: 'POST',
      headers: {
        'X-API-Key': this.apiKey,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        prompt,
        model: 'google/gemini-2.0-flash-exp:free',
        temperature: 0.7,
        max_tokens: 500,
      }),
    });

    if (!response.ok) {
      throw new Error('Failed to generate text');
    }

    const data = await response.json();
    return data.text;
  }
}

// Usage
const client = new AIProxyClient('https://your-proxy-domain.com', 'your_client_api_key');
const result = await client.generateText('Hello, world!');
console.log(result);

Python Example

import requests

class AIProxyClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.api_key = api_key
    
    def generate_text(self, prompt):
        response = requests.post(
            f"{self.base_url}/api/ai/generate",
            headers={
                "X-API-Key": self.api_key,
                "Content-Type": "application/json"
            },
            json={
                "prompt": prompt,
                "model": "google/gemini-2.0-flash-exp:free",
                "temperature": 0.7,
                "max_tokens": 500
            }
        )
        response.raise_for_status()
        return response.json()["text"]

# Usage
client = AIProxyClient("https://your-proxy-domain.com", "your_client_api_key")
result = client.generate_text("Hello, world!")
print(result)

🔒 Security Best Practices

  1. Never expose your AI provider API keys - Keep them only on the backend
  2. Use HTTPS in production - Always use SSL/TLS certificates
  3. Rotate client API keys regularly - Generate new keys periodically
  4. Monitor usage - Check logs for suspicious activity
  5. Set appropriate rate limits - Protect against abuse
  6. Restrict CORS origins - Only allow trusted domains
  7. Use environment variables - Never commit secrets to version control

📊 Monitoring & Logging

Logs are stored in the logs/ directory:

  • error.log - Error level logs
  • combined.log - All logs

In production, consider integrating with:

  • ELK Stack (Elasticsearch, Logstash, Kibana)
  • Datadog
  • New Relic
  • CloudWatch (AWS)

🚒 Deployment

Deploy to VPS/Cloud

  1. Set up your server (Ubuntu/Debian recommended)
  2. Install Node.js 18+
  3. Clone repository
  4. Configure .env
  5. Use PM2 for process management:
npm install -g pm2
pm2 start src/server.js --name ai-proxy
pm2 save
pm2 startup
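
If you prefer a checked-in configuration over CLI flags, PM2 also accepts an ecosystem file. A minimal sketch (tune the instance count and memory limit to your server):

// ecosystem.config.js - start with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'ai-proxy',
      script: 'src/server.js',
      instances: 1,      // raise and switch exec_mode to 'cluster' on multi-core hosts
      exec_mode: 'fork',
      env: { NODE_ENV: 'production' },
      max_memory_restart: '300M', // restart the process if it grows past this
    },
  ],
};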

Deploy with Docker

# Build image
docker build -t ai-api-proxy .

# Run container
docker run -d -p 3000:3000 --env-file .env --name ai-proxy ai-api-proxy

# Or use Docker Compose
docker-compose up -d

Deploy to Cloud Platforms

  • Heroku: Use Procfile with web: node src/server.js
  • AWS ECS/EKS: Use provided Dockerfile
  • Google Cloud Run: Deploy container directly
  • DigitalOcean App Platform: Auto-detects Node.js
  • Railway: Connect git repository
  • Render: Deploy from GitHub

🧪 Testing

# Run tests
npm test

# Test health endpoint
curl http://localhost:3000/health

# Test with authentication
curl -X POST http://localhost:3000/api/ai/generate \
  -H "X-API-Key: your_client_key" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Test prompt"}'
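
The same checks can run as a Node script in CI. A sketch for Node 18+, assuming the server is already listening on port 3000:

// smoke-test.mjs - exit nonzero if the proxy is not healthy
const res = await fetch('http://localhost:3000/health');
const body = await res.json();

if (!res.ok || body.status !== 'healthy') {
  console.error('Health check failed:', res.status, body);
  process.exit(1);
}
console.log(`Proxy healthy, uptime ${body.uptime}s`);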

🤝 Contributing

This is a generic, reusable backend, and contributions are welcome:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

📝 License

MIT License - feel free to use it in your projects

🆘 Support

For issues, questions, or contributions:

  • Create an issue on GitHub
  • Check logs in logs/ directory
  • Review /api/info endpoint for documentation

🔄 Versioning

Current version: 1.0.0

See CHANGELOG.md for version history.

🎯 Roadmap

  • Add OpenAI Assistants API support
  • Implement usage tracking and analytics
  • Add webhook support for async operations
  • Create admin dashboard
  • Add Redis for rate limiting in distributed systems
  • Implement request caching
  • Add more AI providers (Anthropic Claude, etc.)
  • Create SDK packages for popular languages

Built with ❤️ for the AI community
