Gemini API Pool

Overview

This project provides a lightweight, high-performance proxy server written in Rust. It exposes an OpenAI-compatible API endpoint (/v1/chat/completions) and intelligently routes requests to the Google Gemini API. It manages a pool of Gemini API keys, rotating them for each request to distribute the load and manage API quotas effectively.

The entire application is containerized with Docker for easy, one-click deployment.

PS: The entire application was generated by Claude Code except this line; feel free to use it.
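
For illustration, the per-request key rotation can be thought of as a round-robin pass over the key pool. The sketch below shows the idea in Python; it is a conceptual illustration only (the service itself is written in Rust), and the actual selection strategy may differ in detail.

import itertools
import threading

class KeyPool:
    """Round-robin pool: each request takes the next key in order."""
    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)
        self._lock = threading.Lock()

    def next_key(self):
        # Lock so concurrent requests each get a distinct key.
        with self._lock:
            return next(self._cycle)

pool = KeyPool(["your_key_1", "your_key_2", "your_key_3"])
print(pool.next_key())  # your_key_1
print(pool.next_key())  # your_key_2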

Features

  • OpenAI API Compatibility: Drop-in replacement for services that use the OpenAI Chat Completions API format.
  • API Key Management: Dynamic creation, editing, and deletion of API keys with usage tracking.
  • Web Management Interface: Modern, tech-styled web dashboard for managing API keys and monitoring usage.
  • Usage Analytics: Real-time tracking of requests, input/output tokens, and API key statistics.
  • Secure Authentication: JWT-based admin authentication with role-based access control.
  • API Key Rotation: Automatically rotates Gemini API keys from a predefined pool on each request.
  • Dynamic Model Selection: Uses the model field from the request payload to target different Gemini models (e.g., gemini-1.5-flash, gemini-1.5-pro).
  • High Performance: Built with Rust and Axum for asynchronous, fast, and reliable performance.
  • Easy Deployment: One-command deployment using Docker and a simple shell script.
  • Database Storage: SQLite database for persistent storage of API keys and usage logs.

Prerequisites

  • Docker installed on the host (used by the deploy.sh script)
  • One or more Google Gemini API keys

Getting Started

1. Clone the Repository

git clone https://github.com/cod3vil/gemini-pool.git
cd gemini-pool

2. Configuration

The application requires a .env file to store your Gemini API keys.

  1. Create the .env file by copying the provided example:

    cp gemini-pool/.env.example gemini-pool/.env
  2. Edit gemini-pool/.env and configure the required settings:

    # gemini-pool/.env
    
    # Put your Gemini API keys here, separated by commas.
    GEMINI_API_KEYS=your_key_1,your_key_2,your_key_3
    
    # Admin authentication for web management interface
    ADMIN_USERNAME=admin
    ADMIN_PASSWORD=your_secure_admin_password
    
    # JWT secret for admin session management (change this in production)
    JWT_SECRET=your_jwt_secret_change_in_production
    
    # Database URL (SQLite by default)
    DATABASE_URL=sqlite:./gemini_pool.db
    
    # The address to bind the server to (inside the container).
    # This must be 0.0.0.0:8080 to be accessible from the host via Docker.
    LISTEN_ADDR=0.0.0.0:8080

3. Deployment with Docker (Recommended)

The included deploy.sh script handles everything from building the Docker image to running the container.

  1. Make the script executable:

    chmod +x deploy.sh
  2. Run the script:

    • To run on the default port 8080:
      ./deploy.sh
    • To run on a custom port (e.g., 9000):
      ./deploy.sh 9000

The script will build the image, stop any old containers, and start a new one in the background.

  • To view logs: docker logs -f gemini-pool-container
  • To stop the service: docker stop gemini-pool-container

Web Management Interface

After deployment, you can access the web management interface to manage API keys and monitor usage:

Accessing the Dashboard

  1. Home Page: Navigate to http://127.0.0.1:8080/ - Bilingual (Chinese/English) feature overview with direct links to the admin panel
  2. Admin Interface: Access http://127.0.0.1:8080/admin - Management interface home
  3. Login Page: http://127.0.0.1:8080/admin/login.html
  4. Management Dashboard: http://127.0.0.1:8080/admin/management.html
  5. Use your admin credentials (from your .env file)

API Endpoints Structure

All admin API endpoints are organized under /admin/api/ (a scripted walkthrough follows the list):

  • Authentication:
    • POST /admin/api/auth/login - Admin login
    • GET /admin/api/auth/verify - Token verification
  • Dashboard: GET /admin/api/dashboard - Statistics
  • API Key Management:
    • GET /admin/api/api-keys - List API keys
    • POST /admin/api/api-keys - Create API key
    • GET /admin/api/api-keys/{id} - Get specific API key
    • PUT /admin/api/api-keys/{id} - Update API key
    • DELETE /admin/api/api-keys/{id} - Delete API key
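
The snippet below sketches how these endpoints fit together, using Python's requests library. The JSON field names ("username", "password", "token", "name") are assumptions based on typical login and create payloads; check the requests the web dashboard sends if they differ.

import requests

BASE = "http://127.0.0.1:8080"

# 1. Log in with the admin credentials from your .env file.
login = requests.post(f"{BASE}/admin/api/auth/login",
                      json={"username": "admin", "password": "your_secure_admin_password"})
login.raise_for_status()
token = login.json()["token"]  # assumed response field
headers = {"Authorization": f"Bearer {token}"}

# 2. Create a new client API key.
print(requests.post(f"{BASE}/admin/api/api-keys", headers=headers,
                    json={"name": "my-app"}).json())

# 3. List keys and fetch dashboard statistics.
print(requests.get(f"{BASE}/admin/api/api-keys", headers=headers).json())
print(requests.get(f"{BASE}/admin/api/dashboard", headers=headers).json())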

Features

  • 🎨 Modern Tech-Styled Interface: Cyberpunk-inspired design with matrix rain background effects
  • 🌍 Bilingual Support: Complete Chinese/English interface with persistent language preferences
  • 📊 Real-time Dashboard: View total API keys, requests, tokens, and active keys
  • 🔑 API Key Management:
    • Create new API keys (auto-generated or custom)
    • Edit key names and toggle active status
    • Delete unused keys
    • View detailed usage statistics per key
  • 📈 Usage Analytics: Track input/output tokens and request counts
  • 🔒 Secure Authentication: JWT-based session management
  • 📱 Responsive Design: Works on desktop and mobile devices

Client API Key Authentication

When using the API endpoints (/v1/chat/completions, /v1/models), clients must include an API key created through the web interface:

curl -X POST http://127.0.0.1:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_client_api_key" \
-d '{...}'

API Usage

Send a POST request to the /v1/chat/completions endpoint. The request body should be in the OpenAI Chat Completions format.

Example Request

Here is an example using curl to interact with the service. You can specify any supported Gemini model in the model field.

curl -X POST http://127.0.0.1:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_client_api_key" \
-d '{
  "model": "gemini-1.5-flash",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello! What is the capital of China?"
    }
  ]
}'

Note: Replace your_client_api_key with an API key created through the web management interface.

The server will forward this request to the Gemini API using one of your keys and return a response in the OpenAI format.
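
Because the endpoint follows the OpenAI Chat Completions format, existing OpenAI client libraries should work when pointed at the proxy. A minimal sketch with the official Python SDK (openai >= 1.0), assuming the proxy is running locally on port 8080:

from openai import OpenAI

# Point the client at the proxy and authenticate with a client API key
# created through the web management interface.
client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="your_client_api_key",
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! What is the capital of China?"},
    ],
)
print(response.choices[0].message.content)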

List Models

To see the list of available models supported by this proxy, send a GET request to the /v1/models endpoint.

curl -H "Authorization: Bearer your_client_api_key" \
     http://127.0.0.1:8080/v1/models
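
The same can be done programmatically; a short sketch with the Python SDK configured as in the previous example:

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="your_client_api_key")
for model in client.models.list().data:
    print(model.id)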

Production Deployment with Nginx

For production environments, it's recommended to use nginx as a reverse proxy in front of the Gemini Pool service. This provides additional security, SSL termination, and load balancing capabilities.

Simple Nginx Configuration

Because all admin routes are grouped under /admin/api/* and the application serves its own index page, a very simple nginx configuration is sufficient:

server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;
    
    # SSL Configuration
    ssl_certificate /path/to/your/fullchain.pem;
    ssl_certificate_key /path/to/your/private.key;
    
    # Security Headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    
    # Proxy all requests to the application
    location ^~ / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # For long-running requests
        proxy_read_timeout 60s;
        client_max_body_size 10M;
    }
}
