Docker Nginx OpenAI API Cache Reverse Proxy

This project is a simple Dockerized nginx setup that acts as a cache for the OpenAI API.

The bundled nginx configuration is preconfigured for the OpenAI API.

Features:

  • Works with any client that lets you configure the API server address (the proxy acts as a reverse proxy)
  • Caches the responses of the supported endpoints. The cache key is built from the request URI and body (see the sketch below)
  • Returns an "X-Cache-Status" header indicating whether the response was served from the cache
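
Under the hood this maps to standard nginx caching directives. The snippet below is an illustrative sketch only, not the repository's actual nginx.conf (the exact key format and values there may differ):

proxy_cache_methods POST;                          # allow caching of POST responses
proxy_cache_key "$request_uri|$request_body";      # cache key from request URI and body
add_header X-Cache-Status $upstream_cache_status;  # HIT, MISS, EXPIRED, ...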

Supported endpoints:

  • POST /v1/chat/completions
  • POST /v1/completions
  • POST /v1/edits
  • POST /v1/embeddings
  • POST /v1/moderations
  • POST /v1/answers

Endpoints deprecated by OpenAI:

  • POST /v1/engines/*/chat/completions
  • POST /v1/engines/*/completions
  • POST /v1/engines/*/edits
  • POST /v1/engines/*/embeddings
  • POST /v1/engines/*/moderations
  • POST /v1/engines/*/answers

Getting Started

Prerequisites

  • Docker
  • Docker compose

Installation

  1. Clone the repository:
git clone https://github.com/gpt4thewin/docker-nginx-openai-api-cache.git
cd docker-nginx-openai-api-cache
  2. Start the container:
docker-compose up -d
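
You can check that the proxy container is up (and confirm the host port mapping, which the test below assumes is port 81) with:

docker-compose ps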
  3. Test the server:

Set your API key:

OPENAI_API_KEY="...."

Run the following command two or more times:

curl -s -o /dev/null -w "%{http_code}" http://localhost:81/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Hello there !"
    }
  ],
  "temperature": 0,
  "max_tokens": 228,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}'
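
To see the cache status in the response headers directly (rather than only in the logs), you can dump the headers with curl's -D flag and filter for X-Cache-Status; run it twice with an identical body and you should see MISS, then HIT:

curl -s -o /dev/null -D - http://localhost:81/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello there !"}]}' \
  | grep -i x-cache-status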
  4. Check the logs:
docker-compose logs

The last lines should look something like this:

openai-cache-proxy  | 172.28.0.1 - - [29/Feb/2024:19:59:49 +0000] "POST /v1/chat/completions HTTP/1.1" 200 494 "-" "curl/7.80.0" Cache: MISS
openai-cache-proxy  | 172.28.0.1 - - [29/Feb/2024:19:59:52 +0000] "POST /v1/chat/completions HTTP/1.1" 200 494 "-" "curl/7.80.0" Cache: HIT
  5. Stop the container:
docker-compose down

Usage

Set your client's API server address to http://localhost:81/v1. Once the container is running, you can use the OpenAI API through the cache by sending requests to the supported URIs.

Requests to supported URIs are served from the cache when possible and forwarded to the OpenAI API otherwise. Requests to unsupported URIs are forwarded normally, without caching.
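
For example, the official OpenAI client libraries can be pointed at the proxy through an environment variable. As an illustration (recent openai Python and Node clients read OPENAI_BASE_URL; older versions use a different variable, so check your client's documentation):

export OPENAI_BASE_URL="http://localhost:81/v1"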

Configuration

The cache is configured in nginx.conf. You can edit this file to change the cache settings or add additional URIs.
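
As an illustration of the kind of directives involved (the names, sizes, and durations here are assumptions; the real values live in the repository's nginx.conf):

proxy_cache_path /var/cache/nginx keys_zone=openai_cache:10m max_size=1g inactive=24h;
proxy_cache openai_cache;     # enable the cache zone for the proxied endpoints
proxy_cache_valid 200 24h;    # keep successful responses for 24 hours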

Contributing

Contributions are welcome! Please submit a pull request or open an issue if you encounter any problems or have suggestions for improvements.
