go-llm-proxy

Introduction

go-llm-proxy is a service designed to proxy OpenAI API requests. It receives OpenAI API requests from clients and forwards them to the official OpenAI servers. This project solely acts as a proxy and does not store any data.

Note: You must obtain your own OpenAI API key.
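
How it works, in essence: the proxy receives a request on a path such as /openai/v1/chat/completions, rewrites it to the corresponding path on api.openai.com, and streams the upstream response back to the client. The snippet below is a minimal sketch of that idea using Go's standard-library reverse proxy; it is illustrative only and is not the project's actual implementation (the listen port 8888 matches the Test example below).

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Upstream OpenAI API server.
	target, err := url.Parse("https://api.openai.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := &httputil.ReverseProxy{
		// Rewrite incoming requests: /openai/v1/... becomes /v1/... upstream.
		// Client headers, including Authorization, are forwarded unchanged.
		Director: func(req *http.Request) {
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
			req.Host = target.Host
			req.URL.Path = strings.TrimPrefix(req.URL.Path, "/openai")
		},
		// Flush chunks immediately so "stream": true responses arrive as they are produced.
		FlushInterval: -1,
	}

	http.Handle("/openai/", proxy)
	log.Println("proxy listening on :8888")
	log.Fatal(http.ListenAndServe(":8888", nil))
}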

Quick Start

After deployment, the service is accessible at: https://llm.maxkb.ai

Example Request

Use curl to send a request:

curl --location --request POST 'https://llm.maxkb.ai/openai/v1/chat/completions' \
--header 'Authorization: Bearer YOUR_OPENAI_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
    "messages": [
        {
            "role": "system",
            "content": "Just say hi"
        }
    ],
    "model": "gpt-4o-mini",
    "stream": true
}'

Important: Replace YOUR_OPENAI_API_KEY with your actual API key.

Build

Compile the project using Go:

go build
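
If you deploy to a Linux server from another platform, you can cross-compile with the standard Go toolchain, for example:

GOOS=linux GOARCH=amd64 go build -o go-llm-proxy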

Run

After building, run the generated executable (the Test example below assumes it is listening on port 8888):

./go-llm-proxy

Test

Use curl to verify that the proxy correctly forwards the request and streams back the response data:

curl --location --request POST 'http://127.0.0.1:8888/openai/v1/chat/completions' \
--header 'Authorization: Bearer YOUR_OPENAI_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
    "messages": [
        {
            "role": "system",
            "content": "Just say hi"
        }
    ],
    "model": "gpt-3.5-turbo",
    "stream": true
}'

Example Response

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Hi"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" How"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" assist"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" you"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" today"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}

data:{"id":"chatcmpl-9P3fvvyk4IuCprCnvMytoKN8UtskC","object":"chat.completion.chunk","created":1715759355,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}

data:[DONE]
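
If you want to consume the stream programmatically rather than with curl, the sketch below shows one way to do it in Go: it sends the same request as the Test example, reads the SSE data: lines, and prints the content deltas as they arrive. The endpoint, model, and YOUR_OPENAI_API_KEY placeholder are the same assumptions as in the curl example above.

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// chunk models only the fields of a chat.completion.chunk that we print.
type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

func main() {
	body := []byte(`{
		"messages": [{"role": "system", "content": "Just say hi"}],
		"model": "gpt-3.5-turbo",
		"stream": true
	}`)

	req, err := http.NewRequest("POST", "http://127.0.0.1:8888/openai/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer YOUR_OPENAI_API_KEY")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Read the SSE stream line by line; each "data:" line carries one chunk.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if !strings.HasPrefix(line, "data:") {
			continue
		}
		payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
		if payload == "[DONE]" {
			break
		}
		var c chunk
		if err := json.Unmarshal([]byte(payload), &c); err != nil {
			continue
		}
		for _, choice := range c.Choices {
			fmt.Print(choice.Delta.Content)
		}
	}
	fmt.Println()
}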

Supported Models

The proxy currently supports OpenAI models.

Configure the specific model you want to use through the proxy as needed (for example, gpt-4o-mini or gpt-3.5-turbo as in the requests above).

Contribution

Community contributions are welcome! Whether you are adding new features or improving existing ones, feel free to submit a pull request to the GitHub repository.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Security Notice

Do not expose your OpenAI API key in public repositories or shared environments. Ensure that your API key is securely stored to prevent unauthorized use.
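
One simple way to keep the key out of scripts and shell history is to export it as an environment variable and reference it in requests, for example:

export OPENAI_API_KEY="your-key-here"
curl --location --request POST 'https://llm.maxkb.ai/openai/v1/chat/completions' \
--header "Authorization: Bearer $OPENAI_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{"messages": [{"role": "system", "content": "Just say hi"}], "model": "gpt-4o-mini", "stream": true}'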
