Commit
Blacklist and whitelist support (#9)
* Blacklist and whitelist support

* Update error message

* Update dockerfile

* fix pytests
KenyonY committed Apr 30, 2023
1 parent a621097 commit b80d41c
Showing 11 changed files with 146 additions and 102 deletions.
4 changes: 3 additions & 1 deletion .env
@@ -1,4 +1,6 @@
OPENAI_API_KEY=""
OPENAI_BASE_URL="https://api.openai.com"
LOG_CHAT=true
ROUTE_PREFIX=""
IP_WHITELIST=""
IP_BLACKLIST=""
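Both new variables take a space-separated list of IPs. A minimal sketch of the parsing behavior (the helper name `parse_ip_list` is illustrative; the actual parsing lives in `openai_forward/_base.py` later in this diff):

```python
def parse_ip_list(raw: str) -> list:
    # Split a space-separated IP list from an env var; empty string -> empty list.
    raw = raw.strip()
    return [ip.strip() for ip in raw.split(' ')] if raw else []

print(parse_ip_list("8.8.8.8 127.0.0.1"))  # ['8.8.8.8', '127.0.0.1']
print(parse_ip_list(""))                   # []
```

Note that entries are separated by single spaces, matching the `split(' ')` call in the diff.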
13 changes: 12 additions & 1 deletion Makefile
@@ -1,4 +1,4 @@
.PHONY: start build push run down test twine
.PHONY: start build push run down test twine log

image := "beidongjiedeguang/openai-forward:latest"
container := "openai-forward-container"
@@ -14,6 +14,17 @@ start:
--env "OPENAI_API_KEY=" \
$(image)

start-win:
docker run -itd \
--name $(container) \
--env "LOG_CHAT=true" \
--env "OPENAI_API_KEY=" \
-p 8000:8000 \
$(image)

exec:
docker exec -it $(container) bash

log:
docker logs -f $(container)

107 changes: 64 additions & 43 deletions README.md
@@ -7,7 +7,7 @@
</h1>
<p align="center">
<b> OpenAI API forwarding service <br/>
The fastest way to deploy openai api forward proxy </b>
The fastest way to deploy openai api forwarding </b>
</p>

<p align="center">
@@ -34,29 +34,30 @@
</p>
This project is designed to solve the problem of some regions being unable to access OpenAI directly. The service is deployed on a server that can reach the OpenAI API, and OpenAI requests are forwarded through it; in other words, a reverse proxy is set up.

Test access: https://caloi.top/v1/chat/completions is equivalent to https://api.openai.com/v1/chat/completions

# Table of Contents

- [Features](#Features)
- [Usage](#Usage)
- [Service Deployment](#Service-Deployment)
- [Service Usage](#Service-Usage)
- [Configuration](#Configuration)

# Features

- [x] Supports forwarding of all OpenAI interfaces
- [x] Supports request IP verification
- [x] Supports streaming forwarding
- [x] Supports default API key
- [x] pip installation and deployment
- [x] Docker deployment
- [x] Support for multiple worker processes
- [x] Support for specifying a forwarding route prefix
- [ ] Chat content security: Chat content streaming filtering

# Usage

> Here, the proxy address set up by the individual, https://caloi.top, is used as an example
### Using in a module

@@ -72,13 +73,15 @@ Test access: https://caloi.top/v1/chat/completions is equivalent to https://api.
```

**Python**

```diff
import openai
+ openai.api_base = "https://caloi.top"
openai.api_key = "sk-******"
```

### Image Generation (DALL-E):

```bash
curl --location 'https://caloi.top/v1/images/generations' \
--header 'Authorization: Bearer sk-******' \
@@ -90,53 +93,76 @@ curl --location 'https://caloi.top/v1/images/generations' \
}'
```

### [chatgpt-web](https://github.com/Chanzhaoyu/chatgpt-web)

Modify the `OPENAI_API_BASE_URL` in [Docker Compose](https://github.com/Chanzhaoyu/chatgpt-web#docker-compose) to the
address of the proxy service we set up:

```bash
OPENAI_API_BASE_URL: https://caloi.top
```

### [ChatGPT-Next-Web](https://github.com/Yidadaa/ChatGPT-Next-Web)

Replace `BASE_URL` in the docker startup command with the address of the proxy service we set up:

```bash
docker run -d -p 3000:3000 -e OPENAI_API_KEY="sk-******" -e CODE="<your password>" -e BASE_URL="caloi.top" yidadaa/chatgpt-next-web
```

# Service Deployment

Two service deployment methods are provided, choose one

## Use `pip` (recommended)

**Installation**

```bash
pip install openai-forward
```

**Run forwarding service**
The port number can be specified through `--port`, which defaults to `8000`, and the number of worker processes can be
specified through `--workers`, which defaults to `1`.

```bash
openai_forward run --port=9999 --workers=1
```

The service is now set up; to use it, replace `https://api.openai.com` with the service address `http://{ip}:{port}`.

Of course, OPENAI_API_KEY can also be passed in as an environment variable as the default API key, so that the client
does not need to pass in the Authorization in the header when requesting the relevant route.
Startup command with default API key:

```bash
OPENAI_API_KEY="sk-xxx" openai_forward run --port=9999 --workers=1
```

## Use Docker (recommended)
Note: If both the default API key and the API key passed in the request header exist, the API key in the request header
will override the default API key.
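The override rule can be sketched as follows (simplified from the header handling in `openai_forward/_base.py` in this diff; `pick_auth_header` is an illustrative name, not part of the package):

```python
from typing import Optional

def pick_auth_header(request_auth: Optional[str], default_api_key: str) -> dict:
    # A client-supplied Bearer key wins; otherwise fall back to the default key.
    if request_auth and request_auth.startswith("Bearer sk-"):
        return {"Authorization": request_auth}
    if default_api_key:
        return {"Authorization": "Bearer " + default_api_key}
    return {}

print(pick_auth_header("Bearer sk-client", "sk-default"))  # client key wins
print(pick_auth_header(None, "sk-default"))                # default key used
```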

## Use Docker

```bash
docker run --name="openai-forward" -d -p 9999:8000 beidongjiedeguang/openai-forward:latest
```

Host port 9999 is mapped to the container, so the service can be accessed through `http://{ip}:9999`.
Note: You can also pass in the environment variable OPENAI_API_KEY=sk-xxx as the default API key in the startup command.
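The IP lists introduced in this commit can be passed to the Docker deployment as ordinary environment variables; a sketch with illustrative addresses:

```shell
docker run --name="openai-forward" -d -p 9999:8000 \
  -e OPENAI_API_KEY="sk-******" \
  -e IP_WHITELIST="8.8.8.8 1.2.3.4" \
  -e IP_BLACKLIST="" \
  beidongjiedeguang/openai-forward:latest
```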

# Service Usage

Simply replace the OpenAI API address with the address of the service we set up, such as:

```bash
https://api.openai.com/v1/chat/completions
```

Replace with

```bash
http://{ip}:{port}/v1/chat/completions
```
@@ -159,10 +185,5 @@ refer to the `.env` file in the project root directory
| OPENAI_BASE_URL | Forwarding base URL | `https://api.openai.com` |
| LOG_CHAT | Whether to log chat content | `true` |
| ROUTE_PREFIX | Route prefix | None |

**TODO**

| Environment Variable | Description | Default Value |
|-----------------|------------|:------------------------:|
| IP_WHITELIST | IP whitelist | None |
| IP_BLACKLIST | IP blacklist | None |
| IP_BLACKLIST | IP blacklist | None |
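ROUTE_PREFIX is normalized before routes are registered: the code in `openai_forward/_base.py` in this diff strips a trailing slash and ensures a leading one. A standalone sketch of that normalization:

```python
def normalize_route_prefix(prefix: str) -> str:
    # Mirror _base.py: drop a trailing slash, ensure a leading slash.
    prefix = prefix.strip()
    if not prefix:
        return ""
    if prefix.endswith('/'):
        prefix = prefix[:-1]
    if not prefix.startswith('/'):
        prefix = '/' + prefix
    return prefix

print(normalize_route_prefix("openai/"))  # /openai
print(normalize_route_prefix("/v1"))      # /v1
```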
13 changes: 4 additions & 9 deletions README_ZH.md
@@ -7,7 +7,7 @@
</h1>
<p align="center">
<b> OpenAI API forwarding service <br/>
The fastest way to deploy openai api forward proxy </b>
The fastest way to deploy openai api forwarding </b>
</p>

[//]: # ( <a href="https://github.com/beidongjiedeguang">)
@@ -77,7 +77,6 @@ on a server that can access the OpenAI API; OpenAI requests are forwarded through the service, i.e. a reverse proxy is set up
- [x] Docker deployment
- [x] Support for multi-process forwarding
- [x] Support for specifying a forwarding route prefix
- [ ] Chat content security: streaming filtering of chat content

# Usage

@@ -138,7 +137,7 @@ docker run -d -p 3000:3000 -e OPENAI_API_KEY="sk-xxx" -e CODE="<your password>"

Two deployment methods are provided; choose either one.

## pip
## pip (recommended)

**Installation**

@@ -164,7 +163,7 @@ OPENAI_API_KEY="sk-xxx" openai_forward run --port=9999 --workers=1

Note: If both a default api key and an api key in the request header exist, the api key in the request header overrides the default api key.

## Docker (recommended)
## Docker

```bash
docker run --name="openai-forward" -d -p 9999:8000 beidongjiedeguang/openai-forward:latest
@@ -205,10 +204,6 @@ http://{ip}:{port}/v1/chat/completions
| OPENAI_BASE_URL | Forwarding base URL | `https://api.openai.com` |
| LOG_CHAT | Whether to log chat content | `true` |
| ROUTE_PREFIX | Route prefix | |

**TODO**

| Environment Variable | Description | Default Value |
|-----------------|------------|:----------------------:|
| IP_WHITELIST | IP whitelist | |
| IP_BLACKLIST | IP blacklist | |

2 changes: 1 addition & 1 deletion docker/Dockerfile
@@ -1,4 +1,4 @@
FROM python:3.10-alpine
FROM beidongjiedeguang/ubuntu:22.04
LABEL maintainer="kunyuan"
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
61 changes: 33 additions & 28 deletions openai_forward/_base.py
@@ -1,4 +1,4 @@
from fastapi import Request
from fastapi import Request, HTTPException, status
from fastapi.responses import StreamingResponse
from loguru import logger
import httpx
@@ -8,32 +8,37 @@


class OpenaiBase:
default_api_key = os.environ.get("OPENAI_API_KEY", "")
base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com")
LOG_CHAT = os.environ.get("LOG_CHAT", "False").lower() == "true"
ROUTE_PREFIX = os.environ.get("ROUTE_PREFIX", "")
if ROUTE_PREFIX:
if ROUTE_PREFIX.endswith('/'):
ROUTE_PREFIX = ROUTE_PREFIX[:-1]
if not ROUTE_PREFIX.startswith('/'):
ROUTE_PREFIX = '/' + ROUTE_PREFIX
_default_api_key = os.environ.get("OPENAI_API_KEY", "").strip()
_base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com").strip()
_LOG_CHAT = os.environ.get("LOG_CHAT", "False").lower() == "true"
_ROUTE_PREFIX = os.environ.get("ROUTE_PREFIX", "").strip()
IP_WHITELIST = os.environ.get("IP_WHITELIST", "").strip()
IP_BLACKLIST = os.environ.get("IP_BLACKLIST", "").strip()
if IP_BLACKLIST:
IP_BLACKLIST = [i.strip() for i in IP_BLACKLIST.split(' ')]
else:
IP_BLACKLIST = []
if IP_WHITELIST:
IP_WHITELIST = [i.strip() for i in IP_WHITELIST.split(' ')]
else:
IP_WHITELIST = []
if _ROUTE_PREFIX:
if _ROUTE_PREFIX.endswith('/'):
_ROUTE_PREFIX = _ROUTE_PREFIX[:-1]
if not _ROUTE_PREFIX.startswith('/'):
_ROUTE_PREFIX = '/' + _ROUTE_PREFIX
stream_timeout = 20
timeout = 30
non_stream_timeout = 30
allow_ips = []
chatsaver = ChatSaver(save_interval=10)

def add_allowed_ip(self, ip: str):
if ip == "*":
...
else:
self.allow_ips.append(ip)

def validate_request_host(self, ip):
if ip == "*" or ip in self.allow_ips:
return True
else:
return False
if self.IP_WHITELIST and ip not in self.IP_WHITELIST:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail=f"Forbidden, ip={ip} not in whitelist!")
if self.IP_BLACKLIST and ip in self.IP_BLACKLIST:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail=f"Forbidden, ip={ip} in blacklist!")

@classmethod
def log_chat_completions(cls, bytes_: bytes):
@@ -49,27 +54,27 @@ async def aiter_bytes(cls, r: httpx.Response):
try:
cls.log_chat_completions(bytes_)
except Exception as e:
logger.warning(e)
logger.warning(f"small log chat error:\n{e=}")

@classmethod
async def _reverse_proxy(cls, request: Request):
client: httpx.AsyncClient = request.app.state.client
url_path = request.url.path
url_path = url_path[len(cls.ROUTE_PREFIX):]
url_path = url_path[len(cls._ROUTE_PREFIX):]
url = httpx.URL(path=url_path, query=request.url.query.encode('utf-8'))
headers = dict(request.headers)
auth = headers.pop("authorization", None)
if auth and str(auth).startswith("Bearer sk-"):
tmp_headers = {'Authorization': auth}
elif cls.default_api_key:
auth = "Bearer " + cls.default_api_key
elif cls._default_api_key:
auth = "Bearer " + cls._default_api_key
tmp_headers = {'Authorization': auth}
else:
tmp_headers = {}

headers.pop("host", None)
headers.update(tmp_headers)
if cls.LOG_CHAT:
if cls._LOG_CHAT:
try:
input_info = await request.json()
msgs = input_info['messages']
@@ -79,15 +84,15 @@ async def _reverse_proxy(cls, request: Request):
"messages": [{msg['role']: msg['content']} for msg in msgs],
})
except Exception as e:
logger.warning(e)
logger.warning(f"small log chat error:\n{request.client.host=}: {e}")
req = client.build_request(
request.method, url, headers=headers,
content=request.stream(),
timeout=cls.timeout,
)
r = await client.send(req, stream=True)

aiter_bytes = cls.aiter_bytes(r) if cls.LOG_CHAT else r.aiter_bytes()
aiter_bytes = cls.aiter_bytes(r) if cls._LOG_CHAT else r.aiter_bytes()
return StreamingResponse(
aiter_bytes,
status_code=r.status_code,
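The validation logic added in `validate_request_host` above admits a request only when it passes both lists. A minimal standalone sketch (no FastAPI; a plain boolean instead of raising `HTTPException`, and `is_ip_allowed` is an illustrative name):

```python
def is_ip_allowed(ip: str, whitelist: list, blacklist: list) -> bool:
    # If a whitelist is configured, the ip must be on it; a blacklisted ip is always rejected.
    if whitelist and ip not in whitelist:
        return False
    if blacklist and ip in blacklist:
        return False
    return True

print(is_ip_allowed("1.2.3.4", [], []))           # True: no lists configured
print(is_ip_allowed("1.2.3.4", ["8.8.8.8"], []))  # False: not on the whitelist
print(is_ip_allowed("1.2.3.4", [], ["1.2.3.4"]))  # False: blacklisted
```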
