- Unified invocation of chat LLMs from different vendors (e.g. deepseek-r1)
- Compatible with the OpenAI `/chat/completions` API
- Multi-platform load balancing for model endpoints: the service can be configured with API addresses from multiple platforms
- Unified model names across platforms (e.g. deepseek-r1 is named `deepseek-r1-250120` on platform 1 and `DeepSeek-R1` on platform 2, etc.)
- The real platform API and account behind each request are completely transparent to the client
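Because the gateway speaks the OpenAI protocol, a client calls it exactly as it would call `/chat/completions` on OpenAI itself. A minimal sketch (the gateway URL and API key below are placeholders, not values from this project):

```python
import json
import urllib.request

# Placeholder gateway address and API key -- substitute your own deployment.
GATEWAY_URL = "https://your-gateway.example.com/v1/chat/completions"
API_KEY = "myapikey1"

def build_chat_request(model, messages):
    """Build a standard OpenAI-style /chat/completions request."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_chat_request("deepseek-r1", [{"role": "user", "content": "Hello"}])
# urllib.request.urlopen(req) would send it; the gateway then forwards the
# request to one of the real platforms configured for "deepseek-r1".
```

The client only ever sees the unified model name; which platform actually serves the request is decided inside the gateway.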
Software architecture: Docker + OpenResty + Node.js
- Install Docker on your server first (any standard Docker installation guide will do)
- Run OpenResty in Docker:

```shell
docker run -itd --restart=always --net=host \
  -v /data/docker/nginx/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
  -v /data/docker/nginx/logs:/usr/local/openresty/nginx/logs \
  -v /data:/data \
  --name gateway openresty/openresty
```
- Build the Node.js server image:

```shell
docker build -t uni-chat-model-api .
```
- Start a container from the Node.js server image:

```shell
docker run -d --restart=always -p 8882:8080 uni-chat-model-api
```
- Edit the nginx.conf file
- As a shortcut, all the Lua code is written directly in the conf file, which keeps it easy to read in one place
- Set `server_name` to your own site domain
- `allowed_tokens` can hold as many API keys as you need
```lua
local allowed_tokens = {
    ["Bearer myapikey1"] = true,
    ["Bearer myapikey2"] = true
}
```
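The check this table drives is a plain exact-match lookup on the incoming `Authorization` header. A Python model of the same logic (illustrative only, not the project's actual code):

```python
# Mirrors the Lua allowed_tokens table: the full "Bearer <key>" string is the key.
ALLOWED_TOKENS = {
    "Bearer myapikey1": True,
    "Bearer myapikey2": True,
}

def is_authorized(auth_header):
    """Return True only for an exact match on the whole Authorization header."""
    return ALLOWED_TOKENS.get(auth_header, False)
```

Note that the scheme prefix is part of the key, so a client must send `Authorization: Bearer myapikey1`, not the bare key.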
- In `dict`, each key is a model name you define; the value is a list of platform entries with their url, key, and related settings
- If a platform's model name differs from the one you defined, set the `model` property
- Configuring multiple platform entries for one model gives you load balancing across them; the examples below use Volcengine, SiliconFlow, Alibaba Cloud Bailian, and the OpenAI API
```lua
local dict = {
    ["deepseek-r1"] = {
        {url = "https://ark.cn-beijing.volces.com/api/v3", key = "xxx", model = "deepseek-r1-250120"},
        {url = "https://api.siliconflow.cn/v1", key = "sk-xxx", model = "deepseek-ai/DeepSeek-R1"}
    },
    ["deepseek-v3"] = {
        {url = "https://ark.cn-beijing.volces.com/api/v3", key = "xxx", model = "deepseek-v3-241226"},
        {url = "https://api.siliconflow.cn/v1", key = "sk-xxx", model = "deepseek-ai/DeepSeek-V3"}
    },
    ["Doubao-1.5-pro-32k"] = {
        {url = "https://ark.cn-beijing.volces.com/api/v3", key = "xxx", model = "ep-20250226144041-f58mg"},
    },
    ["qwen-plus"] = {
        {url = "https://dashscope.aliyuncs.com/compatible-mode/v1", key = "sk-xxx"},
    },
    ["qwen-max"] = {
        {url = "https://dashscope.aliyuncs.com/compatible-mode/v1", key = "sk-xxx"},
    },
    ["gpt-4o"] = {
        {url = "https://api.openai.com/v1", key = "sk-xxx"},
    },
    ["gpt-4o-mini"] = {
        {url = "https://api.openai.com/v1", key = "sk-xxx"},
    }
}
```
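The dispatch this table implies can be sketched as follows: look up the unified model name, pick one of its platform entries, and rewrite the request body with that platform's own model name before forwarding. This is a simplified Python model under those assumptions, not the project's actual Lua code:

```python
import random

# Same shape as the Lua dict: unified name -> list of platform endpoints.
DICT = {
    "deepseek-r1": [
        {"url": "https://ark.cn-beijing.volces.com/api/v3", "key": "xxx",
         "model": "deepseek-r1-250120"},
        {"url": "https://api.siliconflow.cn/v1", "key": "sk-xxx",
         "model": "deepseek-ai/DeepSeek-R1"},
    ],
    "qwen-plus": [
        {"url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
         "key": "sk-xxx"},
    ],
}

def route(request_body):
    """Pick one platform for the requested model and rewrite the body for it."""
    entries = DICT.get(request_body["model"])
    if not entries:
        raise ValueError("unknown model: " + request_body["model"])
    entry = random.choice(entries)  # naive load balancing across platforms
    body = dict(request_body)
    # Substitute the platform's own model name when it differs from ours;
    # entries without a "model" field keep the unified name unchanged.
    body["model"] = entry.get("model", request_body["model"])
    return entry["url"], entry["key"], body
```

Random choice is the simplest balancing policy; round-robin or weighted selection would slot in at the same point.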
- Finally, point Chatbox (or any OpenAI-compatible client) at your gateway
- Pick any configured model and start chatting
- Volcengine is currently running a promotion: scan the QR code to register an account and receive 3.75 million free tokens
- For questions, add me on WeChat: ishenglx (please state your purpose)