
[Feature Request]: Select the request format based on model provider + model name, rather than relying on the model name alone #4804

Open
takestairs opened this issue May 31, 2024 · 3 comments
Labels: duplicate (This issue or pull request already exists), enhancement (New feature or request)

Comments

@takestairs

Problem Description

Consider the following scenario:
Through one-api, a gemini-pro model is exposed in the OpenAI request format, i.e. the request looks like this:

POST {{one_base}}/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{one_key}}

{
  "model": "gemini-pro",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "hello"
    }
  ],
  "stream": true
}

If you fill in one_base as a custom OpenAI endpoint, along with one_key and the custom model gemini-pro, the app still insists that Google must be configured as the model provider.

(screenshot: error prompt asking the user to configure Google as the model provider)

I believe there are many similar scenarios. In fact, when the API is relayed through one-api, this app, acting as a plain OpenAI-API client, could easily support multi-endpoint, multi-key polling. All that is required is that the outgoing request format is not determined by the model name alone, as the sketch below illustrates.
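A minimal sketch of the kind of model-name-based routing that causes this behavior (hypothetical function and provider names, not NextChat's actual code):

```ts
type Provider = "openai" | "google" | "azure";

// Hypothetical sketch: the provider (and thus the request format) is
// inferred from the model name alone, so a relayed "gemini-pro" is
// forced onto the Google format even though the endpoint actually
// speaks the OpenAI format.
function inferProviderFromModelName(model: string): Provider {
  if (model.startsWith("gemini")) return "google"; // the problematic shortcut
  return "openai";
}

console.log(inferProviderFromModelName("gemini-pro")); // "google", but one-api expects "openai"
```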

Solution Description

My proposed approach to this problem:

  1. Separate the custom-model settings, so that each model provider can be configured with its own custom models.
  2. Use provider + model name to uniquely identify the request format a model accepts.
  3. The existing vision (image-recognition) detection can still be keyed on the model name; the provider only determines the request format.
  4. From the user's selected "model name (provider)" pair, resolve both the model name and the request format (OpenAI/Google/Azure, etc.); a sketch of this resolution follows the list.
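A minimal sketch of step 4, assuming a hypothetical registry keyed by provider + model name (all identifiers here are illustrative, not NextChat's actual code):

```ts
type RequestFormat = "openai" | "google" | "azure";

interface ModelEntry {
  provider: string;
  model: string;
  format: RequestFormat;
}

// Custom models are registered per provider, so the same model name can
// exist under different providers with different request formats.
const registry: ModelEntry[] = [
  { provider: "OpenAI", model: "gemini-pro", format: "openai" }, // relayed via one-api
  { provider: "Google", model: "gemini-pro", format: "google" }, // direct Google endpoint
];

// The (provider, model) pair, not the model name alone, selects the format.
function resolveFormat(provider: string, model: string): RequestFormat {
  const entry = registry.find((e) => e.provider === provider && e.model === model);
  if (!entry) throw new Error(`unknown model: ${provider}/${model}`);
  return entry.format;
}

console.log(resolveFormat("OpenAI", "gemini-pro")); // prints "openai"; no Google config required
```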

Alternatives Considered

No response

Additional Context

No response

takestairs added the enhancement label May 31, 2024


GrayXu commented Jun 1, 2024

You can set an alias in one-api, and then set the display name back in next-web; a sketch follows.
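A sketch of this workaround, assuming one-api's per-channel model-mapping feature and using illustrative names (not verified configuration):

```ts
// Hypothetical sketch: one-api maps a client-facing alias to the real
// upstream model, so the client requests a name it already treats as
// OpenAI-format.
const oneApiChannelModelMapping: Record<string, string> = {
  "gemini-pro-proxy": "gemini-pro", // alias requested by the client -> upstream model
};

// NextChat would then add "gemini-pro-proxy" as a custom model on the
// OpenAI endpoint and could show "gemini-pro" as its display name.
```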


fred-bf added the duplicate label Jun 5, 2024