
feat: support minimax ai model #1033

Merged: 4 commits, merged on Jun 9, 2024.
Changes from 2 commits
81 changes: 73 additions & 8 deletions plugins/wasm-go/extensions/ai-proxy/README.md
@@ -19,14 +19,14 @@ description: AI proxy plugin configuration reference

The configuration fields of `provider` are described below:

| Name           | Data Type       | Required | Default | Description                                                  |
| -------------- | --------------- | -------- | ------- | ------------------------------------------------------------ |
| `type`         | string          | Required | -       | Name of the AI service provider. Currently supported values: openai, azure, moonshot, qwen, zhipuai, baidu, minimax |
| `apiTokens`    | array of string | Required | -       | Tokens used for authentication when accessing the AI service. If multiple tokens are configured, the plugin picks one at random for each request. Some providers only support configuring a single token. |
| `timeout`      | number          | Optional | -       | Timeout for accessing the AI service, in milliseconds. Defaults to 120000, i.e. 2 minutes. |
| `modelMapping` | map of string   | Optional | -       | Model mapping table, used to map model names in requests to model names supported by the provider.<br/>A `"*"` key can be used to configure a generic fallback mapping. |
| `protocol`     | string          | Optional | -       | API contract exposed by the plugin. Currently supported values: openai (default, uses OpenAI's API contract) and original (uses the target provider's native API contract). |
| `context`      | object          | Optional | -       | Configures AI conversation context information.              |

The configuration fields of `context` are described below:

@@ -93,6 +93,10 @@ The `type` corresponding to Groq is `groq`. It has no provider-specific configuration fields.

The `type` corresponding to ERNIE Bot (文心一言) is `baidu`. It has no provider-specific configuration fields.

#### MiniMax

The `type` corresponding to MiniMax is `minimax`. It has no provider-specific configuration fields.

#### Anthropic Claude

The `type` corresponding to Anthropic Claude is `claude`. Its provider-specific configuration fields are as follows:
@@ -680,6 +684,67 @@ provider:
}
```

### Using the OpenAI protocol to proxy a MiniMax service

**Configuration**

```yaml
provider:
type: minimax
apiTokens:
- "YOUR_MINIMAX_API_TOKEN"
modelMapping:
'gpt-3': "abab6.5-chat"
'*': "abab6.5s-chat"
```
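The mapping above resolves a requested model name in two steps: an exact-match entry wins, otherwise the `"*"` entry is used as the fallback. A minimal, self-contained sketch of that lookup (a hypothetical simplification of the plugin's `getMappedModel` helper, not its real signature):

```go
package main

import "fmt"

// getMappedModelSketch mirrors the mapping behavior described above:
// an exact-name entry wins, and the "*" entry acts as a catch-all
// fallback; with neither present, the original name passes through.
func getMappedModelSketch(model string, mapping map[string]string) string {
	if mapped, ok := mapping[model]; ok {
		return mapped
	}
	if mapped, ok := mapping["*"]; ok {
		return mapped
	}
	return model
}

func main() {
	mapping := map[string]string{
		"gpt-3": "abab6.5-chat",
		"*":     "abab6.5s-chat",
	}
	fmt.Println(getMappedModelSketch("gpt-3", mapping))       // abab6.5-chat
	fmt.Println(getMappedModelSketch("gpt-4-turbo", mapping)) // abab6.5s-chat (falls back to "*")
}
```

This is why the request example below sends `gpt-4-turbo` yet the response reports `abab6.5s-chat`: the name has no exact entry, so the `"*"` fallback applies.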

**Request example**

```json
{
"model": "gpt-4-turbo",
"messages": [
{
"role": "user",
"content": "你好,你是谁?"
}
],
"stream": false
}
```

**Response example**

```json
{
"id": "02b2251f8c6c09d68c1743f07c72afd7",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "你好!我是MM智能助理,一款由MiniMax自研的大型语言模型。我可以帮助你解答问题,提供信息,进行对话等。有什么可以帮助你的吗?",
"role": "assistant"
}
}
],
"created": 1717760544,
"model": "abab6.5s-chat",
"object": "chat.completion",
"usage": {
"total_tokens": 106
},
"input_sensitive": false,
"output_sensitive": false,
"input_sensitive_type": 0,
"output_sensitive_type": 0,
"base_resp": {
"status_code": 0,
"status_msg": ""
}
}
```

## Complete configuration example

The following shows a complete plugin configuration example, using the OpenAI protocol to proxy the Groq service.
4 changes: 1 addition & 3 deletions plugins/wasm-go/extensions/ai-proxy/provider/baidu.go
@@ -334,7 +334,5 @@ func (b *baiduProvider) streamResponseBaidu2OpenAI(ctx wrapper.HttpContext, resp
}

func (b *baiduProvider) appendResponse(responseBuilder *strings.Builder, responseBody string) {
	responseBuilder.WriteString(fmt.Sprintf("%s %s\n\n", streamDataItemKey, responseBody))
}
97 changes: 97 additions & 0 deletions plugins/wasm-go/extensions/ai-proxy/provider/minimax.go
@@ -0,0 +1,97 @@
package provider

import (
	"errors"
	"fmt"

	"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-proxy/util"
	"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
	"github.com/higress-group/proxy-wasm-go-sdk/proxywasm"
	"github.com/higress-group/proxy-wasm-go-sdk/proxywasm/types"
)

// minimaxProvider is the provider for the MiniMax service.

const (
	minimaxDomain             = "api.minimax.chat"
	minimaxChatCompletionPath = "/v1/text/chatcompletion_v2"
)

type minimaxProviderInitializer struct {
}

func (m *minimaxProviderInitializer) ValidateConfig(config ProviderConfig) error {
	return nil
}

func (m *minimaxProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
	return &minimaxProvider{
		config:       config,
		contextCache: createContextCache(&config),
	}, nil
}

type minimaxProvider struct {
	config       ProviderConfig
	contextCache *contextCache
}

func (m *minimaxProvider) GetProviderType() string {
	return providerTypeMinimax
}

func (m *minimaxProvider) OnRequestHeaders(ctx wrapper.HttpContext, apiName ApiName, log wrapper.Log) (types.Action, error) {
	if apiName != ApiNameChatCompletion {
		return types.ActionContinue, errUnsupportedApiName
	}
	_ = util.OverwriteRequestPath(minimaxChatCompletionPath)
	_ = util.OverwriteRequestHost(minimaxDomain)
	_ = proxywasm.ReplaceHttpRequestHeader("Authorization", "Bearer "+m.config.GetRandomToken())
	_ = proxywasm.RemoveHttpRequestHeader("Content-Length")
	return types.ActionContinue, nil
}

func (m *minimaxProvider) OnRequestBody(ctx wrapper.HttpContext, apiName ApiName, body []byte, log wrapper.Log) (types.Action, error) {
	if apiName != ApiNameChatCompletion {
		return types.ActionContinue, errUnsupportedApiName
	}

	request := &chatCompletionRequest{}
	if err := decodeChatCompletionRequest(body, request); err != nil {
		return types.ActionContinue, err
	}

	// When the OpenAI protocol is used, apply the configured model mapping.
	if m.config.protocol == protocolOpenAI {
> **Collaborator:** Is this check correct? If the user configures a contract other than OpenAI's, then the request body above cannot be deserialized according to the OpenAI contract in the first place. Also, `protocol` refers to the contract, not the model.

> **Author:** The intent here is that model mapping is performed only when the OpenAI protocol is used. With the native protocol, the model passed in the request is used directly, without mapping.

> **Collaborator:** However, the definition of the `modelMapping` field does not say that it only takes effect under the OpenAI protocol. This logic may confuse users. A user who configures the Original protocol generally would not configure `modelMapping` anyway.

> **Author:** OK, I will adjust this.
		model := request.Model
		if model == "" {
			return types.ActionContinue, errors.New("missing model in chat completion request")
		}
		mappedModel := getMappedModel(model, m.config.modelMapping, log)
		if mappedModel == "" {
			return types.ActionContinue, errors.New("model becomes empty after applying the configured mapping")
		}
		request.Model = mappedModel
	}

	if m.contextCache == nil {
		return types.ActionContinue, replaceJsonRequestBody(request, log)
	}

	err := m.contextCache.GetContent(func(content string, err error) {
		defer func() {
			_ = proxywasm.ResumeHttpRequest()
		}()
		if err != nil {
			log.Errorf("failed to load context file: %v", err)
			_ = util.SendResponse(500, util.MimeTypeTextPlain, fmt.Sprintf("failed to load context file: %v", err))
			// Return so a failed context load does not fall through to
			// inserting an empty context and rewriting the body.
			return
		}
		insertContextMessage(request, content)
		if err := replaceJsonRequestBody(request, log); err != nil {
			_ = util.SendResponse(500, util.MimeTypeTextPlain, fmt.Sprintf("failed to replace request body: %v", err))
		}
	}, log)
	if err == nil {
		return types.ActionPause, nil
	}
	return types.ActionContinue, err
}
4 changes: 3 additions & 1 deletion plugins/wasm-go/extensions/ai-proxy/provider/provider.go
@@ -27,6 +27,7 @@ const (
providerTypeOllama = "ollama"
providerTypeBaidu = "baidu"
providerTypeHunyuan = "hunyuan"
providerTypeMinimax = "minimax"

protocolOpenAI = "openai"
protocolOriginal = "original"
@@ -73,6 +74,7 @@ var (
providerTypeOllama: &ollamaProviderInitializer{},
providerTypeBaidu: &baiduProviderInitializer{},
providerTypeHunyuan: &hunyuanProviderInitializer{},
providerTypeMinimax: &minimaxProviderInitializer{},
}
)
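The registration above follows the plugin's initializer-registry pattern: each provider type maps to an initializer, and creating a provider from configuration is a single map lookup plus a `CreateProvider` call. A simplified, self-contained sketch of the pattern (names and signatures here are illustrative stand-ins, not the plugin's real types):

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a trimmed-down stand-in for the plugin's Provider interface.
type Provider interface {
	GetProviderType() string
}

// initializer constructs a Provider for one service type.
type initializer interface {
	CreateProvider() (Provider, error)
}

type minimaxProvider struct{}

func (m *minimaxProvider) GetProviderType() string { return "minimax" }

type minimaxInitializer struct{}

func (i *minimaxInitializer) CreateProvider() (Provider, error) {
	return &minimaxProvider{}, nil
}

// initializers maps a configured `type` value to its initializer,
// mirroring the providerTypeMinimax registration above.
var initializers = map[string]initializer{
	"minimax": &minimaxInitializer{},
}

func createProvider(typ string) (Provider, error) {
	init, ok := initializers[typ]
	if !ok {
		return nil, errors.New("unknown provider type: " + typ)
	}
	return init.CreateProvider()
}

func main() {
	p, err := createProvider("minimax")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.GetProviderType()) // minimax
}
```

Adding a new provider then only touches two places: a new `providerType*` constant and one entry in the initializer map, which is exactly the shape of this PR's `provider.go` change.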

@@ -102,7 +104,7 @@ type ResponseBodyHandler interface {

type ProviderConfig struct {
// @Title zh-CN AI服务提供商
	// @Description zh-CN AI服务提供商类型,目前支持的取值为:"moonshot"、"qwen"、"openai"、"azure"、"baichuan"、"yi"、"zhipuai"、"ollama"、"baidu"、"minimax"
typ string `required:"true" yaml:"type" json:"type"`
// @Title zh-CN API Tokens
// @Description zh-CN 在请求AI服务时用于认证的API Token列表。不同的AI服务提供商可能有不同的名称。部分供应商只支持配置一个API Token(如Azure OpenAI)。