
✨ feat: support Google / Zhipu / AWS Bedrock model providers #1173

Merged
merged 49 commits into main from feat/model-provider
Feb 5, 2024

Conversation

arvinxx
Contributor

@arvinxx arvinxx commented Jan 28, 2024

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 🔨 chore
  • ⚡️ perf
  • 📝 docs

🔀 Description of Change

Add support for multiple model providers. Phase one plans to cover:

  • Zhipu GLM-4
  • AWS Bedrock
  • Google Gemini

Goals for this phase:

  • Quickly support the three providers with the strongest business demand and user requests;
  • Build the first version of the data management model for multiple providers (frontend);
  • Extract a preliminary Agent Runtime invocation layer (see the sketch below);
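To make the Agent Runtime idea more concrete, here is a minimal sketch of what a provider-agnostic runtime layer could look like (the interface and class names are illustrative, not the actual lobe-chat API; only an OpenAI-backed implementation is shown):

```ts
// Illustrative sketch only; names such as LobeRuntimeAI are not the real lobe-chat API.
import OpenAI from 'openai';

export interface ChatStreamPayload {
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
  model: string;
  temperature?: number;
}

// Every provider (OpenAI, Zhipu, Bedrock, Google, ...) implements the same interface,
// so the API route only has to pick the right implementation at request time.
export interface LobeRuntimeAI {
  chat(payload: ChatStreamPayload): Promise<Response>;
}

export class LobeOpenAI implements LobeRuntimeAI {
  private client: OpenAI;

  constructor(apiKey: string, baseURL?: string) {
    this.client = new OpenAI({ apiKey, baseURL });
  }

  async chat(payload: ChatStreamPayload): Promise<Response> {
    const stream = await this.client.chat.completions.create({
      messages: payload.messages,
      model: payload.model,
      stream: true,
      temperature: payload.temperature,
    });
    // Hand the provider stream back to the route handler as a web Response.
    return new Response(stream.toReadableStream());
  }
}
```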

Changes:

📝 Additional Information

refs:

@arvinxx arvinxx marked this pull request as draft January 28, 2024 16:26

vercel bot commented Jan 28, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| lobe-chat | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Feb 5, 2024 4:26am |

@lobehubbot
Member

👍 @arvinxx

Thank you for raising your pull request and contributing to our community.
Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
If you encounter any problems, please feel free to connect with us.

@mushan0x0
Contributor

mushan0x0 commented Jan 28, 2024

I think this could perhaps be packaged into an openai-adapter npm package, since other projects might also use it, and then write an adapter for each model, similar to axios adapters
https://www.cnblogs.com/xumengxuan/p/13876906.html

Someone has already built an llm-adapter along these lines: https://github.com/fardjad/node-llmatic/blob/master/src/llm-adapter.ts#L71
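For illustration, an adapter package along those lines might register one adapter per provider behind a single OpenAI-shaped interface, much like axios swaps adapters under one request API (a rough sketch with made-up names, not an existing package):

```ts
// Rough sketch of an axios-style adapter registry for LLM providers; all names are hypothetical.
interface CompletionRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface CompletionResponse {
  choices: { message: { role: string; content: string } }[];
}

// An adapter translates the OpenAI-shaped request into a provider's own wire format and back.
type LLMAdapter = (req: CompletionRequest) => Promise<CompletionResponse>;

const adapters = new Map<string, LLMAdapter>();

export function registerAdapter(provider: string, adapter: LLMAdapter): void {
  adapters.set(provider, adapter);
}

// Callers always speak the OpenAI-shaped protocol; the registered adapter hides the differences.
export async function createCompletion(
  provider: string,
  req: CompletionRequest,
): Promise<CompletionResponse> {
  const adapter = adapters.get(provider);
  if (!adapter) throw new Error(`No adapter registered for provider: ${provider}`);
  return adapter(req);
}
```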

@arvinxx
Contributor Author

arvinxx commented Jan 29, 2024

> I think this could perhaps be packaged into an openai-adapter npm package, since other projects might also use it

Personally, I don't think that would add much value. For example, look at how the ai sdk documents its integrations with the various providers:

https://sdk.vercel.ai/docs/guides/providers/mistral

Each integration takes very little effort, and quite a few providers can even be used directly with the openai SDK.

If anything, wrapping everything up would lose flexibility.
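As a concrete example of that last point, a provider that exposes an OpenAI-compatible endpoint can often be used with the official openai SDK just by overriding the base URL (the endpoint, environment variable, and model id below are placeholders):

```ts
// Placeholder values throughout; this only illustrates reusing the OpenAI SDK via baseURL.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.PROVIDER_API_KEY, // placeholder env var
  baseURL: 'https://api.example-provider.com/v1', // placeholder OpenAI-compatible endpoint
});

const completion = await client.chat.completions.create({
  model: 'example-model', // placeholder model id
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);
```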


@ShinChven

I use one-api, which can convert most LLMs to the openai sdk protocol.


Contributor

@mushan0x0 mushan0x0 Jan 29, 2024


That way requests would still go through the server, and the local LLM path might have to be re-implemented again later. Building a pure frontend LLM adapter might be a better fit: adapt all models to the OpenAI protocol on the frontend.
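For illustration, a purely client-side adapter of that kind is mostly response mapping, e.g. converting a provider's reply into the OpenAI chat-completion shape in the browser (the provider-specific field names below are made up):

```ts
// Hypothetical provider response shape, made up for the example.
interface ProviderReply {
  output: { text: string };
}

// Subset of the OpenAI chat-completion response shape that the UI would consume.
interface OpenAIChatCompletion {
  choices: {
    index: number;
    message: { role: 'assistant'; content: string };
    finish_reason: 'stop';
  }[];
}

// Map the provider reply to the OpenAI protocol entirely on the frontend.
export function toOpenAIFormat(reply: ProviderReply): OpenAIChatCompletion {
  return {
    choices: [
      {
        index: 0,
        message: { role: 'assistant', content: reply.output.text },
        finish_reason: 'stop',
      },
    ],
  };
}
```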

Contributor

@mushan0x0 mushan0x0 Jan 29, 2024


A pure frontend implementation could also support a local proxy.

Contributor Author


I'm planning to do this in two steps: first, rework the existing backend implementation; second, implement the corresponding logic on the frontend (the common parts will be extracted into agent-runtime), so users can choose for themselves whether requests go through the frontend or the backend.

refs:

Since this PR doesn't plan to cover local LLMs for now, the plan for this phase is to finish reworking the server-side part first.
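A rough sketch of what letting users choose between the two paths could look like (names such as fetchOnClient and createClientRuntime are hypothetical, not the actual implementation):

```ts
// Illustrative only: route a chat request through the frontend or the backend.
interface ChatPayload {
  provider: string;
  model: string;
  messages: { role: string; content: string }[];
}

// Hypothetical client-side runtime factory; a real one would wrap each provider's SDK in the browser.
function createClientRuntime(provider: string) {
  return {
    chat: async (_payload: ChatPayload): Promise<Response> => {
      throw new Error(`client-side runtime for ${provider} is not implemented in this sketch`);
    },
  };
}

export async function sendChatMessage(payload: ChatPayload, fetchOnClient: boolean): Promise<Response> {
  if (fetchOnClient) {
    // Frontend path: call the provider directly from the browser with the user's own key.
    return createClientRuntime(payload.provider).chat(payload);
  }
  // Backend path: proxy through the app's API route, which holds the server-side keys.
  return fetch('/api/chat', { method: 'POST', body: JSON.stringify(payload) });
}
```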


codecov bot commented Jan 31, 2024

Codecov Report

Attention: Patch coverage is 65.39989% with 610 lines in your changes missing coverage. Please review.

Project coverage is 86.78%. Comparing base (8c349a1) to head (332bbf5).
Report is 1848 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/libs/agent-runtime/google/index.ts | 16.25% | 134 Missing ⚠️ |
| src/libs/agent-runtime/bedrock/index.ts | 17.16% | 111 Missing ⚠️ |
| src/libs/agent-runtime/zhipu/index.ts | 15.20% | 106 Missing ⚠️ |
| src/services/_auth.ts | 20.75% | 84 Missing ⚠️ |
| src/libs/agent-runtime/azureOpenai/index.ts | 22.22% | 56 Missing ⚠️ |
| src/libs/agent-runtime/openai/index.ts | 81.30% | 20 Missing ⚠️ |
| src/libs/agent-runtime/zhipu/authToken.ts | 13.63% | 19 Missing ⚠️ |
| src/libs/agent-runtime/utils/debugStream.ts | 5.55% | 17 Missing ⚠️ |
| src/app/api/errorResponse.ts | 62.79% | 16 Missing ⚠️ |
| src/libs/agent-runtime/utils/uriParser.ts | 6.25% | 15 Missing ⚠️ |
| ... and 7 more | | |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1173      +/-   ##
==========================================
- Coverage   91.29%   86.78%   -4.51%     
==========================================
  Files         184      211      +27     
  Lines        8858    10355    +1497     
  Branches     1070     1133      +63     
==========================================
+ Hits         8087     8987     +900     
- Misses        771     1368     +597     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@arvinxx arvinxx changed the title from "✨ feat: support more model providers" to "✨ feat: support Google / Zhipu /AWS Bedrock model providers" Feb 5, 2024
@arvinxx arvinxx changed the title from "✨ feat: support Google / Zhipu /AWS Bedrock model providers" to "✨ feat: support Google / Zhipu / AWS Bedrock model providers" Feb 5, 2024
@arvinxx arvinxx merged commit d5929f6 into main Feb 5, 2024
8 of 10 checks passed
@arvinxx arvinxx deleted the feat/model-provider branch February 5, 2024 04:59
@lobehubbot
Member

❤️ Great PR @arvinxx ❤️

The growth of the project is inseparable from user feedback and contributions; thank you for your contribution! If you are interested in the LobeHub developer community, please join our Discord and DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.

github-actions bot pushed a commit that referenced this pull request Feb 5, 2024
## [Version 0.123.0](v0.122.9...v0.123.0)
<sup>Released on **2024-02-05**</sup>

#### ✨ Features

- **misc**: Support Google / Zhipu / AWS Bedrock model providers.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's improved

* **misc**: Support Google / Zhipu / AWS Bedrock model providers, closes [#1173](#1173) ([d5929f6](d5929f6))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>
@lobehubbot
Member

🎉 This PR is included in version 0.123.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

github-actions bot pushed a commit to bentwnghk/lobe-chat that referenced this pull request Feb 5, 2024
## [Version&nbsp;1.5.0](v1.4.11...v1.5.0)
<sup>Released on **2024-02-05**</sup>

#### ✨ Features

- **misc**: Support Google / Zhipu / AWS Bedrock model providers.

#### 🐛 Bug Fixes

- **misc**: Fix rename.

#### 💄 Styles

- **settings**: Improve LLM connection checker style.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's improved

* **misc**: Support Google / Zhipu / AWS Bedrock model providers, closes [lobehub#1173](https://github.com/bentwnghk/lobe-chat/issues/1173) ([d5929f6](d5929f6))

#### What's fixed

* **misc**: Fix rename ([f6ecdff](f6ecdff))

#### Styles

* **settings**: Improve LLM connection checker style, closes [lobehub#1222](https://github.com/bentwnghk/lobe-chat/issues/1222) ([8c349a1](8c349a1))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>
@arunkumarakvr

Support is needed for the Google Gemini 1 and PaLM 2 models from Vertex AI Studio (which works like Azure OpenAI Service), not only the ones already added from Google AI Studio (which works like the OpenAI Platform).

miroshar-success added a commit to miroshar-success/OpenAI_Integraion_platform that referenced this pull request Apr 5, 2024
## [Version&nbsp;0.123.0](lobehub/lobe-chat@v0.122.9...v0.123.0)
<sup>Released on **2024-02-05**</sup>

#### ✨ Features

- **misc**: Support Google / Zhipu / AWS Bedrock model providers.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's improved

* **misc**: Support Google / Zhipu / AWS Bedrock model providers, closes [#1173](lobehub/lobe-chat#1173) ([d5929f6](lobehub/lobe-chat@d5929f6))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>