[Bug] OLLAMA configuration mismatch #1351
Comments
👀 @QIN2DIM Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
@sjy It seems we need to add an OLLAMA_CUSTOM_MODELS environment variable.
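For concreteness, a minimal sketch of how such a variable might be parsed, assuming the same +add / -remove / id=displayName convention that CUSTOM_MODELS already follows; the variable itself, the name parseOllamaModelList, and the exact entry shape are hypothetical:

```ts
interface CustomModelEntry {
  id: string;
  displayName?: string;
  removed?: boolean; // true when the entry was prefixed with "-"
}

// Parse a comma-separated OLLAMA_CUSTOM_MODELS value such as
// "+yi:34b-chat,-llama2,mistral=Mistral 7B" (hypothetical variable).
const parseOllamaModelList = (value: string): CustomModelEntry[] =>
  value
    .split(',')
    .map((item) => item.trim())
    .filter(Boolean)
    .map((item) => {
      if (item.startsWith('-')) return { id: item.slice(1), removed: true };
      const raw = item.startsWith('+') ? item.slice(1) : item;
      const [id, displayName] = raw.split('=');
      return { id, displayName };
    });
```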
@arvinxx Perhaps, as in #1352, we could pull the model list from what ollama itself exposes. The upside is that the model_id for ModelProviderCard can be taken straight from the response, so there are no parameter-passing problems when calling a model. The inconvenient part is that the scanned model_id is a code-style name; displaying it on the frontend would probably need a separate display name to look good, and settings like logo, vision, and functionCall are even harder to derive. But considering that Ollama also lets developers pull open-source models from platforms like huggingface and build their own quantized models, naming gets wildly inconsistent, so being able to lean directly on Ollama's infrastructure would make stitching it into lobe-chat much easier. A sketch of this endpoint-based approach follows the list below.

ModelProviderCard: https://github.com/lobehub/lobe-chat/blob/main/src/config/modelProviders/ollama.ts
```yaml
chatModels:
  - displayName: Qwen Chat 70B
    functionCall: false
    hidden: true
    id: qwen:70b-chat
    tokens: 32768
    vision: false
  - displayName: Mistral
    functionCall: false
    id: mistral
    tokens: 4800
    vision: false
```
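For illustration, a rough sketch of that approach, assuming Ollama's standard GET /api/tags model-list endpoint; the function name, default URL, and the placeholder tokens value are mine, not lobe-chat's:

```ts
// Query the local Ollama daemon for its installed models and map them onto
// minimal chatModels entries (shape mirrors the ModelProviderCard fields above).
interface OllamaTag {
  name: string; // code-style id, e.g. "qwen:70b-chat"
}

const fetchOllamaChatModels = async (baseURL = 'http://localhost:11434') => {
  const res = await fetch(`${baseURL}/api/tags`);
  const { models } = (await res.json()) as { models: OllamaTag[] };

  return models.map((m) => ({
    // Raw ids double as display names here; nice names, logo, vision and
    // functionCall would still need manual curation, as noted above.
    displayName: m.name,
    functionCall: false,
    id: m.name,
    tokens: 4096, // placeholder: context length is not part of /api/tags
    vision: false,
  }));
};
```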
@sjy Isn't this pretty much what I was saying? Everyone is thinking along the same lines.
✅ @QIN2DIM This issue is closed. If you have any questions, feel free to comment and reply.
💻 Operating System
Ubuntu
📦 Environment
Docker
🌐 Browser
Firefox
🐛 Bug Description
The default OLLAMA model cards are displayed incorrectly.
The models shown by default are not actually deployed on the machine, so selecting one and starting a chat right away raises an error, for example when the llama2 model has not been downloaded. After adding a custom model that is already downloaded, such as yi:34b-chat, communication works normally. I tried setting the environment variable CUSTOM_MODELS: -llama2 in docker-compose.yaml; a sketch of what I mean follows.
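A minimal sketch of the compose entry, assuming the stock lobehub/lobe-chat image; the service name is from my own setup and may differ:

```yaml
services:
  lobe-chat:
    image: lobehub/lobe-chat
    environment:
      CUSTOM_MODELS: -llama2 # trying to hide the default llama2 card
```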
That setting, however, seems to have no effect on the ollama model cards.
🚦 Expected Behavior
No response
📷 Recurrence Steps
No response
📝 Additional Information
No response