[Feature] Allow loading various local custom models #2303
This line of code filters all models from the backend by a prefix.
The point is: how do we filter the chat models out of the full model list?
Two possible directions in my opinion (or maybe both):
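The two directions hinted at here (and by the linked commits, which add a custom model name config) could be sketched as follows. This is a hypothetical illustration, not the project's actual code: `CHAT_PREFIXES`, `ModelRecord`, and `isChatModel` are made-up names, and the prefix list is an assumption.

```typescript
// Hypothetical sketch: decide whether a model from /models should show
// up in the chat model picker. Direction 1 keeps models matching a
// known chat prefix; direction 2 also keeps user-configured custom names.
const CHAT_PREFIXES = ["gpt-3.5", "gpt-4"];

interface ModelRecord {
  id: string;
}

function isChatModel(model: ModelRecord, extraModels: string[] = []): boolean {
  // Direction 1: prefix allowlist for well-known chat models.
  if (CHAT_PREFIXES.some((p) => model.id.startsWith(p))) return true;
  // Direction 2: user-supplied custom model names pass through as-is.
  return extraModels.includes(model.id);
}
```

Combining both checks lets local models like `Chinese-Alpaca-13b-plus-ggml-q5_1.bin` appear in the picker without loosening the prefix filter for everyone.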
feat: close #2303 add custom model name config
That was incredibly fast! Would highly recommend it to all my colleagues and friends who want to run their own models!
fix: #2303 should be able to select custom models
First, thanks for the awesome project. It has become my go-to UI solution for ChatGPT and all compatible local services due to its smooth interface and strong functionality.
Is your feature request related to a problem? Please describe.
When connecting to local OpenAI-compatible services like LocalAI or text-generation-webui, custom models like "Chinese-Alpaca-13b-plus-ggml-q5_1.bin" cannot be selected directly. I have to fake the model name as gpt-3.5-turbo, which makes switching models harder.
Describe the solution you'd like
Display the models returned by /models and allow users to select from them. A refresh button alongside the "models" drop-down list would be even better.
Describe alternatives you've considered
Allow users to select "custom" in the list and fill the "model" input manually.
Additional context
I have seen that a recent build contains functionality related to /models, but after trying it I found the listed models are still gpt-3.5-turbo etc., even with local services. Please correct me if the feature has already been included.
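The /models request described above can be sketched as follows, assuming a local OpenAI-compatible server. The `GET /v1/models` path and the `{ data: [{ id }] }` response shape follow the OpenAI API that LocalAI and text-generation-webui emulate; `BASE_URL` and the function names are assumptions for illustration.

```typescript
// Minimal sketch: list model ids from a local OpenAI-compatible service,
// e.g. to populate (or refresh) a "models" drop-down.
const BASE_URL = "http://localhost:8080"; // assumed local service address

interface ModelList {
  data: { id: string }[];
}

// Pull the model ids out of a /v1/models payload.
function extractModelIds(body: ModelList): string[] {
  return body.data.map((m) => m.id);
}

async function listLocalModels(baseUrl: string = BASE_URL): Promise<string[]> {
  const res = await fetch(`${baseUrl}/v1/models`);
  if (!res.ok) throw new Error(`GET /v1/models failed: ${res.status}`);
  return extractModelIds((await res.json()) as ModelList);
}
```

With a local service running, a custom model file like `Chinese-Alpaca-13b-plus-ggml-q5_1.bin` would come back as one of the ids, ready to feed into the picker instead of a faked gpt-3.5-turbo.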