Add support for specifying llama model files. #402

Closed
wants to merge 25 commits

Conversation

@Ellen7ions (Contributor) commented Aug 2, 2023

Features

In this PR, we have removed the original Enable Offline button and replaced it with an input box for the user to specify a different Llama model file.

You can put your models in the folder ~/.cache/gpt4all/ and enter the model name in the box.
Alternatively, you can put your llama model anywhere and enter its full path in the box.
(Screenshot: the new input box for specifying the Llama model file)
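One plausible way the server could resolve that input, given the two options above (the helper below is a hypothetical sketch for illustration, not the PR's actual code):

```python
from pathlib import Path

GPT4ALL_CACHE_DIR = Path.home() / ".cache" / "gpt4all"

def resolve_model_file(user_input: str) -> Path:
    """Resolve the box's input to a llama model file: either a full
    path to a model anywhere on disk, or a bare file name looked up
    in ~/.cache/gpt4all/."""
    candidate = Path(user_input).expanduser()
    if candidate.is_file():  # user entered a full path
        return candidate
    cached = GPT4ALL_CACHE_DIR / user_input
    if cached.is_file():  # user entered just the model name
        return cached
    raise FileNotFoundError(f"No model file found for {user_input!r}")
```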

Changes

  1. Add a new API for specifying the model file (see the sketch after this list).
     @api.post("/config/data/processor/conversation/offline", status_code=200)
  2. The original Enable Offline Chat API no longer enables offline mode; it only disables it.
     @api.post("/config/data/processor/conversation/enable_offline_chat", status_code=200)

In summary: this PR adds a new API (/config/data/processor/conversation/offline) to set a custom Llama 2 model. The original API no longer enables the offline model; it now only disables it.
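For example, a client could exercise the endpoints like this (the base URL, port, and payload shape are assumptions matching the sketch above):

```python
import requests

BASE = "http://localhost:42110"  # assumed local server address

# Point offline chat at a custom llama model file (name or full path).
requests.post(
    f"{BASE}/config/data/processor/conversation/offline",
    json={"model_file": "llama-2-7b-chat.ggmlv3.q4_0.bin"},
)

# Disable offline chat via the original endpoint.
requests.post(f"{BASE}/config/data/processor/conversation/enable_offline_chat")
```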
@sabaimran (Collaborator) commented:
Hey @Ellen7ions, thanks for raising the PR! It would be great if we could discuss some of the motivations and use cases in this discussion: #408

@sabaimran closed this Aug 4, 2023
@sabaimran (Collaborator) commented:
Closing this, as custom model support may require some more investigation and discussion over use cases. Discussed in #408.
