I found an AI model on GitHub: https://github.com/baichuan-inc/Baichuan2

I used llama.cpp to convert the model to GGUF, but the results aren't ideal and the responses are all wrong. I used the following parameters: --ctx 4096 --outtype q8_0. I suspect this is something I did wrong, and I want to make the model usable. I would also like it to be available in GPT4All and added to the download list.
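For reference, the usual llama.cpp workflow is to first convert the Hugging Face checkpoint to a full-precision GGUF and then quantize it in a second step, rather than converting straight to q8_0. The sketch below assumes a local llama.cpp checkout and a hypothetical model directory name (`Baichuan2-7B-Chat`); the converter script's name has varied between llama.cpp versions (`convert.py`, later `convert-hf-to-gguf.py`), so check which one your checkout ships.

```shell
# Hypothetical paths -- adjust to your llama.cpp checkout and model directory.
# Step 1: convert the HF checkpoint to an f16 GGUF (lossless apart from dtype).
python llama.cpp/convert.py ./Baichuan2-7B-Chat \
  --outtype f16 \
  --outfile baichuan2-7b-chat-f16.gguf

# Step 2: quantize the f16 GGUF to Q4_0, the format GPT4All's
# download list typically uses.
./llama.cpp/quantize baichuan2-7b-chat-f16.gguf baichuan2-7b-chat-q4_0.gguf q4_0
```

If the quantized model still produces garbage, the problem is more likely the conversion step or the prompt template than the quantization type.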
This model should already work, including GPU support. Try downloading a Q4_0 GGUF, e.g. the one from here (this is a finetune, but it uses the same model architecture). Make sure to set the prompt template (including the blank line after it):