Error: invalid file magic when creating an xs model #2321
Comments
@jmorganca, is this a problem on my side only, or are IQ XS models not supported yet?
Someone managed to do it. Also, since it seems to be supported, will IQ3_XXS support be added? I have also been trying to do this but with no success; I even compiled versions 0.1.25 and 0.1.21 as stated in the post. Edit:
I'm unable to reproduce with the latest version of Ollama. I'm going to close this for now, but please reopen if the issue persists. My output using your provided Modelfile and the gguf model:
It seems only certain IQ quants are supported? Could the rest be supported, or could a list of the supported ones be posted prominently in the main README? It's kind of annoying to do all the work only to find out it's not supported.
I just stumbled upon that error. First I downloaded
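For context, the "invalid file magic" error is raised when a file's header does not begin with the GGUF magic bytes (`b"GGUF"`), which can also happen with an older GGML-format file or a truncated download. A quick way to rule that out is to inspect the first bytes yourself; a minimal sketch (the filename passed on the command line is hypothetical):

```python
import struct
import sys

def check_gguf(path: str) -> bool:
    """Return True if `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            # Old GGML/GGJT files or corrupt/truncated downloads fail here.
            print(f"{path}: bad magic {magic!r} (expected b'GGUF')")
            return False
        # The 4 bytes after the magic are the GGUF version (little-endian u32).
        (version,) = struct.unpack("<I", f.read(4))
        print(f"{path}: GGUF version {version}")
        return True

if __name__ == "__main__" and len(sys.argv) > 1:
    check_gguf(sys.argv[1])
```

If the magic bytes check out but Ollama still rejects the file, the quantization type inside the GGUF is the more likely culprit, as discussed above.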
Hi,
I tried to create a new model using this gguf file chat-67b-xs.gguf, but it didn't work and gave me this output.
I think XS models are not yet supported by Ollama, but the same file works fine with llama.cpp:
~/dev/llama.cpp/main --color --instruct -ngl 100 -m deepseek-chat-67b-xs.gguf
Modelfile
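The Modelfile itself is not shown in the scrape; a minimal one for this case would look something like the following (the `FROM` path is an assumption based on the filename mentioned above):

```
FROM ./deepseek-chat-67b-xs.gguf
```

It would then be built with `ollama create <model-name> -f Modelfile`, which is the step where the reporters above hit the "invalid file magic" error.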