
Error: invalid file magic when trying to import a custom gguf model to ollama instance #4000

Closed
atb29 opened this issue Apr 28, 2024 · 1 comment
Labels: bug (Something isn't working)

Comments


atb29 commented Apr 28, 2024

What is the issue?

I got this error:

E:\phi3-mini-128k-gguf\model>ollama create phi-3-mini-128k -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
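
For reference, every valid GGUF file begins with the 4-byte magic GGUF followed by a little-endian uint32 version field, so the file header itself can be sanity-checked directly. A minimal sketch in Python (the filename is the one from the Modelfile below):

import struct

# Read the GGUF header: 4-byte magic, then a little-endian uint32 version.
with open("Phi-3-mini-128k-instruct.IQ4_XS.gguf", "rb") as f:
    magic = f.read(4)
    version = struct.unpack("<I", f.read(4))[0]

if magic == b"GGUF":
    print(f"valid GGUF header, version {version}")
else:
    print(f"not a GGUF file (magic={magic!r})")

If the header checks out, the error comes from what Ollama finds inside the file rather than from a corrupt download.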

Here is the Modelfile used:
FROM ./Phi-3-mini-128k-instruct.IQ4_XS.gguf

PARAMETER num_ctx 65536
PARAMETER num_keep 4
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER stop <|system|>
PARAMETER stop <|end|>
PARAMETER stop <|endoftext|>

TEMPLATE """
{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>
"""

OS

Windows

GPU

No response

CPU

No response

Ollama version

0.1.31

BruceMacD (Contributor) commented
Hi @atb29, sorry about this issue. The error occurs because you are trying to load Phi-3 with an IQ quantization, which Ollama doesn't support yet.

There are other open issues about adding IQ quantization support to Ollama, so I'm closing this one to keep the discussion organized around #3622. Please follow that issue for updates on when IQ support lands.
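
Until then, one possible workaround (a sketch, not an official recommendation) is to use a K-quant of the same model instead, either by downloading a Q4_K_M GGUF directly or by re-quantizing from a full-precision GGUF with llama.cpp's quantize tool; the f16 input filename here is hypothetical:

./quantize Phi-3-mini-128k-instruct.f16.gguf Phi-3-mini-128k-instruct.Q4_K_M.gguf Q4_K_M

Pointing the FROM line of the Modelfile at the resulting Q4_K_M file should avoid the invalid file magic error on this Ollama version.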
