Quantized Phi-3 example fails "cannot find llama.attention.head_count in metadata" #2154
I believe this is not a candle issue. I downloaded the model a few days ago and it runs without error, while my colleague gets the same error that @MoonKraken mentioned. Upon investigation, I found that my model file has a different SHA256 hash. Googling that hash, I found it on a branch that doesn't exist on the model page here, and the author seems to have continued pushing to that branch, suggesting it is the correct one. This might be a serious issue: it suggests the commit has somehow "switched" to another one (the wrong one) without the author knowing it.
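A quick way to check which file you actually downloaded is to dump the GGUF metadata and see whether the head-count key is present at all. Here is a minimal sketch using candle's GGUF reader; the local file path is a placeholder:

```rust
use std::fs::File;
use candle_core::quantized::gguf_file;

fn main() -> anyhow::Result<()> {
    // Placeholder path: point this at the .gguf file hf-hub cached locally.
    let mut file = File::open("Phi-3-mini-4k-instruct-q4.gguf")?;
    // Reads the GGUF header and metadata (not the tensor data).
    let content = gguf_file::Content::read(&mut file)?;
    // Print every metadata key so the expected `llama.attention.head_count`
    // (or a `phi3.*` equivalent) can be spotted, or confirmed missing.
    for key in content.metadata.keys() {
        println!("{key}");
    }
    Ok(())
}
```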
Thanks for looking into this; it does seem pretty odd for the hash to have changed like that. I've just modified the example code in #2156 so that it forces the use of the separate branch you mentioned, so hopefully that will fix it for your colleague and others.
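For reference, here is a minimal sketch of pinning a download to a specific revision with the hf-hub crate; the repo id, file name, and revision string below are placeholders rather than the exact values used in #2156:

```rust
use hf_hub::{api::sync::Api, Repo, RepoType};

fn main() -> anyhow::Result<()> {
    let api = Api::new()?;
    // Pin the repo to an explicit branch/commit instead of the default "main".
    let repo = api.repo(Repo::with_revision(
        "microsoft/Phi-3-mini-4k-instruct-gguf".to_string(), // placeholder repo id
        RepoType::Model,
        "some-branch-or-commit".to_string(), // placeholder revision
    ));
    // Downloads the requested file (or reuses the cached copy).
    let model_path = repo.get("Phi-3-mini-4k-instruct-q4.gguf")?;
    println!("model at {}", model_path.display());
    Ok(())
}
```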
Thanks, this is a great temporary solution. Is there any way we can also flag this to the Hugging Face team?
Yeah, not sure what they would think of this.
Hardware: M1 MacBook
This issue does not occur when using phi-2 or with any other example that I've tried. It also still occurs even with the `metal` feature enabled.
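For context on where the message in the title comes from: GGUF loaders typically look each expected metadata key up and bail when one is missing. A minimal sketch of that pattern, assuming a `Content` parsed as in the earlier snippet (the helper name here is illustrative):

```rust
use candle_core::quantized::gguf_file;

// Illustrative helper: fetch the attention head count, failing with a
// message like the one reported when the key is absent from the metadata.
fn head_count(content: &gguf_file::Content) -> anyhow::Result<u32> {
    let value = content
        .metadata
        .get("llama.attention.head_count")
        .ok_or_else(|| anyhow::anyhow!("cannot find llama.attention.head_count in metadata"))?;
    Ok(value.to_u32()?)
}
```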