cebtenzzre changed the title from "Certain version of MPT GGUF model not usable anymore" to "[Feature] Support old MPT GGUF conversions with duplicated output tensor" on May 9, 2024
Version 2.8.0 crashes when loading the model named above.
dlippold changed the title from "[Feature] Support old MPT GGUF conversions with duplicated output tensor" to "[Feature] Crash: Support old MPT GGUF conversions with duplicated output tensor" on Jun 29, 2024
Bug Report
The fine-tuned MPT model from https://huggingface.co/maddes8cht/mosaicml-mpt-7b-instruct-gguf/ in quantization Q4_1 was usable in release 2.7.2 but no longer works in 2.7.3 and later. In particular, it cannot be used in the current release.
When I try to load the model file I get the following error message:
The cause of the problem may be related to #2006
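For context, the "duplicated output tensor" in the title refers to old MPT GGUF conversions that stored a separate output-projection tensor even though MPT ties it to the token embeddings, so newer loaders reject the unexpected extra tensor. A minimal sketch of such a compatibility check (the tensor names follow llama.cpp's GGUF naming convention, and the function itself is a hypothetical illustration, not gpt4all's actual code):

```python
def has_legacy_duplicate_output(tensor_names):
    """Heuristic: old MPT conversions carried both the token-embedding
    tensor and a redundant output tensor with identical (tied) weights.
    Tensor name strings are assumptions based on llama.cpp conventions."""
    return "output.weight" in tensor_names and "token_embd.weight" in tensor_names

# A tolerant loader could detect this case and skip the redundant
# tensor instead of aborting with an "unexpected tensor" error.
```

The tensor names of a real GGUF file could be listed with the `gguf` Python package (`GGUFReader`) to confirm whether a given conversion is affected.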
Steps to Reproduce
Expected Behavior
The model file should be loaded.
Your Environment