I’ve encountered an issue when attempting to load the quantized version of the Meta-Llama-3-8B-Instruct model. After applying quantization to Q8, I’m facing a loading error that seems to be related to tensor dimensions.
Error Message:
llama_model_load: error loading model: check_tensor_dims: tensor 'output_norm.weight' not found
This error suggests that the ‘output_norm.weight’ tensor is missing or not recognized. I’ve double-checked the quantization steps and the issue persists.
Any assistance in resolving this would be greatly appreciated.
All I need is a way to quantize that model at different levels so I can assess the degradation.
Thank you!
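For reference, a typical llama.cpp workflow is to convert the Hugging Face checkpoint to a full-precision GGUF first, then quantize that GGUF to several levels and compare them with the perplexity tool. The `check_tensor_dims: tensor 'output_norm.weight' not found` error often points to a broken or outdated conversion step rather than the quantization itself, so re-running the conversion with a current llama.cpp checkout is worth trying. The paths and filenames below are placeholders; adjust them to your setup:

```shell
# Assumed: a current llama.cpp checkout, built binaries, and the HF model
# downloaded locally. Paths and output names below are hypothetical.

# 1. Convert the HF checkpoint to a full-precision GGUF.
python convert_hf_to_gguf.py /models/Meta-Llama-3-8B-Instruct \
  --outfile llama-3-8b-instruct-f16.gguf --outtype f16

# 2. Quantize the F16 GGUF to several levels for comparison.
for q in Q8_0 Q6_K Q4_K_M Q2_K; do
  ./llama-quantize llama-3-8b-instruct-f16.gguf \
    "llama-3-8b-instruct-${q}.gguf" "${q}"
done

# 3. Measure degradation of each quant with perplexity on a held-out file.
./llama-perplexity -m llama-3-8b-instruct-Q4_K_M.gguf -f wiki.test.raw
```

Comparing the perplexity numbers across the quantization levels gives a rough picture of how much quality each level costs.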