
Quantization ok but check_tensor_dims: tensor 'output_norm.weight' #7423

Closed · 0wwafa opened this issue May 21, 2024 · 1 comment

Comments

0wwafa commented May 21, 2024

Hello,

I’ve encountered an issue when attempting to load the quantized version of the Meta-Llama-3-8B-Instruct model. After applying quantization to Q8, I’m facing a loading error that seems to be related to tensor dimensions.

Error Message:

llama_model_load: error loading model: check_tensor_dims: tensor 'output_norm.weight' not found

This error suggests that the ‘output_norm.weight’ tensor is missing or not recognized. I’ve double-checked the quantization steps and the issue persists.

Any assistance in resolving this would be greatly appreciated.

All I need is a way to quantize that model in different ways so I can assess the degradation.

Thank you!
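For reference, producing several quantizations from one base model can be scripted. A minimal sketch, assuming an f16 GGUF conversion already exists and that llama.cpp's `quantize` binary is on the current path (the binary name and the `Meta-Llama-3-8B-Instruct` filename prefix here are assumptions; adjust for your build):

```shell
# Build a quantize command for each target type.
# The echo makes this a dry run that only prints the commands;
# remove the echo to actually execute them.
MODEL=Meta-Llama-3-8B-Instruct
for QTYPE in Q8_0 Q6_K Q4_K_M; do
  CMD="./quantize ${MODEL}-f16.gguf ${MODEL}-${QTYPE}.gguf ${QTYPE}"
  echo "$CMD"
done
```

Each output file can then be loaded separately to compare perplexity or response quality across quantization levels.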


0wwafa commented May 21, 2024

My bad. I thought of everything except that the model file was corrupted.
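A corrupted download like this can be caught before quantization by hashing the file and comparing against a published checksum. A minimal sketch (the filename is a placeholder; the reference hash would come from wherever the model was downloaded):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks,
    so multi-gigabyte model files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the checksum published alongside the download.
# if sha256sum("Meta-Llama-3-8B-Instruct-f16.gguf") != expected_hash:
#     raise RuntimeError("model file is corrupted; re-download it")
```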

0wwafa closed this as completed May 21, 2024