[BUG] torch._C._LinAlgError: linalg.cholesky always raised #572
Labels: bug (Something isn't working)
Describe the bug
Hi @PanQiWei @TheBloke,
Thanks for all your contributions to quantization.
Recently I tried to quantize a llama-style model with 16B parameters and a 32k context length, but an exception was raised while quantizing the 43rd layer.
I tried to work around it by using more calibration examples and a larger damp_percent, but no matter what combination of the two parameters I use, torch._C._LinAlgError: linalg.cholesky is always raised at the 43rd layer. A sketch of my setup is below.
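For reference, here is roughly how I invoke AutoGPTQ, following the library's documented quantize API; the model path, calibration text, and the bits/group_size values are placeholders rather than my exact configuration:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "/path/to/llama-style-16b"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)

# Calibration examples; the real run uses many more, and swapping this
# dataset in and out is one of the things I varied.
examples = [tokenizer("placeholder calibration text")]

quantize_config = BaseQuantizeConfig(
    bits=4,            # placeholder value
    group_size=128,    # placeholder value
    damp_percent=0.1,  # raised from the 0.01 default; the other knob I varied
    desc_act=False,
)

model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)  # raises torch._C._LinAlgError at the 43rd layer
```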
Any help would be appreciated.
Hardware details
GPU: A800
RAM: 400 GB
Software version
autogptq: 0.7.0+cu118
torch: 2.0.1
cuda: 11.8
python: 3.10
Comments
I got the same error. Have you managed to solve it?
Replacing the calibration dataset sometimes seems to work, but that is not how I want to use it.
Hi @1649759610 @Kk1984up, this seems to be the same issue: IST-DASLab/gptq#8 (comment). Here is Elias's suggestion: IST-DASLab/gptq#8 (comment).
The same error here.
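For anyone else hitting this, a small self-contained illustration (synthetic matrices, not the actual model) of the failure mode that the suggestion above addresses: GPTQ Cholesky-factorizes a Hessian built from calibration activations, and if those activations are rank-deficient (e.g. a channel that is always zero on the calibration set), the Hessian is only positive semidefinite and the factorization raises exactly this error; diagonal damping restores positive definiteness.

```python
import torch

torch.manual_seed(0)

# Synthetic stand-in for one layer's calibration activations, with one
# "dead" channel that never activates, so the Hessian is singular.
X = torch.randn(8, 128)
X[3] = 0.0
H = 2 * X @ X.T  # GPTQ-style Hessian: positive semidefinite, not definite

try:
    torch.linalg.cholesky(H)
except torch.linalg.LinAlgError as err:
    print("cholesky failed:", err)

# Damping as GPTQ applies it: add damp_percent * mean(diag(H)) to the diagonal.
damp = 0.01 * torch.mean(torch.diagonal(H))
L = torch.linalg.cholesky(H + damp * torch.eye(H.shape[0]))
print("damped cholesky succeeded:", L.shape)
```

This is why both changing the calibration data (which can remove the rank deficiency) and raising damp_percent can make the error go away.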