[Linux] CUDA error upon trying to generate #38
Instead of renaming your local files, try changing this line (UnstableFusion/diffusionserver.py, line 34 at commit 91d40dd) to this:
Done, but no effect: the exact same error. And since Torch reserves memory but doesn't clear it properly after each crash, I get out-of-memory errors if I try a second time without restarting.
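As a stopgap between attempts, PyTorch's caching allocator can be asked to return its reserved blocks to the driver. A minimal, guarded sketch (the helper name is mine, not from the project):

```python
import importlib.util

def free_cuda_cache() -> bool:
    """Release PyTorch's cached CUDA memory, if torch and a GPU are present."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed
    import torch
    if not torch.cuda.is_available():
        return False  # no CUDA device to clear
    torch.cuda.empty_cache()  # hand cached blocks back to the driver
    return True

free_cuda_cache()
```

Note that `empty_cache()` only releases memory the allocator has cached, not tensors still referenced by a crashed request, so restarting the server process remains the reliable fix.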
I assume it is a memory error? Maybe try enabling attention slicing and see if it helps:
No change, though it got to the start of generation faster. I monitored CPU, memory, GPU, and GPU memory via both htop and nvtop; nothing maxed out, so I presume it is not a memory issue.
What is your CUDA version?
NVCC:
PyTorch doesn't support that CUDA version.
Downgraded to:
Same error occurs. I can try downgrading further if required, but it is already 11.6.
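The mismatch being probed in this exchange is between the CUDA toolkit that nvcc reports and the CUDA build PyTorch ships with (`torch.version.cuda`). A hypothetical comparison helper, assuming major.minor version strings, just to formalize the check:

```python
def cuda_versions_match(torch_cuda: str, system_cuda: str) -> bool:
    """Compare CUDA versions on major.minor only, e.g. "11.6" vs "11.6.2"."""
    def parse(v: str) -> tuple:
        parts = v.split(".")
        return int(parts[0]), int(parts[1])  # ignore any patch level
    return parse(torch_cuda) == parse(system_cuda)

print(cuda_versions_match("11.6", "11.6.2"))  # True: patch level ignored
print(cuda_versions_match("11.7", "11.6"))    # False: builds differ
```

Worth noting: pip and conda builds of PyTorch bundle their own CUDA runtime, so in practice the installed NVIDIA driver supporting that runtime matters more than the system nvcc version matching it exactly.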
OS: Arch Linux rolling
GPU: GTX 1660 SUPER
Driver: nvidia-520.56.06
CUDA: cuda-tools installed
Whenever I go to generate, it crashes with a CUDA error, as shown below. I have all the listed dependencies plus a fair few others, since it also pointed out that I didn't have them. I'm using a local clone of v1.4 of the diffusion model renamed to v1.5, because the program only accepts a v1.5 folder despite v1.5 not being publicly available (as far as I can tell), and the HTTPX request fails when I try using my access key.
See the terminal output below (ignore the backslashes in the last line; they interfered with the code block):