
[Bug]: 1.7 RuntimeError: Torch is not able to use GPU #335

Closed
6 tasks
SunGreen777 opened this issue Dec 24, 2023 · 12 comments
Labels
question Further information is requested

Comments

@SunGreen777

SunGreen777 commented Dec 24, 2023

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I have two config files:

1. set COMMANDLINE_ARGS= --medvram --opt-sdp-no-mem-attention --api

2. set COMMANDLINE_ARGS=--medvram --no-half --precision full --always-batch-cond-uncond --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --upcast-sampling --skip-torch-cuda-test --use-cpu interrogate gfpgan scunet codeformer

With the first config I get the error shown in the screenshot; with the second, the video card is not used at all.
GPU: RX 570 8 GB

[screenshot: RuntimeError: Torch is not able to use GPU]

Steps to reproduce the problem

skip

What should have happened?

skip

What browsers do you use to access the UI?

No response

Sysinfo

skip

Console logs

skip

Additional information

skip

@Anders1974

Add --skip-torch-cuda-test to the arguments, as it says.

@Reizer88

If the error appeared after an update, this may help: #334

@lshqqytiger
Owner

DirectML is now optional. Not a fallback. Please add --use-directml to skip CUDA test.
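For example, a webui-user.bat along these lines (a sketch only, not a file from this thread; the flag set is copied from the reporter's first config, with --use-directml appended as the owner instructs):

```shell
@echo off
REM webui-user.bat -- illustrative sketch; keep your own flags, just append --use-directml
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --opt-sdp-no-mem-attention --api --use-directml

call webui.bat
```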

@SunGreen777
Author

Adding --use-directml works, thanks.

@ride5k

ride5k commented Dec 24, 2023

DirectML is now optional. Not a fallback. Please add --use-directml to skip CUDA test.

This worked for me as well. I had previously been using "--backend directml", which is no longer valid.

@lightgauge

When I use --use-directml it says "module 'torch' has no attribute 'dml'"

@RaiMelken

RaiMelken commented Dec 24, 2023

When I use --use-directml it says "module 'torch' has no attribute 'dml'"

I can confirm I received exactly the same error. This morning "webui.bat" ran the "git pull" command and updated me to version 1.7; after a series of errors and a complete reinstallation of Git and Python, which took me the whole day, I realized you had simply updated your SD. I believe the new version is better ^_^, but everything worked very well for me on 1.6, so I don't need the update; unfortunately, I hadn't removed the "git pull" command from autorun. Could you please provide access to previous versions, or a link to download 1.6? I don't know how to fix this; I only know it just worked before, sorry T____T
(If I'm wrong about something, you can correct me)

My PC settings if needed:
CPU: QuadCore Intel Core i5-6400
RAM: 32GB
Video: Radeon RX 5500 XT 8gb

@lightgauge

I would really like to go back to 1.6 if possible, as the current version is nothing but problems. I have to add --skip-torch-cuda-test --precision full --no-half to get it to work, which I didn't need before, and now the ETA for generating a single image is about an hour. I've tried a handful of suggestions that don't work, went as far as uninstalling everything and doing a clean install, and I still have problems.

@SUAN4423

After adding "torch-directml" to "requirements_versions.txt" and reinstalling venv, Stable Diffusion Web UI works correctly with the --use-directml option.
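A minimal POSIX-shell sketch of the steps above (an assumption of how SUAN4423 applied the fix, run from the webui root; on Windows cmd the venv removal would be `rmdir /s /q venv` instead):

```shell
# Sketch: add torch-directml to the pinned requirements so the rebuilt venv
# installs it. Idempotent: only appends if the line is not already present.
req=requirements_versions.txt
if [ -f "$req" ] && ! grep -q '^torch-directml' "$req"; then
  echo torch-directml >> "$req"
fi
# Then delete the venv so webui.bat recreates it on the next launch:
#   rm -rf venv           (POSIX shell)
#   rmdir /s /q venv      (Windows cmd)
```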

@wwbs4564

After adding "torch-directml" to "requirements_versions.txt" and reinstalling venv, Stable Diffusion Web UI works correctly with the --use-directml option.

DirectML is now optional. Not a fallback. Please add --use-directml to skip CUDA test.

Combine these two changes and it works perfectly fine.

@RaiMelken

During that night I found a copy of this SD, version 1.6.0, in the Wayback Machine.
If you can't run 1.7, you can try that ^_^
It's a last-resort option if all else fails.
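Before resorting to a Wayback Machine copy, rolling back with git may be simpler; this is a hypothetical sketch, not a method anyone in the thread confirmed. The tag name "v1.6.0" is an assumption; list the repository's real tags first with `git tag`.

```shell
# Sketch: pin an existing local checkout to a 1.6 release tag.
# Guarded so it is a no-op if the repo directory is not present.
repo=stable-diffusion-webui-directml
if [ -d "$repo/.git" ]; then
  git -C "$repo" fetch --tags
  git -C "$repo" checkout v1.6.0   # assumed tag name; verify with: git -C "$repo" tag
fi
# Also remove any "git pull" line from webui-user.bat so the next
# launch does not update past the pinned version again.
```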

@Drael64

Drael64 commented Jan 5, 2024

During that night I found a copy of this SD 1.6.0 in the "Wayback Machine" If you can't run 1.7, you can try this ^_^ This is a last resort option if all else fails

Like a zip file? How did you do that?

@lshqqytiger lshqqytiger added the question Further information is requested label Jan 14, 2024