
[Bug]: txt2img uses only CPU and not GPU even when it's set to use GPU. #270

Closed
Blaz1kennBG opened this issue Sep 9, 2023 · 8 comments

Comments

@Blaz1kennBG

Blaz1kennBG commented Sep 9, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After following the whole guide, I tried to generate an image, but I noticed that the CPU was maxing out while the GPU was idle. I have only one argument set: set COMMANDLINE_ARGS=--no-half

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

txt2img should use the GPU instead of the CPU.

Sysinfo

What browsers do you use to access the UI?

Google Chrome

Console logs

.

Additional information

No errors whatsoever. Generation runs normally, just on the CPU.
sysinfo-2023-09-09-22-46.txt

@cheremo

cheremo commented Sep 9, 2023

Which GPU/vendor are you talking about?
If you're talking about an AMD GPU, try these command-line args (working on a 7800 XT):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention --opt-sub-quad-attention --disable-nan-check
set SAFETENSORS_FAST_GPU=1
git pull
call webui.bat

@lshqqytiger
Owner

Do you have dml under modules directory?
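A quick way to check from a command prompt in the webui folder (a sketch, assuming the repository's default layout; the modules\dml path matches the import paths seen later in this thread):

dir modules\dml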

@Blaz1kennBG
Author

Do you have dml under modules directory?

Yes, I do have dml.
(screenshot: modules directory showing the dml folder)

@Blaz1kennBG
Author

Blaz1kennBG commented Sep 10, 2023

Which GPU/vendor are you talking about? If you're talking about an AMD GPU, try these command-line args (working on a 7800 XT):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention --opt-sub-quad-attention --disable-nan-check
set SAFETENSORS_FAST_GPU=1
git pull
call webui.bat

AMD RX 6700 XT, I'll try that now.

EDIT: Does not work. The CPU is still at 100% while the GPU and GPU memory are barely touched. I did notice a brief 100% GPU usage spike, but only for a second.

@lshqqytiger
Owner

Try again with --backend directml --device-id 0. (0 is an example. Replace it if you have other cards installed)
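For reference, a minimal webui-user.bat along these lines might look as follows (a sketch, not a file from this thread; --no-half is carried over from the original report, and --device-id should match your card):

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--backend directml --device-id 0 --no-half
call webui.bat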

@MariyanEOD

Try again with --backend directml --device-id 0. (0 is an example. Replace it if you have other cards installed)

Arguments are as follows: --no-half --backend directml --device-id 0, however I get an error about the torch_directml package:

Traceback (most recent call last):
  File "D:\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "D:\stable-diffusion-webui-directml\launch.py", line 44, in main
    start()
  File "D:\stable-diffusion-webui-directml\modules\launch_utils.py", line 476, in start
    import webui
  File "D:\stable-diffusion-webui-directml\webui.py", line 13, in <module>
    initialize.imports()
  File "D:\stable-diffusion-webui-directml\modules\initialize.py", line 34, in imports
    shared_init.initialize()
  File "D:\stable-diffusion-webui-directml\modules\shared_init.py", line 25, in initialize
    dml.initialize()
  File "D:\stable-diffusion-webui-directml\modules\dml\__init__.py", line 40, in initialize
    from modules.dml.backend import DirectML # pylint: disable=ungrouped-imports
  File "D:\stable-diffusion-webui-directml\modules\dml\backend.py", line 4, in <module>
    import torch_directml # pylint: disable=import-error
ModuleNotFoundError: No module named 'torch_directml'

@MariyanEOD

MariyanEOD commented Sep 10, 2023

Edit: Activating the virtual environment and running pip install torch-directml worked. The error is gone and, supposedly, --backend directml / --device-id 0 did fix the problem.
However, there's now a memory problem:
RuntimeError: Could not allocate tensor with 9831040 bytes. There is not enough GPU video memory available!
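For anyone hitting the same ModuleNotFoundError, the commands were roughly the following (a sketch, assuming the webui's default venv folder; the install path is taken from the traceback above):

cd D:\stable-diffusion-webui-directml
venv\Scripts\activate
pip install torch-directml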

@Blaz1kennBG
Author

Apologies, MariyanEOD is my other account and I did not notice which one I was signed in with.

I did a new setup from scratch:

  1. Opened the venv and installed torch-directml.
  2. Used --backend directml --no-half --device-id 0.
    And voilà! Everything works now!
    The only remaining issue is a VRAM memory leak, which after about 4-5 images makes SD throw the not-enough-memory error. (A quick device check is sketched below.)
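For reference, a quick way to confirm that DirectML can see the GPU from inside the venv (a sketch; it assumes the torch_directml helpers device_count and device_name behave as documented, with the index corresponding to --device-id):

venv\Scripts\activate
python -c "import torch_directml; print(torch_directml.device_count(), torch_directml.device_name(0))"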
