
Will webui support AMD graphics cards in the future? #287

Closed
thetwo222 opened this issue Sep 11, 2022 · 25 comments


@thetwo222

Now I regret not buying NVIDIA's graphics card…

@cryzed

cryzed commented Sep 11, 2022

It does support it, just replace:

pip install torch --extra-index-url https://download.pytorch.org/whl/cu113

with

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1

and start webui.py with --precision full --no-half.
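Taken together, the two steps above can be sketched as a dry-run snippet that only composes and prints the install and launch commands so they can be reviewed first (nothing here touches pip or the GPU; the variable names are made up):

```shell
# Dry-run sketch of the ROCm setup described above: compose the commands
# without executing them. The rocm5.1.1 wheel index is the one quoted above.
TORCH_INSTALL="pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1"
LAUNCH="python webui.py --precision full --no-half"
echo "install: $TORCH_INSTALL"
echo "launch : $LAUNCH"
```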

@thetwo222

It does support it, just replace [...] and start webui.py with --precision full --no-half.

Thank you for your answer!

Where can I find this line?
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113

@cryzed

cryzed commented Sep 12, 2022

Initially I found this hint here. There's a link to this site in this guide. However, ROCm is only available on Linux -- so on Windows you might have to run it in a Docker container (as explained in the first link).

@AUTOMATIC1111

out of my scope; if someone wants to add a section to readme please send a PR

@GreenLandisaLie
Copy link

Forget ROCm, only a few cards support it anyway, so calling it 'AMD support' is quite an overstatement.
The diffusers repo has added ONNX support. I haven't tested it yet, but it seems to be several times faster than PyTorch in CPU-only mode. It also supports AMD GPUs, but the optimization is lacking, and it doesn't have decent support for low-VRAM cards, although it uses RAM as shared memory when the GPU runs out of memory.
We won't get decent AMD support until the folks at Stability AI, in partnership with AMD, do something about it. In the meantime it would be great if we could add ONNX support to all the available pipelines. Currently, even in diffusers, ONNX only works for txt2img.

@cryzed

cryzed commented Sep 14, 2022

Works fine on my AMD GPU, but thanks for the advice.

@cryzed

cryzed commented Sep 14, 2022

Anyway, a short tutorial for those who want to run it on their AMD GPU (this should work on both Linux and Windows hosts, but I didn't test on Windows myself):

Pull the latest rocm/pytorch Docker image, start the image and attach to the container (taken from the rocm/pytorch documentation): docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $HOME/dockerx:/dockerx rocm/pytorch

Execute the following inside the container:

cd /dockerx
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip wheel
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' REQS_FILE='requirements.txt' python launch.py --precision full --no-half

Subsequent runs only require you to restart the container, attach to it again, and execute the following inside it. Find the container name with docker container ls --all, select the one matching the rocm/pytorch image, restart it with docker container restart <container id>, then attach with docker exec -it <container id> bash.

cd /dockerx/stable-diffusion-webui
# Optional: "git pull" to update the repository
source venv/bin/activate
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' REQS_FILE='requirements.txt' python launch.py --precision full --no-half

The /dockerx folder should be accessible in your home directory on Linux and Windows under the name dockerx.
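The restart/attach sequence above can be wrapped in a small helper. This is only a sketch, assuming exactly one container was ever created from the rocm/pytorch image; with DRY_RUN=1 it just prints the commands instead of running them:

```shell
# Hypothetical helper for restarting and re-attaching to the container.
# Assumes a single container created from the rocm/pytorch image.
restart_webui() {
  list_cmd="docker container ls --all --filter ancestor=rocm/pytorch --quiet"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print what would run instead of running it.
    echo "$list_cmd"
    echo "docker container restart <container id>"
    echo "docker exec -it <container id> bash"
  else
    id=$($list_cmd | head -n 1)
    docker container restart "$id"
    docker exec -it "$id" bash
  fi
}

DRY_RUN=1 restart_webui
```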

Works without issues for me, and performance on my RX 6900 XT is around that of an RTX 3090 (according to some graph I saw floating around).

Quoting from the rocm/pytorch Docker image documentation:

The docker images being hosted in this registry will run on gfx900 (Vega10-type GPU: MI25, Vega56, Vega64), gfx906 (Vega20-type GPU: MI50, MI60), gfx908 (MI100) and gfx90a (MI210/MI250/MI250x).

@cryzed

cryzed commented Sep 14, 2022

@AUTOMATIC1111 If you want to, I think you can just copy and paste these instructions into the README for AMD users.

@DidgetMidget

Can't we get this to work on Windows with pytorch-directml?

@seoeaa

seoeaa commented Sep 19, 2022

works great on my computer , radeon rx 6900 xt

[two screenshots showing the video card and system parameters]

@seoeaa

seoeaa commented Sep 20, 2022

Look carefully at the photo; it shows the parameters of the video card and computer.

@cryzed cryzed closed this as completed Sep 20, 2022
@cprivitere

cprivitere commented Oct 1, 2022

The Docker solution outlined above does NOT work on Windows; the GPU can't be passed through.

@skerit

skerit commented Oct 19, 2022

I can't get it to work on Linux, natively or in Docker.
Docker gives me this: "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"

When running natively it keeps complaining about NVIDIA, even though I followed the AMD wiki page:

Launching Web UI with arguments: --precision full --no-half
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled

@cprivitere

You need to supply an environment variable, as by default ROCm doesn't recognize consumer cards as compatible: export HSA_OVERRIDE_GFX_VERSION=10.3.0
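For context, the 10.3.0 value mirrors the gfx architecture ID of RDNA2 consumer cards (gfx1030). A small illustrative helper, assuming the conventional major/minor/stepping split of the gfx ID (the function name is hypothetical, not part of any real tool):

```python
def gfx_to_hsa_override(gfx: str) -> str:
    """Turn a gfx target such as 'gfx1030' into an HSA_OVERRIDE_GFX_VERSION
    value such as '10.3.0'. Minor and stepping are single hex digits."""
    body = gfx.removeprefix("gfx")           # e.g. "1030"
    major, minor, step = body[:-2], body[-2], body[-1]
    return f"{int(major)}.{int(minor, 16)}.{int(step, 16)}"

# RDNA2 consumer cards (e.g. RX 6900 XT) report gfx1030:
print(gfx_to_hsa_override("gfx1030"))  # -> 10.3.0
```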

@cprivitere

Also, you can drop the "--precision full --no-half" flags now; they're no longer needed, and removing them makes things run much faster.

@KiameV

KiameV commented Oct 23, 2022

Having issues getting it started too; it looks like it still wants to use CUDA:
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' HSA_OVERRIDE_GFX_VERSION=10.3.0 REQS_FILE='requirements.txt' python launch.py --skip-torch-cuda-test

Python 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
Commit hash: 6bd6154a92eb05c80d66df661a38f8b70cc13729
Installing requirements for Web UI
Launching Web UI with arguments:
/home/.../.local/lib/python3.10/site-packages/torch/cuda/__init__.py:83: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at  ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "/home/.../stable-diffusion-webui/launch.py", line 206, in <module>
    start_webui()
  File "/home/.../stable-diffusion-webui/launch.py", line 200, in start_webui
    import webui
  File "/home/.../stable-diffusion-webui/webui.py", line 12, in <module>
    from modules import devices, sd_samplers
  File "/home/.../stable-diffusion-webui/modules/sd_samplers.py", line 10, in <module>
    from modules import prompt_parser, devices, processing, images
  File "/home/.../stable-diffusion-webui/modules/processing.py", line 10, in <module>
    import cv2
  File "/home/.../.local/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/.../.local/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

Specifically: return torch._C._cuda_getDeviceCount() > 0

@Enferlain

[quotes cryzed's Docker tutorial from earlier in the thread]

I tried following this for Windows but it didn't work for me. I pulled rocm/pytorch:latest, but when I run

docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $HOME/dockerx:/dockerx rocm/pytorch

I get this. Any fix, or should I try a Linux VM?

Error response from daemon: error gathering device information while adding custom device "/dev/kfd": no such file or directory.

@averad

averad commented Oct 24, 2022

As of diffusers 0.6.0, the diffusers ONNX pipeline supports txt2img, img2img and inpainting for AMD cards.

Examples: https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674

Would it be possible to include the ONNX pipeline now that img2img and inpainting are working?

@cprivitere

Just so folks have proper expectations: the ONNX pipeline on AMD on Windows, while better than just running on your CPU, still takes 2-3 times as long as the ROCm pipeline on Linux.

@Enferlain

And it has less functionality.

@averad

averad commented Oct 24, 2022

It works out of the box without compiling ROCm for specific cards, installing Docker, or dual-booting.

It helps people like those in:
#1220

@Gamination

Error response from daemon: error gathering device information while adding custom device "/dev/kfd": no such file or directory.

doesn't work

@TheSkinnyRat

Error response from daemon: error gathering device information while adding custom device "/dev/kfd": no such file or directory.

doesn't work

I had the same issue; I fixed it by installing Docker Engine instead of Docker Desktop and running the command with sudo:

sudo docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $HOME/dockerx:/dockerx rocm/pytorch

Reference: What is the difference between Docker Desktop for Linux and Docker Engine.

@Xeroxxx

Xeroxxx commented Mar 18, 2023

I just want to add this for documentation purposes.

It works perfectly fine for me with the AMD MI25. However, you have to export TORCH_COMMAND beforehand; it does not work inline for me. I used the latest ROCm, 5.4.2.

export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.4.2"

The first txt2img may take 2-3 minutes; afterwards it works flawlessly.

If you don't export it beforehand, or export a wrong version, it will download the CUDA PyTorch libs and you get "gpu not found".
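Summed up as an environment fragment (a sketch of the setup described above, to be placed in the shell before launching; the exact wheel index is the one quoted in the comment):

```shell
# Environment fragment per the comment above (ROCm 5.4.2 wheels).
# Exporting matters: an inline assignment reportedly did not take effect,
# and the launcher then pulled the CUDA wheels instead.
export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.4.2"
```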

@giriss

giriss commented Mar 23, 2023

❤️
It works amazingly with my RX 6900 XT: around 6 it/s, which is super awesome! But NOT on Windows (which is also awesome, since I am a Linux fanboy). Follow AMD's guide on how to install the GPU driver for Linux; it's super simple.

PS: I am not using Docker; instead I'm running it natively with ROCm.

Following the official guide for ROCm on AMD's website:
https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.4.2/page/How_to_Install_ROCm.html

I am using ROCm 5.4.2 since PyTorch has support for it.
[screenshot of the supported PyTorch/ROCm versions]

Then in webui.sh replace 5.2 with 5.4.2.
[screenshot of the edited webui.sh line]

Once done, run webui.sh.

Hope it helps, guys!
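The webui.sh edit described above (replacing 5.2 with 5.4.2) could also be scripted. Here it is demonstrated on a stand-in file rather than the real webui.sh, and the exact line contents are assumed for illustration:

```shell
# Demonstrate the version bump on a stand-in copy of the relevant line;
# the real edit would target webui.sh in the repository root.
printf '%s\n' 'export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2"' > webui-line.tmp
sed -i 's|/whl/rocm5\.2|/whl/rocm5.4.2|' webui-line.tmp
cat webui-line.tmp
```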

Sashimimochi pushed a commit to Sashimimochi/stable-diffusion-webui that referenced this issue Apr 7, 2023