Doesn't work for rx 7800 at all #373
Comments
Should work fine on a 7800, but the version published yesterday is broken for everybody. Give it a day or two, I'm sure he'll publish a fix.
More detail?
I just installed webui-directml according to the guide and had several stumbling blocks:
So it seems that the Radeon GPU acceleration is disabled.
You should add
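One quick way to check whether DirectML actually sees the Radeon card is to query the torch-directml package directly from the webui's venv. This is a sketch under the assumption that torch-directml is installed in that venv; it is environment-dependent and will fail without a DirectML-capable GPU:

```shell
# Run from inside the activated venv of the webui install.
# Prints the number of DirectML devices and the name of the first one.
python -c "import torch_directml; print(torch_directml.device_count()); print(torch_directml.device_name(0))"
```

If this prints 0 devices or raises an ImportError, the webui will fall back to CPU, which matches the symptom described above.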
With --use-directml (and also with --use-directml --precision full --no-half) I get the same failure on webui-user.bat:

venv "F:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'F:\stable-diffusion-webui-directml\venv\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_shared.dll'
With "set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half" it starts up at least:

venv "F:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
To create a public link, set
To set up the DirectML webui properly (and without ONNX), do the following steps: open up a cmd, then delete the venv folder inside the stable-diffusion-webui folder. Then edit the webui-user.bat.
Then save and relaunch the webui-user.bat.
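The webui-user.bat edit described above usually comes down to setting COMMANDLINE_ARGS. A minimal sketch of what the file might look like for a DirectML setup; the exact flags from the original comment did not survive the page scrape, so the ones below are an assumption based on the rest of this thread, not the poster's actual file:

```bat
rem webui-user.bat - hypothetical example for an AMD/DirectML install
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --use-directml selects the DirectML backend instead of CUDA;
rem --medvram trades speed for lower peak VRAM usage.
set COMMANDLINE_ARGS=--use-directml --medvram

call webui.bat
```

After saving, relaunching webui-user.bat rebuilds the deleted venv with the new arguments.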
I did exactly this. There are way too many different how-tos for automatic1111 and AMD GPUs, and most of them did not work for me (clean setup). With this one here I am at least able to generate stuff, but not with my GPU :o Do I need the --medvram option if the card has 20GB VRAM? edit: maybe it works, but very slowly? I see about 1.6-1.8 it/s, but the one experimental installation I mentioned above was around 24 it/s :o
OMG THANK YOU, THIS ACTUALLY WORKED!! THANK YOU! @lshqqytiger - I can likely help with technical documentation & how-to guides in the future if you'd like assistance. I'm not a coding pro (I'm good at SQL, for what that's worth), but I make a ton of technical documentation at work for oil field workers. Oil field workers are generally not good with computers. At all. Often, they don't know there is a right & left click. Just wanted to offer that in case you need help. You're clearly talented on the technical side, but it seems the last few updates have resulted in tons of confusion & failed launches for users.
I'm not that good at documentation. I've invited you as a collaborator, so you can edit the wiki.
I did a full clean of the environment, including uninstalling all Python versions, reinstalled everything, incorporated @CS1o's guidance, and got the environment running with GPU (Radeon 7900 XTX, 24GB) acceleration. I've done fresh git pulls daily and I'm now bumping into out-of-memory errors when trying to use Hires. fix with 512x768 images:
--- Snip ---
Isn't 24GB VRAM enough for upscaling? And can't the model be extended to regular memory? I've got another 64GB of regular RAM to use as well.
I don't recommend hires with DirectML. Please use img2img. You can get a larger image using Ultimate Upscale and UltraSharp.
Sadly, trying to upscale in img2img gives the same OoM errors:
--- Snip ---
--- Snip ---
Hey, 12-24GB VRAM is enough for Hires. fix. The important parts are the settings you use and knowing the limit.

Important for AMD users: every time you get an out-of-GPU-memory (VRAM) error, you need to fully restart the webui so the stuck VRAM gets cleared. If you don't restart, you will likely run into that error again and again, no matter which settings you try.

There are two ways to get Hires. fix to work:

First method (recommended by me): for this you need an additional extension called Tiled Diffusion & Tiled VAE, or Multidiffusion (the name got changed). The following Hires. fix settings work on a 6700 XT (12GB) up to a 7900 XTX (24GB) with my settings. The important parts are the resolution and the hires steps. After that you can load the image into img2img and use the SD upscale script or the Ultimate Upscale extension to get a 4k image.

Second method: first of all, using --no-half in the webui-user.bat will increase the VRAM usage a lot. So if you run it without --no-half it should work, with these settings for sure.
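The remark about --no-half is the fp32-vs-fp16 trade-off: without half precision every tensor element takes 4 bytes instead of 2, so peak memory for the big image tensors roughly doubles. A back-of-the-envelope sketch with a hypothetical helper (pure illustration, not the webui's actual allocator logic):

```python
def image_tensor_bytes(width, height, channels=4, bytes_per_elem=4):
    """Rough size of one W x H x C tensor at the given element size."""
    return width * height * channels * bytes_per_elem

# A 2048x2048 upscale target with 4 channels:
fp32 = image_tensor_bytes(2048, 2048, bytes_per_elem=4)  # --no-half (full precision)
fp16 = image_tensor_bytes(2048, 2048, bytes_per_elem=2)  # half precision
print(fp32 // (1024 * 1024), "MiB vs", fp16 // (1024 * 1024), "MiB")  # 64 MiB vs 32 MiB
```

Intermediate activations during upscaling multiply this many times over, which is why dropping --no-half (or tiling the work, as in the first method) makes the difference between fitting in VRAM and an OoM error.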
I have an RX 7800 XT and it works with these parameters: iteration speed is around 3-4 it/s, not the ~20 another commenter mentioned. Couldn't make ONNX work.
Now I often get an error: "RuntimeError: Could not allocate tensor with 2684354560 bytes. There is not enough GPU video memory available!". I reinstalled it, but that didn't help. Whenever I try to generate an image, it eats all my VRAM and never clears it. The moment I press generate, it locks all 16 GB of my VRAM, and it stays that way even after generation is done.
I have a 7900 XTX and still use the --medvram command line. I would suggest doing that. Here's my command line if you want to copy:
Also suggest turning on Scaled Dot Product in Settings -> Optimizations!
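The commenter's exact command line did not survive the page scrape, but a --medvram DirectML setup along the lines discussed in this thread would look something like the following. This is an assumption for illustration, not their actual configuration:

```bat
rem Hypothetical COMMANDLINE_ARGS in webui-user.bat, combining the flags
rem recommended in this thread for AMD cards.
set COMMANDLINE_ARGS=--use-directml --medvram
```

The Scaled Dot Product optimization mentioned above is a toggle in the webui's settings UI rather than a command-line flag.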
Now, for some reason, it doesn't generate and shows this error every time I try to generate a 1024x512 image. It started after the latest update.
Thank you so much! This finally did it for me 🙏
Is there an existing issue for this?
What would your feature do ?
Can you update this repo to support the rx 7800?
Proposed workflow
I just followed the guide till the end and faced the same issue, where it says that I don't have a GPU.