Question about VRam usage compared to other forks #1588
Replies: 4 comments 6 replies
-
cc: @Disty0, as he wrote all of the IPEX code.
-
You can use IPEX instead of DirectML: https://www.technopat.net/sosyal/konu/using-stable-diffusion-webui-with-intel-arc-gpus.2593077/ Resolution: 2048x2048. Also, don't go above 2032x2032.
-
It's probably easier to use WSL.
-
For those who are interested, Vik from the Intel Insider Discord found that the VRAM leak in WSL2 is caused by unexpected behavior of torch.xpu.empty_cache().
I successfully generated 100 pictures without running out of VRAM or system memory.
Edit: simply commenting out torch.xpu.empty_cache() is sufficient.
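Rather than deleting the call entirely, the workaround above can be wrapped in a guard that only skips torch.xpu.empty_cache() when running under WSL2. This is a minimal sketch, not code from the repository; the WSL detection heuristic (checking the kernel release string for "microsoft") and the helper names are assumptions for illustration:

```python
import platform

def running_under_wsl(release=None):
    """Heuristic WSL check: WSL kernels report 'microsoft' in the release string."""
    if release is None:
        release = platform.uname().release
    return "microsoft" in release.lower()

def safe_empty_cache():
    """Call torch.xpu.empty_cache() only outside WSL2, where it reportedly leaks VRAM."""
    if running_under_wsl():
        # Skip: per this thread, empty_cache() under WSL2 leaks VRAM
        # instead of freeing it.
        return
    try:
        import torch
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing to free
```

This keeps the cache-flushing behavior intact on bare-metal Linux and Windows while avoiding the leak in the WSL2 case.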
-
I have been using this fork for quite some time now, which allows me to generate images at around 440x780 with 2x upscale. The downside is that it's rather slow due to the lack of optimization, and it's not getting any updates either, which can result in extensions breaking, etc.
With this version, however, I have to go down about a hundred pixels on both sides to avoid running out of memory. In both versions I have been using medvram and quad attention. I have also tried different processing methods in this version, but I still run out of VRAM when I use the same resolutions as in Aloereed's SD version.
For reference, I have an Arc A770 with 16 GB of VRAM.