Works on AMD RX 7900 XT on Windows, but VRAM doesn't clear after each batch #17
10 comments · 25 replies
-
Yes, it doesn't clear any garbage; that's the prevalent problem with this version of SD. But it appears to be a DirectML issue. We all have to wait until Microsoft's developers improve VRAM usage.
-
I was able to get it to work at full speed consistently with "--opt-sub-quad-attention --no-half --precision full". Task Manager shows VRAM getting maxed out after several batches, but I don't receive errors and images seem to generate just fine; however, I will do further testing and reply here again.
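For anyone unsure where those flags go: in the webui they are normally set persistently in the launcher script rather than typed each time. A minimal sketch of the Windows launcher file, assuming the stock stable-diffusion-webui-directml layout (the file name and variable are the webui's convention; the flag combination is just what was reported to work here, not a guaranteed fix):

```bat
rem webui-user.bat -- edit the COMMANDLINE_ARGS line, then run webui-user.bat as usual.
rem These flags trade some speed/VRAM for stability on DirectML, per this thread.
set COMMANDLINE_ARGS=--opt-sub-quad-attention --no-half --precision full
```

On Linux the equivalent would be an `export COMMANDLINE_ARGS="..."` line in webui-user.sh.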
-
I'm in the same boat: same GPU. The command args you included here, along with @Miraihi's, have helped immensely and mean I no longer have to close and relaunch. I will say, though, that you quickly learn not to trust the progress bar. That said, after about the 4th or 5th image the GPU finally crashed, and the memory wouldn't flush.
-
OK, I'm still very new to all of this, so take my input with a grain of salt. But I figured it worth noting some changes I've made that have let me generate a plethora of images back to back without needing to restart. Again, I have no idea whether any of this matters, so I'm not excluding any changes made since my previous post. Pardon any "duh" moments and/or redundancy.
Restarted
-
Having the same issue.
-
Having the same issue here. I'm using an onboard Vega 8 with 8 GB of dedicated VRAM; after several rounds of image generation, an OOM error appears.
-
I'm running on an RX 6600 XT. I know it's not the same card, but it's AMD all the same. I never had VRAM issues on SD 1.4 and just recently decided to pull the DirectML SD 1.5, which would constantly run out of VRAM after one or two image generations. So far this has been working to keep my safetensors from going ape on my VRAM. After adding this fix to my
-
I want to get a 7900 XT, but I'm thinking I might have to move away from my AMD gaming setup and dive into Nvidia cards soon, for my sanity and to stop getting my VRAM overloaded.
-
I'm not holding my breath. I can't get a Linux install going, and my VRAM keeps getting clogged up on my 6600 XT no matter what I do, even with DirectML.
On Mon, Aug 21, 2023 at 1:42 PM, ***@***.***> wrote:
It seems there's a big push with the current AI craze to get ROCm on Windows. I remember them making an official statement that it would get a release "this fall"... Hopefully they're right. They've published two articles on the Radeon blog specifically about Stable Diffusion, so there is at least some attention at AMD to SD.
I'm trying to get Olive/ONNX working on my 7900 XTX to test, but I'm having tons of problems.
-
Running on a 7900 XT with 20 GB of VRAM.
Using no extra options (or using --medvram; it doesn't make a difference to the eventual outcome).
Similar to the symptoms in the linked issue, when running multiple batches (either by setting the batch count to more than 1, or by simply running individual batches one after another), VRAM usage continues to increase instead of being cleared after each batch. So although I have 20 GB of VRAM and the first few batches run quite quickly, generation eventually reaches a point where it slows down drastically (from ~5 iterations/second to ~1.3 seconds/iteration).
I'm guessing it's some kind of memory leak, but I don't know if it's related to the already solved issue in the original webui or not:
AUTOMATIC1111#1147
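For context on why a leak like this behaves the way the comments above describe: an allocation is only reclaimed once the last reference to it is gone, so a single stale reference to a previous batch's tensors (e.g. something cached by the UI) keeps all of its backing VRAM alive no matter how many cleanup passes run. A minimal stdlib-only sketch of that mechanic, where `FakeLatent` is a hypothetical stand-in for a large tensor (the real webui objects and any DirectML cache-clearing hooks are not shown here):

```python
import gc
import weakref

class FakeLatent:
    """Hypothetical stand-in for a batch's latent tensor held in VRAM."""
    pass

batch = FakeLatent()
watcher = weakref.ref(batch)   # lets us observe when the object is reclaimed

stale = batch                  # a lingering reference, e.g. cached by the UI
del batch
gc.collect()
print(watcher() is None)       # False: the stale reference pins the memory

del stale                      # drop the last reference
gc.collect()
print(watcher() is None)       # True: only now can the allocator reclaim it
```

This is why restarting the webui "fixes" it: the process exit drops every reference at once.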