[Bug] I am running out of memory after generating several batches #258
Comments
For more info, I am using these arguments for my 8GB VRAM:
The option to disable grids is right there in settings.
I have an RTX 3090, and even with no previous iteration, a single batch of 896x896 results in CUDA out of memory on my side, while I didn't face this issue with other repos. If I use `set COMMANDLINE_ARGS=--opt-split-attention` with yesterday's build, it works but it's very slow... Not sure what to do now except going back to another repo.
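For anyone trying the flag above: it goes in `webui-user.bat` on Windows. A sketch of a low-VRAM configuration, assuming your build also has the repo's `--medvram`/`--lowvram` memory-saving modes (check your build's supported flags before relying on them):

```bat
rem Hedged example webui-user.bat fragment: trade speed for lower VRAM use.
rem --opt-split-attention is the flag discussed above; --medvram (or the more
rem aggressive --lowvram) are the repo's low-memory modes, if your build has them.
set COMMANDLINE_ARGS=--opt-split-attention --medvram
```

Both options slow generation down, which matches the "works but it's very slow" report above.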
This is a recurring problem that I've had both with this and hlky's repo. Generations work fine, but sometimes I get a 2.0 GB VRAM spike at the very end that causes OOM (and prevents the images from saving). I've tried disabling GFPGAN and grids but it doesn't make any difference, yet I'm sure it's one or both of those that's causing it, because I've never had this problem on basujindal's repo (which has neither GFPGAN nor grids). There's a definite memory leak or problem somewhere that hasn't been found yet.
I don't know how I missed seeing the option to disable grids. But I can confirm it did not make a difference in my original issue. I still run out of memory after disabling it.
@AUTOMATIC1111 Thanks for the great tools!
I can't set my resolution to anything above 576x576 without getting the CUDA out of memory error, and over time I have to set it back down to 512 again. Over time I also can't do batch counts and sizes over 1, though I can when I first start using it. This is on an RTX 3070 Ti 8GB.
I've tried your approach but I get: `stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 10, in`
@BrahRah Something is wrong with your attention.py; there is no line 10 with `from ldm.modules.attention import LinearAttention`, but model.py has this line.
This seems to have made a big difference, thanks! Seems much quicker now and I can do higher resolutions. |
Duplicate of #170
I noticed this issue on two different occasions.
If I do the same as #2^ but just one image at a time, I can create countless images without any issues.
Shouldn't the memory be freed up for the next batch each time? Meaning if I can produce one batch of 3 images, it should not run out of memory attempting the next batch right?
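Yes, in principle the allocator should be able to reuse that memory, but only once nothing still references the previous batch's tensors. A minimal pure-Python analogy of the pattern (`FakeTensor`, `run_batch`, and the sizes are hypothetical stand-ins; with real CUDA tensors the equivalent cleanup is `del batch; gc.collect(); torch.cuda.empty_cache()` in PyTorch):

```python
import gc
import weakref

class FakeTensor:
    """Stand-in for a GPU tensor; the real code would hold torch tensors."""
    def __init__(self, size_mb):
        self.size_mb = size_mb

def run_batch(n, size_mb):
    # Pretend each batch allocates n tensors of size_mb on the GPU.
    return [FakeTensor(size_mb) for _ in range(n)]

batch = run_batch(3, 896)
probe = weakref.ref(batch[0])
assert probe() is not None   # the "VRAM" is still held by our reference

# Dropping all references and collecting lets the allocator reuse the memory.
# With real CUDA tensors: del batch; gc.collect(); torch.cuda.empty_cache()
del batch
gc.collect()
assert probe() is None       # freed; the next batch can reuse it
```

If something in the pipeline (a grid buffer, a face-restoration model, a cached result) quietly keeps a reference across batches, memory use grows exactly the way described in this thread.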
What I am actually wondering is whether it needs memory to create the images in the "txt2img-grids" directory. I notice it's generating a grid of my batches combined. Maybe as that gets larger it needs more memory just for that? That might explain why I can generate countless amounts of lower-res images one at a time without issue, but not the highest-resolution images more than once. Or perhaps this is some other kind of bug.
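A rough back-of-envelope suggests the grid itself is probably not the VRAM culprit: if the grid is assembled as an uncompressed 8-bit RGB image (typical for PIL-based code, which usually runs on the CPU anyway), even a large grid is only tens of megabytes. A quick sketch of the arithmetic (the square-ish layout here is an assumption, not necessarily how the repo arranges its grids):

```python
import math

def grid_bytes(n_images, w, h, channels=3):
    """Approximate uncompressed size of a square-ish grid of n w*h images (8-bit)."""
    cols = math.ceil(math.sqrt(n_images))
    rows = math.ceil(n_images / cols)
    return (cols * w) * (rows * h) * channels

# A 9-image grid of 896x896 RGB tiles is only ~22 MB uncompressed,
# tiny relative to 8 GB of VRAM even if it were built on the GPU.
print(grid_bytes(9, 896, 896))  # -> 21676032
```

So a growing grid is unlikely to account for a 2.0 GB spike on its own; a lingering reference to model activations or a post-processing model is a more plausible explanation.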
Is there any way to disable generating "txt2img-grids" files? I haven't been able to find out what those are even for, and if it's using any more memory or resources I certainly don't want it generating those. Not to mention taking up storage space.
I would also like to point out I have no issues creating countless batches of multiple images using the original Stable Diffusion. It only seems to happen with this repo.