Constant out of memory errors on 12GB rtx 3060 no matter what settings I use. #43
Comments
Uncheck "train text encoder" if you haven't tried that.
I'd also suggest unchecking "train text encoder". It uses a big chunk of VRAM. You can also set "save checkpoint every" and "save image every" to 0 to ensure they don't try to use any VRAM.
@d8ahazard @rabidcopy It doesn't even get to the out-of-memory part if I uncheck the "train text encoder" option. Instead it throws:
Same card, same problem.
Same problem with a 1080 Ti 11GB; tried everything. For that matter, CPU doesn't work either.
See #37 for a temporary fix.
Same problem here; when I uncheck the "train text encoder" option, nothing changes, still the same error.
I haven't used it myself yet, but with this GUI, https://github.com/smy20011/dreambooth-gui, training works well on a 3060, even with text encoding ON.
I get this error on an RTX 3060 12 GB, right after it saves the ckpt file:

Exception saving preview: tensors used as indices must be long, byte or bool tensors
Traceback (most recent call last):

Setting "Generate a preview image every N steps" to 0 didn't help. :( @d8ahazard Didn't this latest patch break something?
I have an RTX 3060 12GB, and I'm getting the same OOM and errors...
After making the file edit noted in #37 to delete "dtype=weight_dtype", restarting the server, unchecking "don't cache latents", unchecking "train text encoder", switching mixed precision to fp16, setting "generate preview" to a really high number, and setting "save checkpoint" to the same number as my training steps, it's finally training! First time I've been able to Dreambooth-train locally on my 3060 12GB. It will take about half an hour to train 2000 steps.
Unfortunately I got a different error at the end of training, after reaching the final training step, about not being able to parse config.json and being unable to connect to huggingface.co... anyone know what that might be about?
I also got an error when generating the preview image:

Exception saving preview: tensors used as indices must be long, byte or bool tensors
Error completing request
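The "tensors used as indices" message in the tracebacks above is PyTorch's generic complaint about indexing a tensor with a float tensor. A minimal standalone reproduction (this is an illustration, not the extension's actual preview code) and the usual fix:

```python
import torch

emb = torch.randn(4, 3)          # e.g. a small lookup table of row vectors
idx = torch.tensor([0.0, 2.0])   # indices that arrived as floats by mistake

# Indexing with a float tensor raises:
#   IndexError: tensors used as indices must be long, byte or bool tensors
try:
    _ = emb[idx]
except IndexError as e:
    print(e)

# Casting the index tensor to int64 before indexing resolves it.
rows = emb[idx.long()]
print(rows.shape)  # two rows of width 3
```

In the extension's case this would point at whichever tensor the preview path uses as an index having the wrong dtype after the recent patch.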
I got the same. Training starts successfully after the tweaks, but if it pauses or stops for anything (including generating sample images) it crashes:
+check 8bit adam
8bit adam is checked; the issue persists.
According to AUTOMATIC1111/stable-diffusion-webui#4436 there is a way to unload the VAE and save 2+ GB, but I did not manage to find the setting.
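The "unload the VAE" idea mentioned above amounts to moving the VAE's weights out of VRAM between uses. A minimal sketch of the technique in plain PyTorch (the `vae` argument is a stand-in for the pipeline's VAE module; the actual webui setting, if it exists, is not shown here):

```python
import torch

def offload_vae(vae: torch.nn.Module) -> torch.nn.Module:
    """Move the VAE to system RAM so its weights stop occupying VRAM."""
    vae = vae.to("cpu")
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand the freed blocks back to the driver
    return vae
```

The module would then be moved back to the GPU only for the brief moment a preview image is decoded.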
I want to report that I just completed my first successful training on a 3060 12GB... commit c1702f1. So it can be done!
Could you be so kind as to document your settings here? I'd like to create a central place for folks to discuss tuning and setup, and I think your success story might be a good starting point. :D
Used every single "VRAM saving" setting there is. 8bit adam, dont cache latents, gradient checkpointing, fp16 mixed precision, etc. Even dropped the training resolution to abysmally low resolutions like 384 just to see if it would work. Same out of memory errors.
Isn't this supposed to be working with 12GB cards?
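For reference, the checkboxes listed above map onto the flags of diffusers' example `train_dreambooth.py` script; a hedged sketch of an equivalent low-VRAM invocation (the model path, data directory, and prompt are placeholders, and the extension may name some options differently):

```shell
# VRAM-saving configuration: 8-bit Adam, gradient checkpointing, fp16.
# Omitting --train_text_encoder keeps the text encoder frozen, which is
# the single biggest VRAM saving discussed in this thread.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./training_images" \
  --instance_prompt="a photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=2000
```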