
Release SD Pipeline VRAM from CUDA cache after generating samples #260

Closed
usagirei opened this issue Mar 5, 2023 · 2 comments

Comments


usagirei commented Mar 5, 2023

As of 46aee85, when sampling images during training, CUDA keeps the (now unused) pipeline data cached in VRAM after the method exits, possibly causing overcommit (8.5~8.9 / 8.0 in my case). This can slow down training, as well as other applications using the graphics card, due to constant VRAM<->RAM swapping.

Unloading the pipeline and clearing the CUDA cache by adding the following (before exiting sample_images)

del pipeline
torch.cuda.empty_cache()

before the line

torch.set_rng_state(rng_state)

should mitigate this issue and keep VRAM usage on method exit (7.0~7.2 / 8.0 in my case) the same as it was before calling sample_images.
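As a sketch of the pattern being suggested (release_pipeline is a hypothetical helper name, and the guarded torch import is an assumption so the snippet also runs on machines without PyTorch or CUDA):

```python
import gc

def release_pipeline(pipeline):
    # Drop the local reference so the pipeline's tensors become garbage.
    del pipeline
    # Run a collection pass so PyTorch's caching allocator sees the freed
    # tensors before we ask it to return blocks to the driver.
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached (but unused) blocks from PyTorch's allocator
            # back to CUDA, shrinking the process's reported VRAM usage.
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing to release
```

Note that torch.cuda.empty_cache() can only return blocks that are no longer referenced, so the del (or letting the pipeline variable go out of scope) has to happen first.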

kohya-ss (Owner) commented Mar 6, 2023

Thank you for letting me know. I will add these lines in the next update.

kohya-ss (Owner) commented Mar 9, 2023

Fixed in the latest commit :)
