solve DPM-Solver OOM issue #158
Merged
WuJunde merged 1 commit into ImprintLab:master from lin-tianyu:master on Mar 10, 2024
Conversation
Collaborator
Thank you for your significant contribution, Tianyu. Although I haven't encountered this issue personally, I've observed that it has annoyed many users. It appears that in certain versions of PyTorch, the samples are processed with gradients enabled, which causes the problem. I think a `torch.no_grad()` context or a `model.eval()` call may also resolve this issue. In any case, I will proceed to merge this fix and ask users who have experienced this problem to test it. Once again, thank you for your valuable input.
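The alternative the comment suggests can be sketched as follows. This is a minimal illustration, not the actual MedSegDiff sampling code: the `model` here is a hypothetical stand-in for the diffusion network, and the real DPM-Solver call is more involved.

```python
import torch

# Hypothetical stand-in for the diffusion model used during sampling.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
model.eval()  # disable dropout/batch-norm training behavior

x = torch.randn(1, 1, 8, 8)

# Without no_grad, each forward pass builds an autograd graph that keeps
# intermediate tensors alive, so memory grows with every sample drawn.
with torch.no_grad():
    sample = model(x)

# No graph is attached, so the intermediates can be freed immediately.
assert not sample.requires_grad
```

Note that `model.eval()` alone does not disable gradient tracking; only `torch.no_grad()` (or detaching the outputs, as this PR does) prevents the graph from being retained.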
Problem Description
Many MedSegDiff users have encountered this problem, e.g. #49 and #157:
when sampling with DPM-Solver, each sample increases GPU memory usage by about 2 GB, eventually ending in a CUDA Out Of Memory error.
Previous Solution
I once solved this problem by downgrading my PyTorch version to 1.8.1. However, after some untrackable changes in my Python environment, the issue came back, and PyTorch==1.8.1 no longer helps.
Problem Solved
After debugging, I realized that some CUDA tensors were not being released from GPU memory. Surprisingly, adding a single line right after DPM-Solver sampling to force tensor detachment solved the problem.
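The idea behind the fix can be sketched as below. This is a simplified illustration under assumed names, not the exact line added in this PR: `model` stands in for the diffusion network and the single forward pass stands in for the DPM-Solver loop.

```python
import torch

# Hypothetical stand-in for the diffusion model (the real sampler
# calls DPM-Solver over many timesteps).
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
x = torch.randn(1, 1, 8, 8)

# Sampling with autograd enabled: the output tensor keeps the whole
# computation graph, and its intermediate tensors, alive in memory.
sample = model(x)
assert sample.requires_grad

# The fix: detach the sample right after the solver returns, so the
# graph can be garbage-collected and the (GPU) memory reclaimed.
sample = sample.detach()
assert not sample.requires_grad

# On CUDA runs, torch.cuda.empty_cache() can additionally return
# cached blocks to the driver (unnecessary on CPU).
```

Detaching breaks the reference from the sample to its autograd graph, so repeated sampling no longer accumulates per-sample graphs, which matches the roughly 2 GB-per-sample growth described above.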
Since this issue might have troubled a lot of people, I am creating this pull request. Hope it helps.