solve DPM-Solver OOM issue#158

Merged
WuJunde merged 1 commit into ImprintLab:master from lin-tianyu:master
Mar 10, 2024
Conversation

lin-tianyu (Contributor) commented Mar 10, 2024

Problem Description

Many MedSegDiff users have run into this problem, e.g. #49 and #157: when sampling with DPM-Solver, every single sample increases GPU memory usage by about 2 GB, eventually ending in a CUDA Out Of Memory error.

Previous Solution

I once solved this problem by downgrading my PyTorch version to 1.8.1. However, after some untracked changes in my Python environment, the issue came up again, and PyTorch 1.8.1 no longer helped.

Problem Solved

After debugging, I realized that some CUDA tensors were not being released from GPU memory. Surprisingly, adding a single line right after DPM-Solver sampling to force tensor detachment solved the problem.

Since this issue has likely troubled a lot of people, I am creating this pull request. I hope it helps.
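For readers hitting the same OOM, the kind of fix described above can be sketched as follows. This is a minimal illustration, not the exact line merged in this PR: the helper name `release_sample` is hypothetical, and it assumes the DPM-Solver output is a PyTorch tensor that may still be attached to the autograd graph.

```python
import torch

def release_sample(sample: torch.Tensor) -> torch.Tensor:
    """Detach a sampled tensor from the autograd graph and move it to CPU
    so its GPU memory can be reclaimed between samples.
    (Hypothetical helper illustrating the idea, not MedSegDiff's code.)"""
    out = sample.detach().cpu()
    # Optionally hand cached blocks back to the CUDA allocator as well.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return out
```

Calling something like this once per sample prevents the per-sample 2 GB growth, because no reference into the autograd graph (or onto the GPU) survives the loop iteration.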

@WuJunde WuJunde merged commit 236fe40 into ImprintLab:master Mar 10, 2024
WuJunde (Collaborator) commented Mar 10, 2024

Thank you for your significant contribution, Tianyu. Although I haven't encountered this issue personally, I've observed that it has annoyed many users. It appears that in certain versions of PyTorch, the samples are processed with gradients enabled, which causes the problem. I think wrapping sampling in `torch.no_grad()` (together with `model.eval()`) may also resolve this issue. In any case, I will proceed to merge this fix and ask users who have experienced this problem to test it. Once again, thank you for your valuable input.
