[Feature] Add AvoidOOM to avoid OOM #7434
Conversation
If we can only set
Codecov Report
@@ Coverage Diff @@
## dev #7434 +/- ##
==========================================
- Coverage 65.09% 64.50% -0.59%
==========================================
Files 357 360 +3
Lines 28852 29233 +381
Branches 4891 4954 +63
==========================================
+ Hits 18782 18858 +76
- Misses 9061 9370 +309
+ Partials 1009 1005 -4
Update the logic in AvoidOOM: it now defaults to returning outputs with the source dtype and device, without requiring any extra interface. This makes the code simpler.
* [Feature] Add AvoidOOM to avoid OOM
* support multiple outputs
* add docs in faq
* add docs in faq
* fix logic
* minor fix
* minor fix
* minor fix
* minor fix
* add the tutorials of using avoidoom as a decorator
* minor fix
* add convert tensor type test unit
* minor fix
* minor fix
First, we tried changing `torch.mm` to `torch.einsum` to avoid OOM:
before the change, mAP: 0.331
after the change, mAP: 0.331
But we found it cannot save GPU memory.
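The unchanged mAP is expected: both formulations compute the same contraction, C[i][j] = Σₖ A[i][k]·B[k][j], so only the backend kernel (and its intermediate memory use) could differ. A torch-free sketch of this equivalence, with function names of our own choosing for illustration:

```python
# Pure-Python sketch: torch.mm and torch.einsum('ik,kj->ij', ...) both
# compute the same contraction C[i][j] = sum_k A[i][k] * B[k][j], which is
# why swapping them leaves the mAP unchanged (0.331 in both runs).

def mm(a, b):
    # "torch.mm"-style: row-times-column matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def einsum_ik_kj(a, b):
    # "einsum('ik,kj->ij')"-style: explicit summation over the shared index k.
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            for j in range(cols):
                c[i][j] += a[i][k] * b[k][j]
    return c

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mm(A, B) == einsum_ik_kj(A, B)  # identical results, identical accuracy
```

Any memory savings from such a swap would come from the kernel implementation, not from the math, which matches the observation that it cannot save GPU memory here.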
To avoid OOM, we add a class which tries to convert inputs to FP16 and to CPU when a PyTorch CUDA Out of Memory error occurs.
It will do the following steps:
1. Call `torch.cuda.empty_cache()` and retry.
2. If it still OOMs, convert inputs to FP16 and retry.
3. If it still OOMs, convert inputs to CPU and retry.
TODO:
Close: #6908
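A minimal, torch-free sketch of this fallback chain, assuming a stand-in `MemoryError` for the CUDA OOM error and a hypothetical `mode` keyword on the wrapped function (the real class additionally calls `torch.cuda.empty_cache()` between attempts and converts outputs back to the source dtype and device):

```python
# Sketch of the AvoidOOM retry strategy: try the original call, then
# progressively cheaper fallbacks. MemoryError stands in for torch's
# CUDA OOM error; "mode" is a hypothetical knob for this illustration.
import functools

def avoid_oom(func):
    """Retry `func` with progressively cheaper inputs when memory runs out."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Fallback chain: plain retry (None), then FP16, then CPU.
        for mode in (None, "fp16", "cpu"):
            try:
                if mode is not None:
                    kwargs["mode"] = mode
                return func(*args, **kwargs)
            except MemoryError:
                continue  # real code would empty the CUDA cache here
        raise MemoryError("still OOM after all fallbacks")
    return wrapper

@avoid_oom
def forward(x, mode=None):
    # Pretend the full-precision GPU path does not fit in memory.
    if mode is None:
        raise MemoryError("CUDA out of memory (simulated)")
    return f"ran in {mode}"

print(forward([1, 2, 3]))  # → ran in fp16
```

Used as a decorator (as in the tutorial commits above), the wrapped forward transparently falls back to the first mode that fits.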