
loss.backward memory #6

Closed
eric8576 opened this issue Feb 11, 2023 · 2 comments
eric8576 commented Feb 11, 2023

Hi, my GPU has 24GB of memory, but when the code reaches epoch 2, loss.backward() raises a CUDA out of memory error. The batch size is one. I do not know how to fix this! Thanks :)

aaronkujawa (Collaborator) commented Feb 12, 2023

Hi,
the current training patch size was maximised for a 32GB GPU. You can try reducing the training patch size, called pad_crop_shape in the code:

self.pad_crop_shape = [384, 384, 64]

and, accordingly, the sliding_window_inferer_roi_size:

self.sliding_window_inferer_roi_size = [384, 384, 64]

Values should be multiples of 2; for example, you can try [256, 256, 64].
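For reference, a minimal sketch of how the two settings could be lowered together for a 24GB GPU. Only the two attribute names come from the reply above; the surrounding Config class and the MONAI SlidingWindowInferer wiring are assumptions for illustration, not the repository's actual code.

# Hypothetical config excerpt; attribute names follow the reply above,
# the class itself and the MONAI inferer usage are assumptions.
from monai.inferers import SlidingWindowInferer

class Config:
    def __init__(self):
        # Training patch size: reduced from [384, 384, 64] to fit a smaller GPU.
        self.pad_crop_shape = [256, 256, 64]
        # Keep the sliding-window inference ROI in sync with the training patch size.
        self.sliding_window_inferer_roi_size = [256, 256, 64]

cfg = Config()
inferer = SlidingWindowInferer(
    roi_size=cfg.sliding_window_inferer_roi_size,
    sw_batch_size=1,
    overlap=0.25,
)

Keeping both values identical means the model sees the same spatial extent at training and inference time; shrinking them is the usual first lever when loss.backward() runs out of memory with batch size 1.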

eric8576 (Author) commented Feb 13, 2023

It works! Thank you for your reply!
