CUDA OOM Error #245
Comments
Hi @zwacke, it depends on what you've set for `n_steps`. Integrated Gradients needs to compute gradients at each of the `n_steps` points, so one input image expands to a batch of `n_steps` evaluations, which is likely causing the out-of-memory issue. You can try either reducing `n_steps` to a smaller value or using the `internal_batch_size` argument in IG, which splits the evaluations into batches with at most `internal_batch_size` examples. Setting `internal_batch_size` to a smaller value (corresponding to a batch size that can fit in memory) while keeping `n_steps` the same should also work. Let us know if this resolves the issue!
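For reference, a minimal sketch of this suggestion (the untrained densenet201, the random input image, and the target class are placeholders, not taken from the original report):

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients

# Hypothetical setup: a single 224x224 image on the GPU and a densenet201
# with random weights standing in for the user's model.
model = models.densenet201().cuda().eval()
input_img = torch.rand(1, 3, 224, 224, device="cuda")

ig = IntegratedGradients(model)

# IG expands the single image into n_steps interpolated copies and needs
# gradients for each of them. internal_batch_size caps how many of those
# copies go through a forward/backward pass at once, trading speed for memory.
attributions = ig.attribute(
    input_img,
    target=0,               # hypothetical target class
    n_steps=50,
    internal_batch_size=8,  # pick the largest value that fits in GPU memory
)
```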
Hi @zwacke, did Vivek's suggestion help? If so, can we close the issue?
Yes, thank you, it did. Defaulting to a working `internal_batch_size` seems to be the recommended approach. If I understand correctly, this especially helps when wrapping an attribution method in NoiseTunnel, or any method that further expands the overall input batch size.
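A rough sketch of that NoiseTunnel case (argument names assume a recent Captum release, where the sample count is `nt_samples`; the model, sample counts, and target class are again placeholders):

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients, NoiseTunnel

# Same hypothetical setup as above.
model = models.densenet201().cuda().eval()
input_img = torch.rand(1, 3, 224, 224, device="cuda")

nt = NoiseTunnel(IntegratedGradients(model))

# NoiseTunnel creates nt_samples noisy copies of the input, and IG then expands
# each copy into n_steps evaluations, so memory grows with nt_samples * n_steps.
# internal_batch_size is forwarded to IntegratedGradients and still caps how
# many evaluations run per forward/backward pass.
attributions = nt.attribute(
    input_img,
    nt_type="smoothgrad",
    nt_samples=10,          # older Captum releases call this argument n_samples
    target=0,               # hypothetical target class
    n_steps=50,
    internal_batch_size=8,
)
```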
Nice! Glad that it helped. Feel free to close the issue if it's fixed!
I encountered the same problem today. By reducing |
Hi,
I am currently integrating Captum into my deep learning toolkit; thanks for providing this library.
When I try to run IntegratedGradients on a standard densenet201 model on a CUDA device (11 GB VRAM), I get an out-of-memory error even for a single input image.
Just a quick check: Is this normal behaviour?