
CUDA OOM Error #245

Closed
zwacke opened this issue Jan 20, 2020 · 5 comments

Comments

@zwacke

zwacke commented Jan 20, 2020

Hi,

I am currently integrating Captum into my deep learning toolkit; thanks for providing this library.

When I try to run IntegratedGradients on a standard densenet201 model on a CUDA device (11 GB VRAM), I get an out-of-memory error even for a single input image.

Just a quick check: Is this normal behaviour?

@vivekmig
Contributor

Hi @zwacke, it depends on what you've set for n_steps. Integrated Gradients needs to compute gradients at each of the n_steps interpolation points, so one input image expands to a batch of n_steps evaluations, which is likely causing the out-of-memory issue.

You can try either reducing n_steps to a smaller value or using the internal_batch_size argument in IG, which splits the evaluations into batches of at most internal_batch_size examples. Setting internal_batch_size to a smaller value (corresponding to a batch size that fits in memory) while keeping n_steps the same should also work. Let us know if this resolves the issue!
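For reference, a minimal sketch of what that call can look like. The model setup, target class index, and the particular values of n_steps and internal_batch_size below are assumptions you would tune to your own model and GPU memory, not recommendations from this thread:

```python
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients

# Placeholder setup: any model and input of the right shape works here.
model = models.densenet201(pretrained=True).cuda().eval()
input_img = torch.randn(1, 3, 224, 224, device="cuda", requires_grad=True)

ig = IntegratedGradients(model)

# internal_batch_size caps how many of the n_steps interpolation points
# are pushed through the model at once, trading speed for peak memory.
attributions = ig.attribute(
    input_img,
    target=0,                 # assumed target class index
    n_steps=50,               # Captum's default number of integration steps
    internal_batch_size=8,    # assumed value; pick what fits in your VRAM
)
```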

@NarineK
Contributor

NarineK commented Jan 22, 2020

Hi @zwacke, did Vivek's suggestion help? If so, can we close the issue?

@zwacke
Author

zwacke commented Jan 23, 2020

Yes, thank you, it did. Defaulting to a working internal_batch_size seems to be the recommended approach. If I understand correctly, this especially helps when wrapping an attribution method in NoiseTunnel, or any other method that further expands the overall input batch size.
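As a rough sketch of that combination, continuing from the IG example above: each noise sample is itself expanded into n_steps evaluations, so internal batching matters even more here. Parameter values are illustrative, and note that the sample-count argument is named nt_samples in recent Captum releases but n_samples in the versions current at the time of this issue:

```python
from captum.attr import IntegratedGradients, NoiseTunnel

ig = IntegratedGradients(model)
nt = NoiseTunnel(ig)

# Without internal batching the effective batch grows to
# nt_samples * n_steps forward/backward passes per input image.
attributions = nt.attribute(
    input_img,
    nt_type="smoothgrad",
    nt_samples=10,            # assumed value; n_samples in older Captum
    target=0,                 # assumed target class index
    n_steps=50,
    internal_batch_size=8,    # forwarded to the wrapped IntegratedGradients
)
```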

@NarineK
Contributor

NarineK commented Jan 23, 2020

Nice! Glad that it helped. Feel free to close the issue if it's fixed!

@zwacke zwacke closed this as completed Jan 23, 2020
@tranvnhan

I encountered the same problem today. Reducing n_steps to a smaller value made it work.
