Pyramidnet Issue #65
Comments
I'm experiencing the same issue.
I figured out a way to prevent this. If you look at this block:

```python
if residual_channel != shortcut_channel:
    padding = torch.autograd.Variable(
        torch.cuda.FloatTensor(batch_size, residual_channel - shortcut_channel,
                               featuremap_size[0], featuremap_size[1]).fill_(0))
    out += torch.cat((shortcut, padding), 1)
```

it is creating variables on the fly, which is presumably what leaks memory. I first replaced it with:

```python
if residual_channel != shortcut_channel:
    out[:, :shortcut.size(1)] = out[:, :shortcut.size(1)] + shortcut
else:
    out = out + shortcut
```

To be honest, sometimes this fails too, though not because of the memory leak, and at first I had not figured out why.

Update: I figured out the problem: the in-place slice assignment modifies the output of the custom ShakeDrop Function, which autograd treats as a view (exactly what the RuntimeError reported in this issue complains about). Here is a version that, to the best of my knowledge, does not throw any errors and also does not leak memory:

```python
if residual_channel != shortcut_channel:
    out = out.clone()
    out[:, :shortcut.size(1)] = out[:, :shortcut.size(1)] + shortcut
else:
    out = out + shortcut
```
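For anyone who wants to drop this in, here is a self-contained sketch of the clone-based fix; the `residual_add` helper name and the tensor shapes are made up for illustration:

```python
import torch

def residual_add(out, shortcut):
    # Hypothetical helper mirroring the fixed block above: add a shortcut
    # with fewer channels into the first channels of `out`.
    if out.size(1) != shortcut.size(1):
        # clone() gives `out` its own storage, so the slice assignment below
        # is an in-place op on an ordinary tensor rather than on a view
        # returned by a custom autograd Function.
        out = out.clone()
        out[:, :shortcut.size(1)] = out[:, :shortcut.size(1)] + shortcut
    else:
        out = out + shortcut
    return out

# Quick gradient check with made-up shapes.
x = torch.randn(2, 8, 4, 4, requires_grad=True)  # residual: 8 channels
s = torch.randn(2, 4, 4, 4, requires_grad=True)  # shortcut: 4 channels
residual_add(x, s).sum().backward()
print(x.grad.shape, s.grad.shape)  # torch.Size([2, 8, 4, 4]) torch.Size([2, 4, 4, 4])
```

Note this also avoids allocating the zero padding tensor on every forward pass, since the extra channels of `out` are simply left as they are.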
Hi,

I am currently trying to use PyramidNet + ShakeDrop. However, I am getting the following error:

```
RuntimeError: Output 0 of ShakeDropFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can remove this warning by cloning the output of the custom Function.
```

If I try to fix the error by changing some lines, memory usage increases a lot. So I was wondering whether you have also encountered this error.
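For reference, here is a minimal sketch that, as far as I can tell, reproduces the same class of error with a toy custom Function and shows the `clone()` workaround the message suggests; `PassThrough` is a made-up stand-in for `ShakeDropFunction`:

```python
import torch

class PassThrough(torch.autograd.Function):
    """Toy stand-in for ShakeDropFunction: returning the input as-is makes
    autograd treat the output as a view of the input."""

    @staticmethod
    def forward(ctx, x):
        return x  # input returned as-is

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

x = torch.randn(3, requires_grad=True)

out = PassThrough.apply(x)
# out += 1.0  # RuntimeError: Output 0 of PassThroughBackward is a view
#             # and is being modified inplace ...

out = PassThrough.apply(x).clone()  # clone gives the output its own storage
out += 1.0                          # now the in-place op is allowed
out.sum().backward()
print(x.grad)  # tensor([1., 1., 1.])
```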
Thank you!