
Torch version #21

Closed

hhheeexxxuuu opened this issue Jan 30, 2021 · 3 comments

Comments

@hhheeexxxuuu

hhheeexxxuuu commented Jan 30, 2021

Hello, I installed torch 1.7.0, but I get this error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). I changed inplace=True to inplace=False, but the error still occurs. Can you give me some advice?

@geekyutao
Owner

geekyutao commented Jan 31, 2021

Hi, I guess it's a PyTorch version issue. Higher versions no longer allow in-place shorthands such as "loss += loss1" when accumulating losses. Try "loss = loss + loss1" instead of the shorthand. If it still doesn't work, downgrade PyTorch to 1.1 or 1.2. Thanks.
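
For reference, here is a minimal sketch of the difference, using a toy model and toy losses rather than this repository's actual training code:

```python
import torch
import torch.nn as nn

# Toy model and two toy loss terms, just to illustrate the accumulation issue.
net = nn.Linear(8, 8)
out = net(torch.randn(4, 8))

loss1 = out.pow(2).mean()
loss2 = (out - 1).abs().mean()

# Risky on newer PyTorch: "+=" is an in-place op on a tensor that is part of
# the autograd graph, and it can bump the tensor's version counter so that
# backward() later fails with "modified by an inplace operation".
#   loss = loss1
#   loss += loss2

# Safe: out-of-place addition creates a new graph node instead of mutating
# loss1, so every saved tensor keeps the version autograd expects.
loss = loss1 + loss2
loss.backward()
```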

@hhheeexxxuuu
Author

I will try, thank you!

@geekyutao geekyutao pinned this issue Mar 3, 2021
@geekyutao
Owner

Hi, there is another solution: back-propagate d_loss immediately once you have calculated it, before running the generator step.
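
For reference, a minimal GAN-style sketch of that ordering (toy generator/discriminator and hinge-like toy losses, not this repository's code):

```python
import torch
import torch.nn as nn

# Toy generator G and discriminator D standing in for the real networks.
G = nn.Linear(16, 16)
D = nn.Linear(16, 1)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

real = torch.randn(8, 16)
fake = G(torch.randn(8, 16))

# Discriminator step: back-propagate d_loss immediately after computing it.
d_loss = (1 - D(real)).relu().mean() + (1 + D(fake.detach())).relu().mean()
opt_d.zero_grad()
d_loss.backward()          # done right away, before anything else touches the graph
opt_d.step()

# Generator step: a fresh forward pass through D, so its graph is independent
# of the discriminator parameters that were just updated in place.
g_loss = -D(fake).mean()
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```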
