
loss.backward(retain_graph=True) raises an error #16

Open
mrb957600057 opened this issue Aug 19, 2020 · 3 comments

Comments

@mrb957600057

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [768]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Have you run into this problem?

@mrb957600057
Author

It's a version issue: switching to 1.4 works fine.

@lmw0320

lmw0320 commented May 6, 2021

> It's a version issue: switching to 1.4 works fine.

Could you advise: if we stay on version 1.5, where in the code should the change be made?

@baojunshan

> It's a version issue: switching to 1.4 works fine.
>
> Could you advise: if we stay on version 1.5, where in the code should the change be made?

self.mask_e = self.embedding(mask_token_id)

change it to

self.mask_e = self.embedding(mask_token_id).detach()
