
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. #9

Open
TJ2333 opened this issue May 31, 2021 · 2 comments

Comments


TJ2333 commented May 31, 2021

env:
torch 1.8.1+cu111

Error:
UserWarning: Error detected in AddmmBackward. Traceback of forward call that caused the error:
File "", line 1, in
File "E:\A\envs\gym\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "E:\A\envs\gym\lib\multiprocessing\spawn.py", line 118, in _main
return self._bootstrap()
File "E:\A\envs\gym\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "E:\A\envs\gym\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "Pytorch-RL\Pytorch-DPPO-master\train.py", line 155, in train
mu_old, sigma_sq_old, v_pred_old = model_old(batch_states)
File "E:\A\envs\gym\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "Pytorch-DPPO-master\model.py", line 53, in forward
v1 = self.v(x3)
File "E:\A\envs\gym\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "E:\A\envs\gym\lib\site-packages\torch\nn\modules\linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "E:\A\envs\gym\lib\site-packages\torch\nn\functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
(Triggered internally at ..\torch\csrc\autograd\python_anomaly_mode.cpp:104.)
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
Process Process-4:
Traceback (most recent call last):
File "E:\A\envs\gym\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "E:\A\envs\gym\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "Pytorch-DPPO-master\train.py", line 197, in train
total_loss.backward(retain_graph=True)
File "E:\A\envs\gym\lib\site-packages\torch\tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "E:\A\envs\gym\lib\site-packages\torch\autograd_init
.py", line 147, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I googled it and some say it's caused by an in-place operation, but I can't seem to find one in the code. I haven't tried downgrading torch yet; is there a solution that doesn't require downgrading?
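A minimal sketch of what this error usually means (this is not the repo's code): a parameter that the autograd graph still references, apparently the weight of the value head (its transpose would be the [100, 1] tensor from TBackward), was modified in place between the forward pass through model_old (train.py line 155) and total_loss.backward() (line 197), for example by an optimizer step or by copying new weights into model_old. The nn.Linear below is made up for illustration; it reproduces the same version-counter error and shows the usual fix without downgrading torch.

```python
import torch
import torch.nn as nn

lin = nn.Linear(4, 1)
x = torch.randn(100, 4, requires_grad=True)

out = lin(x)      # the forward pass saves lin.weight for the backward pass
loss = out.sum()

# An in-place parameter update (what optimizer.step() or weight.copy_() does)
# bumps the weight's version counter...
with torch.no_grad():
    lin.weight.add_(1.0)

# ...so backward() through the graph built *before* the update fails with
# "one of the variables needed for gradient computation has been modified
# by an inplace operation".
try:
    loss.backward()
except RuntimeError as e:
    print(e)

# Fix: call backward() before any in-place parameter update, or rebuild the
# graph (re-run the forward pass) after the update.
loss2 = lin(x).sum()
loss2.backward()  # works: this graph was built after the in-place change
```

If that is indeed what happens in this script, the non-downgrade options are to run backward() before the weights are synchronised or stepped, or to compute the old-policy outputs under torch.no_grad() (and detach them) so the graph does not keep references to model_old's parameters.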

@xiaomeng9532

Hello, have you solved this problem?

TJ2333 commented Sep 16, 2021

> Hello, have you solved this problem?

Nope, but I found another version of the DPPO code: https://github.com/TianhongDai/distributed-ppo
