
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 1]], which is output 0 of AsStridedBackward0, is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! #57

Open
Cihannm opened this issue Jan 19, 2024 · 0 comments


Cihannm commented Jan 19, 2024

I'm studying a machine-learning paper, but the code keeps failing with an error on my machine. The code runs on a GPU. I asked my instructor, and he said it is not an in-place operation; he thinks it could be a version mismatch between Python, CUDA, and PyTorch, or some difference in PyTorch's `torch.cuda.FloatTensor` behavior, but I don't know what such a difference would mean. This question has puzzled me for a long time; I hope you can help me. Thank you very much!

My Python version is 3.10.13, with torch 1.13.1 and CUDA 11.7.
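For context, the "Error detected in AddmmBackward0 ... Traceback of forward call" note in the log below comes from autograd's anomaly mode, which points at the forward op whose gradient later fails. A hedged sketch of that mechanism (toy tensors standing in for the model, not the ZIN code):

```python
import torch

# Toy reproduction: an in-place edit of a tensor that autograd saved during
# the forward pass bumps its version counter, so backward() raises the
# "modified by an inplace operation" RuntimeError seen in this issue.
w = torch.ones(16, 1, requires_grad=True)
h = w * 2                 # non-leaf tensor
out = (h * h).sum()       # the multiply saves h for the backward pass
h.add_(1.0)               # in-place edit bumps h's version counter

raised = False
with torch.autograd.detect_anomaly():
    try:
        out.backward()    # autograd notices h changed since the forward pass
    except RuntimeError as err:
        raised = "inplace operation" in str(err)
print(raised)
```

Under anomaly mode the warning printed alongside the error includes the traceback of the forward call (`h * h` here) that created the saved tensor, which is how the `AddmmBackward0` warning above identifies `F.linear` as the recorded op.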

```
C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py:197: UserWarning: Error detected in AddmmBackward0. Traceback of forward call that caused the error:
  File "C:\Users\86135\ZIN_official-mainorg\main.py", line 90, in <module>
    train_nll, train_penalty = algo(batch_data, step, mlp, scale, mean_nll=mean_nll)
  File "C:\Users\86135\ZIN_official-mainorg\algorithms\infer_irmv1.py", line 29, in __call__
    infered_envs = self.infer_env(normed_z)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\86135\ZIN_official-mainorg\algorithms\model.py", line 202, in forward
    out = self._main(input)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\fx\traceback.py", line 57, in format_stack
    return traceback.format_stack()
 (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\python_anomaly_mode.cpp:119.)
  Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
  File "C:\Users\86135\ZIN_official-mainorg\main.py", line 108, in <module>
    loss.backward()  ####################
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "C:\Users\86135\anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 1]], which is output 0 of AsStridedBackward0, is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
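This error generally means a tensor that autograd saved during the forward pass was mutated in place before `loss.backward()` ran; the usual fix is to replace the in-place op with an out-of-place one. A minimal sketch of the failure and the fix (toy tensors; the real culprit is somewhere in the ZIN training loop, not necessarily this exact op):

```python
import torch

# Failing pattern: h is saved for backward, then mutated in place.
w = torch.ones(16, 1, requires_grad=True)
h = w * 2
out = (h * h).sum()       # the multiply saves h for the backward pass
h.add_(1.0)               # in-place op: h's version counter is bumped

raised = False
try:
    out.backward()        # "modified by an inplace operation"
except RuntimeError:
    raised = True

# Fix: make the mutation out-of-place so the saved tensor stays untouched.
h2 = w * 2
out2 = (h2 * h2).sum()
h2 = h2 + 1               # rebinds the name; the tensor saved by out2 is intact
out2.backward()           # succeeds
print(raised, w.grad is not None)
```

The same pattern applies to ops like `tensor.mul_(...)`, `tensor += ...`, or index assignment `tensor[mask] = ...` when the tensor participates in the graph; rewriting them as `tensor = tensor * ...` (etc.) before `backward()` typically resolves this error regardless of Python/CUDA/PyTorch versions.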
