Hi, running the following script with torch==2.0.0:
import unet
import torch
loss_fn = torch.nn.BCEWithLogitsLoss()
input = torch.ones([8, 3, 24, 24])     # batch of 8 three-channel 24x24 inputs
targets = torch.ones([8, 10, 24, 24])  # one target map per output class
unet_model = unet.UNet2D(in_channels=3, out_classes=10, residual=True, num_encoding_blocks=2)
out = unet_model(input)
loss = loss_fn(out, targets)  # BCEWithLogitsLoss expects (input, target): logits first
loss.backward()
gave the following error:
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_5969/4012461463.py in <module>
5 out = unet_model(input)
6 loss = loss_fn(out, targets)
----> 7 loss.backward()
~/.local/lib/python3.10/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
485 inputs=inputs,
486 )
--> 487 torch.autograd.backward(
488 self, gradient, retain_graph, create_graph, inputs=inputs
489 )
~/.local/lib/python3.10/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
198 # some Python versions print out the first line of a multi-line function
199 # calls in the traceback and some print out the last line
--> 200 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    201     tensors, grad_tensors_, retain_graph, create_graph, inputs,
    202     allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [8, 64, 24, 24]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
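As the hint in the error message suggests, enabling anomaly detection makes autograd record a traceback for each forward op, so the RuntimeError also points at the operation that was later modified in place, instead of only at backward(). A minimal sketch, reusing the model, loss, and tensors from the script above:

import torch
torch.autograd.set_detect_anomaly(True)  # record a traceback for every forward op
out = unet_model(input)
loss = loss_fn(out, targets)
loss.backward()  # the RuntimeError now also prints the traceback of the offending in-place op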
The cause is not that newer torch forbids += on tensors in general; rather, autograd now refuses to backprop through a tensor that was modified in place after being saved for the backward pass (here, the ReLU output in the residual connection).
To fix it, replace the two in-place additions with out-of-place ones:
decoding.py l.136: x += connection => x = x + connection
encoding.py l.146: x += connection => x = x + connection
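For context, the version-counter mechanism behind the error can be reproduced outside this repo in a few lines. A minimal sketch, independent of the UNet code:

import torch

x = torch.ones(3, requires_grad=True)
y = torch.relu(x)  # autograd saves ReLU's output to compute its gradient
y += 1             # in-place add bumps y's version counter (version 0 -> 1)
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)  # "... output 0 of ReluBackward0, is at version 1; expected version 0 ..."

y = torch.relu(x)
y = y + 1          # out-of-place add allocates a new tensor; the saved ReLU output is untouched
y.sum().backward()
print(x.grad)      # tensor([1., 1., 1.])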
I made a clone of this repo and fixed the error locally this way, but I did not have permission to push.
Best Regards,
Johannes Dollinger