Traceback (most recent call last):
File "main_experiment.py", line 278, in <module>
run(args, kwargs)
File "main_experiment.py", line 189, in run
tr_loss = train(epoch, train_loader, model, optimizer, args)
File ".../sylvester-flows/optimization/training.py", line 39, in train
loss.backward()
File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
I am using PyTorch version 1.0.0 and did not modify the code.
I've just checked, and with PyTorch 0.3.0 this error does not occur on either CPU or GPU.
As described in the README under requirements, the code won't work with PyTorch 1.0.0 without at least some changes to the loss function (particularly nn.BCELoss). Last time I checked, later PyTorch versions silently change some default flags, which then causes errors in seemingly unrelated parts of the code. I currently don't have time to make the code compatible with PyTorch 1.0.0, but I will hopefully find some time soon.
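For context, the "modified by an inplace operation" error is raised whenever a tensor that autograd saved for the backward pass is edited in place before backward() runs. A minimal sketch (not the repo's actual code, just an assumed reproduction) of both the failure and the out-of-place fix:

```python
import torch

# Failure case: sigmoid saves its output for the backward pass,
# so editing that output in place invalidates the saved tensor.
x = torch.randn(3, requires_grad=True)
y = torch.sigmoid(x)
loss = y.sum()
y.add_(1.0)          # in-place edit of a tensor autograd needs
try:
    loss.backward()  # version-counter check fails here
except RuntimeError as e:
    print("RuntimeError:", e)

# Fix: use the out-of-place op, leaving the saved tensor untouched.
x2 = torch.randn(3, requires_grad=True)
y2 = torch.sigmoid(x2)
loss2 = y2.sum()
y2 = y2 + 1.0        # creates a new tensor instead of mutating y2
loss2.backward()     # succeeds
print("grad ok:", x2.grad is not None)
```

In older releases such as 0.3.0 the version check was less strict, which would explain why the same code ran there but fails under 1.0.0.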
Hi Rianne,
I'm trying to run the default experiment on CPU with a small latent space dimension (z_size=5):

python main_experiment.py -d mnist --flow no_flow -nc --z_size 5

which unfortunately gives the error shown above.