If @Kolin96's solution didn't work, you can try modifying self.loss_weights in the __init__() function of the MultiScale class to self.loss_weights = torch.cuda.FloatTensor([(l_weight / 2 ** scale) for scale in range(self.numScales)]).
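As a device-agnostic alternative, here is a minimal sketch (only the weight handling is shown; the MultiScale name and the l_weight / numScales parameters follow the snippet above, everything else is assumed) that registers the weights as a buffer so they move to the GPU together with the module, instead of hard-coding torch.cuda.FloatTensor:

```python
import torch
import torch.nn as nn

class MultiScale(nn.Module):
    # Minimal sketch: only the loss-weight handling is shown,
    # not the actual multi-scale loss computation.
    def __init__(self, startScale=4, numScales=5, l_weight=0.32):
        super().__init__()
        self.startScale = startScale
        self.numScales = numScales
        weights = torch.tensor([l_weight / 2 ** scale
                                for scale in range(numScales)])
        # A buffer follows the module across .cuda()/.to(device) calls,
        # so loss_weights always lives on the same device as the inputs.
        self.register_buffer("loss_weights", weights)
```

After model.cuda(), loss_weights becomes a CUDA tensor automatically, so the same code runs on CPU and GPU without changes.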
Hi everyone,
I tried this command below:
python main.py --batch_size 8 --model FlowNet2C --optimizer=Adam
--optimizer_lr=1e-4 --loss=MultiScale --loss_norm=L1
--loss_numScales=5 --loss_startScale=4 --crop_size 384 512
--training_dataset MpiSintelFinal --training_dataset_root
/path/to/mpi-sintel/final/dataset
--validation_dataset MpiSintelClean --validation_dataset_root
/path/to/mpi-sintel/clean/dataset
However I got an error as:
Traceback (most recent call last):
File "main.py", line 429, in <module>
train_loss, iterations = train(args=args, epoch=epoch, start_iteration=global_iteration, data_loader=train_loader, model=model_and_loss, optimizer=optimizer, logger=train_logger, offset=offset)
File "main.py", line 295, in train
loss_val.backward()
File "/home/hazhang/.conda/envs/virt_optflow/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/hazhang/.conda/envs/virt_optflow/lib/python3.6/site-packages/torch/autograd/__init__.py", line 91, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function MulBackward0 returned an invalid gradient at index 1 - expected type torch.cuda.FloatTensor but got torch.FloatTensor
It seems there is a problem with the tensor type (CPU vs. CUDA) during backpropagation. Has anyone had the same issue?
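For errors like this, a small helper (hypothetical, not part of this repo) that lists which parameters or buffers of a module are not on the expected device can point at the offending tensor:

```python
import torch

def find_offdevice_tensors(module, device="cuda"):
    """Return (name, device) pairs for parameters/buffers of `module`
    that are not on `device`.

    Any tensor reported here joins the autograd graph from the wrong
    device, which is what triggers mixed CPU/CUDA errors in backward().
    """
    expected = torch.device(device).type
    wrong = []
    for name, t in list(module.named_parameters()) + list(module.named_buffers()):
        if t.device.type != expected:
            wrong.append((name, str(t.device)))
    return wrong
```

Calling it on the model (and on the loss module) right after moving everything to the GPU would show whether something like loss_weights stayed on the CPU.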
Thank you for your help:)