Hi DrSleep,
I want to train the model on my own dataset with only 2 classes. When I changed the parameters in config.py, the following error occurred:
size mismatch for module.clf_conv.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([2]).
In addition, my label images are binarized; could that be the reason for this error?
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (2,2) and requested shape (3,2).
Hoping for your help, thanks very much.
size mismatch for module.clf_conv.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([2]).
This indicates that you are trying to load a checkpoint pre-trained on NYU (with 40 classes), so its classifier weights do not match your 2-class model.
In addition, my label images are binarized; could that be the reason for this error?
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (2,2) and requested shape (3,2).
What do you mean by a binarised label image? That should not affect anything as long as your masks are of shape HxW.
Knowing where the ValueError is coming from would be helpful.
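Without seeing the traceback this is only a guess, but a frequent cause of this kind of shape error with 2 classes is a binarised mask saved with pixel values 0/255 instead of 0/1, so the evaluation code sees more than `num_classes` distinct label values. A minimal sketch of a bincount-based confusion matrix that guards against out-of-range labels (the function name and masking are my own, not from this repo):

```python
import numpy as np

def fast_confusion(gt, pred, n_classes):
    """Confusion matrix via bincount over n_classes * gt + pred.
    Out-of-range labels (e.g. 255 from a 0/255 binarised mask, or an
    ignore index) are masked out instead of corrupting the counts."""
    gt = np.asarray(gt).ravel()
    pred = np.asarray(pred).ravel()
    valid = gt < n_classes  # drop labels outside [0, n_classes)
    return np.bincount(
        n_classes * gt[valid].astype(int) + pred[valid],
        minlength=n_classes ** 2,
    ).reshape(n_classes, n_classes)
```

If your masks use 0/255, remapping them to 0/1 before training (e.g. `mask // 255`) should make the shapes consistent.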