input and target size don't match for loss function #66
Comments
Note that resnet50_dilated8 and Upernet should not be combined.
But shouldn't Resnet101 and Upernet work in that case? After all, it is an example you give in your documentation. This still leads to an error.
Resnet101 and Upernet work on my side. Are you missing any of the arguments we provided?
Thanks, yes, that was the issue. I was not adapting the padding constant and the downsampling rate when changing models.
@heinzermch I encountered the same problem. How did you adapt the padding constant and downsampling rate?
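For reference, the dilated encoders assume each spatial side of the input is divisible by the downsampling rate, which is what the padding constant enforces. A minimal, framework-free sketch of that rounding (the function names here are illustrative, not the repository's actual API):

```python
def round_to_multiple(size, multiple):
    """Round `size` up to the nearest multiple of `multiple`."""
    return ((size + multiple - 1) // multiple) * multiple

def pad_image_size(height, width, padding_constant=8):
    """Return (height, width) padded so both sides are divisible by padding_constant.

    With padding_constant=8 (matching a dilated8 encoder), a 170x212 image
    would be padded to 176x216 before being fed to the network.
    """
    return (round_to_multiple(height, padding_constant),
            round_to_multiple(width, padding_constant))
```

If the padding constant does not match the encoder's downsampling rate, the target produced by the data loader and the prediction produced by the decoder end up with different spatial sizes, which is exactly the mismatch reported in this thread.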
I ran with resnet50_dilated8-ppm_bilinear_deepsup. The "pred" after self.decoder has the shape 20×2×28×28 (2-class problem). However, the input of the network is 20×2×224×224, and the error is a size mismatch in the loss calculation. Is this due to the padding constant and downsampling rate issue, and if so, how can I adjust them?
I just used the parameters as described in the Readme:
Thanks for your answer.
The error is the same as above. I evaluated the output of line 32 in the 'models.py' file.
The output of self.encoder is a list containing 4 tensors.
The output of self.decoder is (I am doing 2-class segmentation).
It seems that the error is in self.encoder: should the expected output be a list whose first element is a tensor of shape (20, 256, 224, 224), so that the decoder output has the shape (20, 256, 224, 224)? Is there anything I missed? Thanks again!
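Much of the confusion above comes from the encoder's downsampling: a dilated8 encoder shrinks each spatial side by a factor of 8, so a 224×224 input yields 28×28 feature maps, and the decoder's prediction inherits that size unless it is upsampled (or the target is downsampled) to match. A small sketch of the shape arithmetic (names are illustrative):

```python
def expected_pred_size(input_side, downsampling_rate):
    """Spatial side of the prediction for a given encoder downsampling rate.

    Assumes input_side is already divisible by downsampling_rate, which is
    what the padding constant discussed above is meant to guarantee.
    """
    return input_side // downsampling_rate
```

So `expected_pred_size(224, 8)` gives 28, matching the 28×28 prediction reported against a 224×224 target; the loss can only be computed once both tensors agree on those last two dimensions.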
Any solution for this problem?
I have the exact same problem:
RuntimeError: input and target shapes do not match: input [3242340 x 1], target [1 x 1]. I set batch_size=256 in torch.utils.data.DataLoader() and get this result.
Getting the same error: RuntimeError: input and target shapes do not match: input [128 x 1], target [128] at /opt/conda/conda-bld/pytorch-cpu_1532576596369/work/aten/src/THNN/generic/MSECriterion.c:12
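The [128 x 1] vs [128] MSE mismatch above is a dimensionality problem rather than a model problem: MSE-style criteria require input and target shapes to be identical, so one of the tensors needs a reshape (e.g. `target.view(-1, 1)` in PyTorch). An illustrative, framework-free sketch of the check:

```python
def shapes_match(input_shape, target_shape):
    """MSE-style criteria require input and target shapes to be identical."""
    return tuple(input_shape) == tuple(target_shape)

def as_column(shape):
    """Shape after reshaping a 1-D tensor of length n to (n, 1),
    i.e. the effect of target.view(-1, 1) in PyTorch."""
    (n,) = shape
    return (n, 1)
```

Here `shapes_match((128, 1), (128,))` is False, which is the error above, while reshaping the target as a column, `shapes_match((128, 1), as_column((128,)))`, makes the criterion accept the pair.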
I cannot reproduce this error; I can run the following command successfully: @deeponcology @shahaniket @Macfa @xmengli999, can you show the commands you are running?
Hi! Does the combination of resnet50 and ppm work? When I run it I get a size-mismatch error. Thank you.
It looks like every combination except the default resnet50_dilated8/ppm_bilinear_deepsup leads to a mismatch in size between the input and the target of the loss function. I'm a bit mystified; I did not change any of the models. All I adapted was the number of labels (to 8, as one can see below).
Encoder: resnet50_dilated8. Decoder: upernet
RuntimeError: input and target batch or spatial sizes don't match: target [1 x 85 x 106], input [1 x 8 x 170 x 212] at /opt/conda/conda-bld/pytorch_1524582441669/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:24
Encoder: Resnet101. Decoder: ppm_bilinear_deepsup
return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce) RuntimeError: input and target batch or spatial sizes don't match: target [1 x 75 x 94], input [1 x 8 x 19 x 24] at /opt/conda/conda-bld/pytorch_1524582441669/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:24
Encoder: Resnet101. Decoder: Upernet
RuntimeError: input and target batch or spatial sizes don't match: target [1 x 85 x 106], input [1 x 8 x 170 x 212] at /opt/conda/conda-bld/pytorch_1524582441669/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:24
In cases where the program runs, the last two dimensions are consistent:
torch.Size([1, 8, 75, 94]) torch.Size([1, 75, 94])