Hello,
I tried to use my own dataset with image size 256x256 and 12 classes. I'm not using your VOC.py or dataloader code; I'm using my own dataloaders (shown below), but after some iterations I get this error:

```
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])
```

Progress bar at the time of the crash: `T (1) | Ls 2.51 Lu 0.00 Lw 0.00 PW 0.00 m1 0.04 m2 0.04|: 1%|▍ | 20/1894 [00:11<18:32, 1.68it/s]`
Dataloaders in my train.py file:

```python
num_classes = 12
```
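For context, this error is the standard failure mode of `BatchNorm` in training mode: when a batch of size 1 reaches a layer whose feature map has been pooled down to 1x1 (here, shape `[1, 512, 1, 1]`), there is only one value per channel and the batch statistics are undefined. A common cause is a training set whose length is not divisible by the batch size, so the final batch contains a single sample. A minimal sketch reproducing the error and the usual `drop_last=True` workaround (the tensor shapes and batch size below are assumptions, not taken from the issue):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# In training mode, BatchNorm needs more than one value per channel;
# a batch of 1 with a 1x1 feature map raises exactly this ValueError.
bn = nn.BatchNorm2d(512)
bn.train()
try:
    bn(torch.randn(1, 512, 1, 1))
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

# Workaround: drop the trailing incomplete batch so no size-1 batch
# ever reaches the network. With 33 samples and batch_size=16, the
# leftover batch of 1 is discarded.
dataset = TensorDataset(torch.randn(33, 3, 256, 256))
loader = DataLoader(dataset, batch_size=16, shuffle=True, drop_last=True)
print(sum(1 for _ in loader))  # 2 full batches, the single leftover is dropped
```

If dropping samples is undesirable, alternatives are picking a batch size that divides the dataset length, or replacing `BatchNorm` with a norm that does not depend on batch statistics (e.g. `GroupNorm`).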