I tried to run your code on RGB images and got the errors below. Can you help me sort out the problem, please?
transform = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))])
File "main.py", line 117, in <module>
    predict, reconstruct_img = net(img_batch, label_batch, train=True)
File "/home/user/pytorch_python3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
File "/media/user/DATA/New_CODE/Working/CapsNet_pytorch/lib/network.py", line 45, in forward
    output = self.conv1(x)
File "/home/user/pytorch_python3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
File "/home/user/pytorch_python3/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 254, in forward
    self.padding, self.dilation, self.groups)
File "/home/user/pytorch_python3/lib/python3.5/site-packages/torch/nn/functional.py", line 52, in conv2d
    return f(input, weight, bias)
RuntimeError: Need input.size[1] == 1 but got 3 instead.