When training, I set the crop size to 512×512 by default, but I have a problem with the pooling layer:

`lib/backbone/PSMNet/submodule.py`
`output_branch1 = self.branch1(output_skip)`
`RuntimeError: Given input size: (128x32x32). Calculated output size: (128x0x0). Output size is too small`

Did you set a larger size when training, or is there another problem?
Thank you
Hi, the crop size should be set considering the `aspect_ratio` flag. Crops are computed directly on the high-resolution images, so if you want 512×512 images as input to the network during training, you should set (for example) `crop_width=2048`, `crop_height=2048`, and `aspect_ratio=0.25`. You can find some training scripts in the script folder.
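To illustrate the relation between the three flags, here is a minimal sketch, assuming `aspect_ratio` is a plain rescale factor applied to the high-resolution crop before it is fed to the network (the helper name is hypothetical, not from the repo):

```python
# Hypothetical helper: the crop is taken on the high-resolution image,
# then rescaled by aspect_ratio before entering the network.
def network_input_size(crop_width: int, crop_height: int, aspect_ratio: float):
    """Return the (width, height) the network actually sees."""
    return int(crop_width * aspect_ratio), int(crop_height * aspect_ratio)

# The example from the answer above: a 2048x2048 crop with aspect_ratio=0.25
# yields a 512x512 network input.
print(network_input_size(2048, 2048, 0.25))  # (512, 512)

# Setting crop_width=crop_height=512 with the default aspect_ratio=0.25
# would instead produce a 128x128 input, whose downsampled feature maps
# become too small for the largest SPP pooling kernel, hence the error.
print(network_input_size(512, 512, 0.25))  # (128, 128)
```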