By the way, I noticed that the default batch size in the program is set to 1. But when I try to increase the batch size, I get:
RuntimeError: stack expects each tensor to be equal size, but got [3, 512, 512] at entry 0 and [3, 342, 512] at entry 1
It seems that the code keeps the aspect ratio of the original image when resizing, so images in the same batch end up with different sizes. Was your model trained with a batch size of 1? Thank you!
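For anyone hitting the same error: the stack fails because aspect-ratio-preserving resize produces tensors of different heights. One common workaround (not part of this repo, just a generic sketch using numpy arrays in CHW layout) is a collate function that zero-pads each image to the largest height/width in the batch before stacking:

```python
import numpy as np

def pad_collate(images):
    """Pad CHW images to the max H/W in the batch so they can be stacked.

    Zero-pads on the bottom/right. Note: a real pipeline would also need
    to pad or mask the corresponding targets consistently.
    """
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    batch = np.zeros((len(images), images[0].shape[0], max_h, max_w),
                     dtype=images[0].dtype)
    for i, img in enumerate(images):
        _, h, w = img.shape
        batch[i, :, :h, :w] = img
    return batch

# The two shapes from the error message above:
imgs = [np.ones((3, 512, 512)), np.ones((3, 342, 512))]
batch = pad_collate(imgs)
print(batch.shape)  # (2, 3, 512, 512)
```

With PyTorch, the same idea can be passed as a `collate_fn` to `DataLoader`, though whether the model itself can handle padded batches is a separate question (see the maintainer's reply below about batch size 1).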
@ntuLC Sorry for the late reply. Please take a look at the discussion in #21 (comment) and see if it solves this one. As in @13331112522's case, you might be missing the --high-res flag, since the pretrained weights are high-resolution ones. Also, please keep in mind that the dependencies of this project are quite old, so results may vary with recent releases of PyTorch/SRU.
Regarding the question about the batch size: we had to train our model with a batch size of 1, due to memory constraints and also because the dynamic convolution computations do not allow a batch dimension.
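If the concern is gradient noise from training with batch size 1, a standard workaround is gradient accumulation: run several per-sample forward/backward passes and apply one update. This is a generic, hypothetical sketch with a toy linear model and hand-written gradients, not code from this repository:

```python
import numpy as np

def train_step(w, samples, targets, lr=0.1):
    """Emulate an effective batch of len(samples) with per-sample passes.

    Each sample is processed alone (as a batch-size-1 model requires),
    and gradients are accumulated before a single parameter update.
    """
    grad = np.zeros_like(w)
    for x, y in zip(samples, targets):
        pred = w @ x                    # per-sample forward pass
        grad += 2.0 * (pred - y) * x    # accumulate d(MSE)/dw
    w = w - lr * grad / len(samples)    # one update per effective batch
    return w

# Toy usage: fit w to map unit vectors to targets [2, 3].
w = np.zeros(2)
samples = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
targets = [2.0, 3.0]
for _ in range(200):
    w = train_step(w, samples, targets)
print(w)  # approaches [2.0, 3.0]
```

In PyTorch the same pattern is calling `loss.backward()` per sample and `optimizer.step()` / `optimizer.zero_grad()` only every N samples; it trades wall-clock time for a smoother gradient estimate without requiring the model to support a batch dimension.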
Hi,
I just read your paper and I very much appreciate your well-organized code, but I cannot reproduce the results using the pre-trained model.
I am not quite sure where the problem is. Did I use the wrong command?