Hi,
great coding job. When I read this paper, I always have a question about testing on images whose size differs from the training data. Suppose we train on 224x224x3 images with a patch size of 16x16x3, so the sequence length is 196. But if I want to test the model on a 220x220x3 image (a size not divisible by 16), how can we handle this? Does it mean we need to randomly crop to a size that is divisible by 16, e.g. 208x208x3? If we do so, we might lose some information from the image, e.g. when the whole image contains just the face of a bear. CNNs do not have this problem.
@BaohaoLiao This paper purposely chose the crudest method to make a point. I think you will do well just to use a standard convolution with strides and padding to create the initial set of features. You can view this paper's patch-based method as essentially a convolution whose stride equals its kernel size.
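A minimal sketch of that equivalence, assuming PyTorch (the layer names and sizes here are illustrative, not this repo's API). The ViT patchify step is exactly an `nn.Conv2d` whose stride equals its kernel size, which is why it silently drops pixels on inputs that are not divisible by the patch size, while a padded, smaller-stride convolution covers the whole image:

```python
import torch
import torch.nn as nn

patch, dim = 16, 768

# ViT-style patch embedding: kernel_size == stride == patch size.
patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

x224 = torch.randn(1, 3, 224, 224)
print(patch_embed(x224).shape)  # [1, 768, 14, 14] -> 14*14 = 196 tokens

# On a 220x220 input the grid is floor(220 / 16) = 13, so the last
# 220 - 13*16 = 12 pixels along each side are simply discarded.
x220 = torch.randn(1, 3, 220, 220)
print(patch_embed(x220).shape)  # [1, 768, 13, 13] -> 169 tokens

# A standard convolution with padding and a smaller stride sees every
# pixel instead, at the cost of overlapping receptive fields.
conv_embed = nn.Conv2d(3, dim, kernel_size=16, stride=14, padding=1)
print(conv_embed(x220).shape)   # [1, 768, 15, 15] -> 225 tokens
```

Note that any change to the input size changes the token count, so the learned positional embeddings would also have to be interpolated to the new grid (the ViT paper does 2D interpolation for this when fine-tuning at higher resolution).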