Hi, I would like to know where the parameters in this transform come from. I don't get the same values if I take the mean and std over the images in the 'train+unlabeled' split.
https://github.com/untitled-ai/self_supervised/blob/6d14ca0402ecc13feda9b3a9fdc056fd1ac24473/utils.py#L127-L129
I tried the method described in torchvision for computing these values for ImageNet, on various subsets of train+unlabeled; the std gets closer, but the mean is still off. I am resizing first to 128 and then center cropping to 96, as you appear to do for validation images.
I will note that I am using TensorFlow, so the resizing method might be having an effect. However, I also tried PyTorch and still could not reproduce these values.
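For reference, here is a minimal sketch of the kind of computation I mean: per-channel mean and std accumulated over every pixel of a dataset, with the resize-128 / center-crop-96 pipeline applied first. The `channel_stats` helper and the STL-10 usage below are my own illustration, not code from this repo, and the exact statistics will depend on the resize interpolation used.

```python
import torch
from torch.utils.data import DataLoader

def channel_stats(loader):
    """Accumulate per-channel mean and std over all pixels.

    Expects batches of float tensors in [0, 1] with shape (B, C, H, W),
    as produced by torchvision's ToTensor().
    """
    n_pixels = 0
    s = torch.zeros(3)   # running sum of pixel values per channel
    s2 = torch.zeros(3)  # running sum of squared pixel values per channel
    for images, *_ in loader:
        b, c, h, w = images.shape
        n_pixels += b * h * w
        s += images.sum(dim=(0, 2, 3))
        s2 += (images ** 2).sum(dim=(0, 2, 3))
    mean = s / n_pixels
    std = (s2 / n_pixels - mean ** 2).sqrt()  # E[x^2] - (E[x])^2
    return mean, std

# Assumed usage on STL-10 (requires a download, so commented out here):
# from torchvision import datasets, transforms
# tfm = transforms.Compose([transforms.Resize(128),
#                           transforms.CenterCrop(96),
#                           transforms.ToTensor()])
# ds = datasets.STL10(root=".", split="train+unlabeled", transform=tfm)
# mean, std = channel_stats(DataLoader(ds, batch_size=256))
```

A two-pass computation (mean first, then variance against it) is numerically safer, but for values in [0, 1] the one-pass sum-of-squares form above is adequate.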
Thanks