I wonder how to prepare images in order to use the pre-trained model.
I assume the images should be aligned and resized to 112x112, right? Your code also supports 224x224, but which image size was the pre-trained net trained on? I guess 112x112, because when I try 224x224 I get a `RuntimeError: size mismatch`.
How exactly were the alignment and resizing done? You refer to the val data in https://github.com/ZhaoJ9014/face.evoLVe.PyTorch. However, the cropped CFP images (version "Align_112x112") linked in the Data Zoo there are cropped differently from what that repository's code produces. Could you give one or two examples of aligned and resized images used to train the pre-trained model?
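For comparison, here is roughly what I tried, based on the align utilities in that repository. I am assuming `detect_faces` from `align/detector.py` and `get_reference_facial_points` / `warp_and_crop_face` from `align/align_trans.py` are the intended entry points; please correct me if the pre-trained model was aligned differently:

```python
import numpy as np
from PIL import Image

# Assumed imports from face.evoLVe.PyTorch's align/ directory.
from detector import detect_faces
from align_trans import get_reference_facial_points, warp_and_crop_face

crop_size = 112
# Reference landmark positions for a square crop, scaled to crop_size.
reference = get_reference_facial_points(default_square=True) * (crop_size / 112.0)

img = Image.open("input.jpg").convert("RGB")
_, landmarks = detect_faces(img)  # MTCNN: 5 landmarks per detected face

# Reorder the first face's landmarks into [[x1, y1], ..., [x5, y5]].
facial5points = [[landmarks[0][j], landmarks[0][j + 5]] for j in range(5)]
warped = warp_and_crop_face(np.array(img), facial5points, reference,
                            crop_size=(crop_size, crop_size))
Image.fromarray(warped).save("aligned_112x112.jpg")
```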
The script utils.py contains the following preprocessing transform (applied after alignment and resizing):

```python
ccrop = transforms.Compose([
    de_preprocess,
    transforms.ToPILImage(),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
```

with

```python
def de_preprocess(tensor):
    return tensor * 0.5 + 0.5
```
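If I read this correctly, `ccrop` is (up to 8-bit quantization through PIL) the identity on an already-normalized tensor. A quick check I wrote myself to confirm this, not from the repo:

```python
import torch
from torchvision import transforms

de_preprocess = lambda t: t * 0.5 + 0.5
normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

x = torch.rand(3, 112, 112) * 2 - 1  # fake already-normalized tensor in [-1, 1]
y = normalize(transforms.ToTensor()(transforms.ToPILImage()(de_preprocess(x))))
print(torch.allclose(x, y, atol=0.01))  # True, up to uint8 quantization via PIL
```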
It seems that the validation data has already been normalized (mean, std); here you revert that back to an image tensor, but then apply the same normalization again. So I think that, when performing inference on a single (aligned and resized) image, I should just use

```python
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

input_tensor = preprocess(croppedImg).unsqueeze(0)
```
Is this assumption right?
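For concreteness, this is the full single-image inference path I have in mind. The `Backbone` class, its `input_size` argument, and the checkpoint filename are placeholders; I don't know the exact names your repo uses:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

# Hypothetical import; substitute the repo's actual backbone class.
from backbone import Backbone

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

# Placeholder constructor and checkpoint name.
model = Backbone(input_size=[112, 112])
model.load_state_dict(torch.load("backbone.pth", map_location="cpu"))
model.eval()

img = Image.open("aligned_112x112.jpg").convert("RGB")  # already aligned and resized
with torch.no_grad():
    embedding = model(preprocess(img).unsqueeze(0))  # e.g. shape (1, 512)
    embedding = F.normalize(embedding)               # L2-normalize for cosine comparison
```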
Your help is much appreciated.