How do I prepare my input data? I have a folder of pngs, what do I do with them? #138
The code is quite confusing, as it contains several different methods for alignment and normalization. After a lot of experiments, I found that the pre-trained IR-152 network works best if the images are preprocessed as described below.
This is essentially the procedure implemented in extract_feature_v1.py and extract_feature_v2.py, except that those scripts assume the images have already been cropped to the square face bounding box. You can use the following code:
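(The original snippet did not survive the thread's formatting; below is a minimal reconstruction of the alignment step, assuming the repo's own align.detector.detect_faces and align.align_trans utilities. The inpiclist variable and its file paths are placeholders.)

```python
import numpy as np
from PIL import Image

from align.detector import detect_faces
from align.align_trans import get_reference_facial_points, warp_and_crop_face

crop_size = 112  # input size expected by the pre-trained IR-152 backbone
reference = get_reference_facial_points(default_square=True)

inpiclist = ["/path/to/face1.png", "/path/to/face2.png"]  # placeholder paths

aligned_faces = []
for inpic in inpiclist:
    img = Image.open(inpic).convert("RGB")
    # MTCNN returns bounding boxes and 5-point landmarks per detected face
    boxes, landmarks = detect_faces(img)
    # take the first detected face; landmarks are laid out as [x1..x5, y1..y5]
    facial5points = [[landmarks[0][j], landmarks[0][j + 5]] for j in range(5)]
    warped_face = warp_and_crop_face(np.array(img), facial5points, reference,
                                     crop_size=(crop_size, crop_size))
    aligned_faces.append(Image.fromarray(warped_face))  # 112x112 aligned crop
```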
Thanks a lot for going through the effort of writing this answer. I've decided to use the Azure Face API rather than rolling my own, but your answer might be useful for someone else. I can't edit your answer; can you perhaps add formatting to the Python code?
@JoMe2704 I have a question - does …
No, the transform …
Hi~ I don't understand what extracting these features is for. I want to get the output label for an input image. What should I do?
Hi @JoMe2704 @changxinC, I have my data under D:/face.evoLVe.PyTorch/data/dataV1/. It would help a lot of people if you could explain how to get the correct dataset format for training. Help would be much appreciated, thank you.
While the network has been trained to perform classification of the subjects (persons) in the training set, this doesn't help you if you look at images of persons that aren't in the training set. In order to use the network on images of any person, you use the features (embeddings) of the final layer. These can be used to measure the similarity of faces. More precisely, the Euclidean distance between the embeddings of two face images is a measure of their dissimilarity. Depending on the specific variance of your images (face pose, facial expression, illumination, sharpness, ageing), you can set a threshold on the distance to decide whether the images depict the same person.
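To make that concrete, here is a small sketch; the threshold value of 1.2 is purely illustrative and has to be tuned on your own data:

```python
import numpy as np

def same_person(emb1, emb2, threshold=1.2):
    """Compare two face embeddings (e.g. the final-layer features produced
    by extract_feature_v1.py / extract_feature_v2.py). The 1.2 threshold is
    illustrative only and must be tuned for the variance of your images."""
    dist = np.linalg.norm(emb1 - emb2)  # Euclidean distance = dissimilarity
    return dist < threshold
```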
Sorry, I have no idea. I used my own images and scripts, and I haven't performed any training yet.
Hello, when I use the code you posted above, I set inpiclist to the path of my picture, /home/face.evoLVe/max/150.jpg, but when I run it I get errors that 'box' is not defined and 'backbone' is not defined. How can I solve this? Thank you!
This was just the relevant code for face alignment, not a complete script. You need to initialize and load the model first.
inpiclist should be a list of file paths, not a single file path.
There was also an error in my code, in the line right before the one where you get the error. So, here is the complete code:
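(As with the earlier snippet, the original code block was lost; the sketch below fills in the parts the fragment assumed, i.e. initializing and loading the IR-152 backbone before the alignment loop. The checkpoint filename and the paths in inpiclist are placeholders, and the transform follows the style of extract_feature_v1.py / extract_feature_v2.py.)

```python
import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image

from align.detector import detect_faces
from align.align_trans import get_reference_facial_points, warp_and_crop_face
from backbone.model_irse import IR_152

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1. Initialize and load the model (this was missing in the fragment above).
#    The checkpoint path is a placeholder for the pre-trained IR-152 weights.
backbone = IR_152([112, 112])
backbone.load_state_dict(torch.load("path/to/ir152_checkpoint.pth",
                                    map_location=device))
backbone.to(device).eval()

# 2. Resizing and normalization in the style of extract_feature_v1.py/v2.py
transform = transforms.Compose([
    transforms.Resize([128, 128]),
    transforms.CenterCrop([112, 112]),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

reference = get_reference_facial_points(default_square=True)
inpiclist = ["/path/to/face1.png", "/path/to/face2.png"]  # a LIST of paths

embeddings = []
with torch.no_grad():
    for inpic in inpiclist:
        img = Image.open(inpic).convert("RGB")
        # MTCNN face detection: bounding boxes and 5-point landmarks
        boxes, landmarks = detect_faces(img)
        facial5points = [[landmarks[0][j], landmarks[0][j + 5]]
                         for j in range(5)]
        warped = warp_and_crop_face(np.array(img), facial5points, reference,
                                    crop_size=(112, 112))
        tensor = transform(Image.fromarray(warped)).unsqueeze(0).to(device)
        emb = backbone(tensor).cpu().numpy()[0]
        emb /= np.linalg.norm(emb)  # L2-normalize the embedding
        embeddings.append(emb)
```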
I tried putting the images directly in the data/ directory as instructed on the README.md page, but this just leads to an error. Someone in this issue suggested using prepare_data.py from the following repository (which, by the way, also lives in this repo under backup/): https://github.com/TreB1eN/InsightFace_Pytorch#323-prepare-dataset--for-training

But that seems unable to work with just .pngs either; it seems to be looking for some sort of .rec file. Any advice? Thanks for your attention.