Validate with MegaFace Challenge dataset #275
Here are some sample images from MegaFace. It includes FaceScrub and FGNet for known subjects, and MegaFace for unknown subjects (distractors).
I have tested the latest model, 20170512, with MegaFace Challenge 1 (FaceScrub). These are the results compared with other methods (both the identification problem and verification with 1M distractors). The blue line (david) shows this repo's result. Please check this @davidsandberg
Hi @se7oluti0n,
Hi @se7oluti0n, I used a model trained with CASIA-WebFace and got the rank-1 identification rate: In your second graph, the blue line starts at a low identification rate (~0.92). That is probably because you didn't check the MTCNN-detected regions against the groundtruth for the FaceScrub dataset.
@davidsandberg @ugtony I'm a bit confused because my curve does not have the same shape as the others. Maybe there are some mistakes in my steps. With 10 distractors, the rank-1 accuracy should be as high as 0.98. The detailed steps:
@ugtony When I run the test with the aligned FaceScrub images from the MegaFace site, the result is not good, maybe because they are aligned differently than MTCNN does. So I did the alignment with MTCNN myself, but I did not check it against the groundtruth.
Thanks!
@se7oluti0n Thanks for this, a very clear explanation!
@se7oluti0n, if multiple faces are detected by align_dataset_mtcnn.py, you should choose the one that overlaps most with the groundtruth bbox. The bbox info is listed in the .json files, the same files you should fall back on when face detection fails; see the sketch below. Thanks for explaining how the figures are plotted. @davidsandberg In short, in my opinion, the left portion of se7oluti0n's identification curve is lower than the others' because some images in the probe set are wrong. The right portion of the curve doesn't decline as much because the probe/gallery sets are well aligned while the distractor set isn't.
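For illustration, a minimal sketch of that selection step. The .json field names (`bounding_box` with `x`/`y`/`width`/`height`) are assumptions about the FaceScrub metadata layout, so adjust them to match the actual files:

```python
import json

def iou(box_a, box_b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def pick_best_detection(detections, json_path):
    # Return the MTCNN detection overlapping most with the groundtruth bbox.
    with open(json_path) as f:
        gt = json.load(f)['bounding_box']  # assumed field name
    gt_box = (gt['x'], gt['y'], gt['x'] + gt['width'], gt['y'] + gt['height'])
    return max(detections, key=lambda det: iou(det, gt_box))
```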
@ugtony Thanks for the clear explanation. I wonder which loss (center loss or triplet loss) you used when retraining with CASIA-WebFace? Was it the cleaned version, casia-maxpy-clean? @davidsandberg The results look very competitive with other state-of-the-art methods, e.g. NTechLab's FindFace (http://fusion.kinja.com/this-face-recognition-company-is-causing-havoc-in-russi-1793856482).
I used center loss to train my classifier.
Hi @davidsandberg, I used the code shared by @se7oluti0n to plot my result (a model trained with facenet_train_classifier.py on CASIA-WebFace) on MegaFace Challenge 1. My result is plotted in red; please take a look. The performance is competitive when the false positive rate > 0.001 and the number of distractors < 10,000, but becomes worse than the nearby curves when the false positive rate is low and the number of distractors is high. Any idea why this happens? Maybe it's just because my training dataset is smaller than the others'.
@ugtony @davidsandberg
This is the result using the aligned FaceScrub images downloaded from the MegaFace challenge, but the distractors are not aligned (raw images are used). I will update the result after aligning the distractors.
It's good to know that TP increases to ~0.75 when FP = 10^-6. It is pretty close to the "CenterLoss" performance (76.51%) reported in the paper "A Light CNN for Deep Face Representation with Noisy Labels". I guess the identification rate would drop to somewhere around 0.65 after the distractor images are aligned.
@ugtony you are right. I guess that without the distractor alignment, the distractor images are not really facial images, so it is easier to distinguish probes from distractors. Latest results here. These results are very similar to those in the original center loss paper, A Discriminative Feature Learning Approach for Deep Face Recognition (~65% for identification and ~76% for verification).
@ugtony @se7oluti0n Hi, do you know how to define a new score model when evaluating a model on MegaFace? I want to use cosine similarity to measure the distance instead of Euclidean distance.
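One thing worth noting (this doesn't cover how to plug a custom scorer into the devkit): facenet embeddings are L2-normalized, and for unit vectors squared Euclidean distance and cosine similarity are monotonically related, so both measures yield the same rankings. A quick numerical check:

```python
import numpy as np

a = np.random.randn(128)
a /= np.linalg.norm(a)  # unit-length embedding
b = np.random.randn(128)
b /= np.linalg.norm(b)

cos_sim = np.dot(a, b)
sq_dist = np.sum((a - b) ** 2)

# For unit vectors: ||a - b||^2 = 2 - 2 * cos(a, b)
assert np.isclose(sq_dist, 2 - 2 * cos_sim)
```

So switching to cosine similarity should change only the score scale, not which matches rank highest.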
Hi, is it possible to evaluate a model with top-k accuracy on the MegaFace challenge, rather than only top-1? E.g. k = 3, 5, 10. Thanks!
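If the devkit doesn't expose this directly, rank-k identification can be computed from the raw similarity scores. A rough sketch, assuming L2-normalized embeddings stored as numpy arrays:

```python
import numpy as np

def rank_k_accuracy(probe_emb, probe_labels, gallery_emb, gallery_labels, k=5):
    # Fraction of probes whose true identity appears among the k most
    # similar gallery entries (embeddings assumed L2-normalized).
    sims = probe_emb @ gallery_emb.T            # cosine similarity matrix
    top_k = np.argsort(-sims, axis=1)[:, :k]    # indices of the k best matches
    hits = [probe_labels[i] in gallery_labels[top_k[i]]
            for i in range(len(probe_labels))]
    return float(np.mean(hits))
```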
@se7oluti0n Could you please update the link to your evaluation kit? The original link seems broken.
Hi,
Hello! May I have your MegaFace dataset? I have applied for it at http://megaface.cs.washington.edu/dataset/download.html, but there has been no reply. Thanks!
Hello, I have downloaded the MegaFace challenge data for evaluating face recognition on both the identification and verification problems.
I think testing results on LFW are not enough, for two reasons:
The MegaFace challenge provides a large dataset for testing face recognition with 1M distractors, using FaceScrub as the probe set. Here are some results: http://megaface.cs.washington.edu/results/facescrubresults.html
I have also written code to extract features and convert them to the MegaFace format.
But I worry that the input images from MegaFace are aligned differently than in the FaceNet code.
Could you help me check this code, @davidsandberg?
Here is the link:
https://gist.github.com/se7oluti0n/8ff161505721b6c4ab25ccfe7996fd1a
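For context, extracting embeddings with a frozen facenet model typically looks like the sketch below. The tensor names (`input:0`, `embeddings:0`, `phase_train:0`) follow this repo's convention, the model filename is a placeholder, and the conversion of the resulting vectors to MegaFace's binary matrix format is left to the gist/devkit:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

def load_frozen_graph(pb_path):
    # Load a frozen facenet model into the default graph.
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    load_frozen_graph('20170512-110547.pb')  # placeholder filename
    graph = tf.get_default_graph()
    images = graph.get_tensor_by_name('input:0')
    embeddings = graph.get_tensor_by_name('embeddings:0')
    phase_train = graph.get_tensor_by_name('phase_train:0')

    # One batch of prewhitened 160x160 RGB crops.
    batch = np.zeros((1, 160, 160, 3), dtype=np.float32)
    emb = sess.run(embeddings, feed_dict={images: batch, phase_train: False})
    # Rows of `emb` are the feature vectors to convert to the MegaFace format.
```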