Verifying the release caffemodel in the result of Megaface #20

Open
chichan01 opened this issue Aug 18, 2017 · 10 comments
Comments

@chichan01

Hi,
I have just evaluated your 20-layer CNN caffemodel on MegaFace.
The results are below:
Rank-1 identification accuracy with 1 million distractors, Set 1 (FaceScrub):
77.6892% (75.766% is the figure for their 68-layer model published on the official MegaFace homepage)
Rank-1 identification accuracy with 1 million distractors, Set 1 (age-invariant recognition at scale, FG-NET):
23.5023% (47.555% is the figure for their 68-layer model published on the official MegaFace homepage)

Is this correct? Has anyone else evaluated it as well?

@happynear

happynear commented Aug 23, 2017

I get 73.6888%. This number is still too high: according to the paper, a 64-layer ResNet gets 72.729%, and a 20-layer network should not do better than a 64-layer one.

I suspect this is because of the alignment. MegaFace is a strange dataset: the worse your alignment, the better your result can be. I tried my best to align the dataset, but I couldn't get accurate keypoints on many of the images, so for those I directly cropped a fixed region as the aligned face. This may make the distractors too weak to compete with the probes, which would inflate the performance......

@chichan01
Author

Hi,
Well, to be fair and practical, I only used the ground-truth landmarks provided in the JSON files to normalise the faces, and I think others will follow a similar procedure. The reason is that it is impossible to check the results of another facial-landmark algorithm on 1M faces: some images contain more than one face, and it is also very difficult to annotate landmarks manually when the algorithm fails.
Another issue I would point out is the age-invariant result: mine is significantly worse than the one in the released paper. I suspect the released 20-layer architecture is not the one they used for the MegaFace test, as one of the authors has mentioned in this forum.

@kalyo-zjl

@happynear
hi, do you mean you got 73.6888% using the released 20-layer model? Could you share the exact preprocessing code (MTCNN detection and alignment) used for the MegaFace and FaceScrub datasets? I can't get results comparable to yours on MegaFace; maybe there is something wrong in my preprocessing code.
BTW, what do you do if the face can't be detected using the algorithm?
Thanks in advance!

@happynear

happynear commented Aug 25, 2017

@kalyo-zjl ,
I have uploaded the code to my repository: https://github.com/happynear/FaceVerification/tree/master/dataset/Megaface . The detection and alignment logic is also described in the README file.
You may not use Matlab or my implementation of MTCNN, but you can still refer to my detection and alignment procedure.
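For readers following along: the alignment step described above is typically a least-squares similarity transform mapping the five MTCNN keypoints onto a fixed template. Below is a minimal numpy-only sketch using Umeyama's method; the template coordinates are the ones commonly used for 96×112 SphereFace-style crops, but treat them, and the function names, as illustrative assumptions rather than the exact code in the repository.

```python
import numpy as np

# Canonical 5-point template for a 96x112 crop (values as commonly used in
# SphereFace-style alignment code; an assumption, verify against your repo).
TEMPLATE_96x112 = np.array([
    [30.2946, 51.6963],  # left eye
    [65.5318, 51.5014],  # right eye
    [48.0252, 71.7366],  # nose tip
    [33.5493, 92.3655],  # left mouth corner
    [62.7299, 92.2041],  # right mouth corner
])

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src points onto dst points, via Umeyama's method."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of dst, src
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # reflection correction
    D = np.diag([1.0, d])
    R = U @ D @ Vt                            # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    T = np.eye(3)
    T[:2, :2] = scale * R
    T[:2, 2] = dst_mean - scale * R @ src_mean
    return T
```

The top two rows of the returned 3×3 matrix can then be fed to cv2.warpAffine (with the detected keypoints as `src` and the template as `dst`) to produce the aligned crop.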

@kalyo-zjl

@happynear ,
Thank you, I will check it. I will follow your detection and alignment procedure, using the Python version of MTCNN; the results should be similar.

@Jianf-Wang

@happynear
Hi, I have a question: how do you evaluate SphereFace on MegaFace?
SphereFace uses cosine similarity to measure the distance.

@kalyo-zjl

You can L2-normalize the embedding features first. When X and Y are normalized, ‖X − Y‖² = 2 − 2·cos(X, Y), so Euclidean distance and cosine similarity produce exactly the same ranking.
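This identity is easy to check numerically. A small sketch showing that, after L2 normalization, ranking a gallery by squared Euclidean distance and by cosine similarity agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)                 # probe feature
gallery = rng.normal(size=(100, 512))    # gallery features

# L2-normalize so that squared Euclidean distance = 2 - 2*cos
x /= np.linalg.norm(x)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

cos = gallery @ x                        # cosine similarity to each gallery item
sq_dist = ((gallery - x) ** 2).sum(1)    # squared Euclidean distance

assert np.allclose(sq_dist, 2 - 2 * cos)
# Sorting by smallest distance gives the same order as largest similarity
assert (np.argsort(sq_dist) == np.argsort(-cos)).all()
```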

@Jianf-Wang

@kalyo-zjl em...... you mean that once I save the normalized embeddings to '.bin' files, I can directly run "run_experiment.py" and get correct results?

@yao5461

yao5461 commented Nov 12, 2017

@chichan01 @happynear @kalyo-zjl Hi all, I know that ‖X − Y‖² = 2 − 2·cos(X, Y). But how do I define a new score model when evaluating my model on MegaFace?
run_experiment.py -s 1000000 -p ../templatelists/facescrub_features_list.json -m ???
Thanks!
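One common workaround, rather than defining a custom score model, is to L2-normalize every feature before writing it out, so that the devkit's default Euclidean scoring ranks candidates exactly as cosine similarity would. A sketch follows; note that `save_feature_bin` is a hypothetical stand-in, and the 16-byte (rows, cols, element-size, type-tag) header is an assumption about the devkit's .bin matrix format that you should verify against your own reader/writer:

```python
import struct
import numpy as np

def l2_normalize(feat, eps=1e-10):
    """Scale a feature vector to unit length."""
    feat = np.asarray(feat, dtype=np.float32)
    return feat / (np.linalg.norm(feat) + eps)

def save_feature_bin(path, feat):
    """Hypothetical writer: 4-int header (rows, cols, element size in bytes,
    type tag) followed by raw float32 data. Check this layout against the
    actual MegaFace devkit matrix format before using it."""
    feat = np.asarray(feat, dtype=np.float32).reshape(1, -1)
    with open(path, "wb") as f:
        f.write(struct.pack("4i", feat.shape[0], feat.shape[1], 4, 5))
        f.write(feat.tobytes())
```

With features saved this way, the devkit's built-in Euclidean distance needs no custom score model to reproduce cosine-similarity rankings.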

@ghost

ghost commented Sep 20, 2019

Hi,
Do we need to normalize the input images of Facescrub and Megaface before feeding them to the model?
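For the released SphereFace Caffe model, yes: as far as I can tell from the training prototxt (verify against your own deploy.prototxt, this is an assumption), pixels are mean-subtracted by 127.5 and scaled by 0.0078125, mapping them roughly into [−1, 1]. A sketch:

```python
import numpy as np

def preprocess(img_uint8):
    """Map a uint8 HxWxC face crop to ~[-1, 1] and reorder to CxHxW for
    Caffe. mean_value=127.5 and scale=0.0078125 match the SphereFace
    training prototxt (an assumption; check your deploy.prototxt)."""
    x = (img_uint8.astype(np.float32) - 127.5) * 0.0078125
    return np.transpose(x, (2, 0, 1))  # HWC -> CHW
```

The same transform must be applied identically to FaceScrub probes and MegaFace distractors, since mismatched preprocessing between the two sets would distort the rankings.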
