Verifying the released caffemodel on the MegaFace results #20
Comments
I get 73.6888%. This number is still too high. According to the paper, a 64-layer ResNet gets 72.729%, and a 20-layer network should not do better than a 64-layer one. I guess this is because of the alignment. MegaFace is a strange dataset: if you do worse on alignment, you can get a better result. I have tried my best to align the dataset, but I can't get accurate keypoints on lots of images, so on those images I directly crop a fixed region as the aligned face. This may make the distractors too weak to compete with the probes, which would inflate the performance...
Hi,
@happynear
@kalyo-zjl,
@happynear,
@happynear
You can normalize the embedding features first. When X and Y are L2-normalized, d(X, Y)^2 = 2 - 2 cos(X, Y), so Euclidean distance and cosine similarity have the same effect on the ranking.
@kalyo-zjl hm... you mean that once I save the normalized embeddings to a '.bin' file, I can directly
@chichan01 @happynear @kalyo-zjl Hi all, I know that d(X, Y)^2 = 2 - 2 cos(X, Y). But how should I define a new score model when evaluating my model on MegaFace?
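For what it's worth, the identity being discussed means that on L2-normalized features, squared Euclidean distance and cosine similarity are monotonically related, so either one produces the same MegaFace ranking. A minimal NumPy sketch (random vectors stand in for real embeddings; the variable names are illustrative, not part of the MegaFace toolkit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stand-in face embeddings (random; real ones would come from the CNN).
x = rng.normal(size=512)
y = rng.normal(size=512)

# L2-normalize so each embedding lies on the unit hypersphere.
x /= np.linalg.norm(x)
y /= np.linalg.norm(y)

cos_sim = float(np.dot(x, y))        # cosine similarity
sq_l2 = float(np.sum((x - y) ** 2))  # squared Euclidean distance

# For unit vectors: ||x - y||^2 = 2 - 2*cos(x, y)
assert abs(sq_l2 - (2.0 - 2.0 * cos_sim)) < 1e-9

# So any monotonically decreasing transform of the distance,
# e.g. score = 2 - d^2, ranks pairs identically to cosine similarity.
score = 2.0 - sq_l2
```

Because only the ordering of scores matters for rank-1 identification, saving normalized embeddings and letting the evaluator use plain L2 distance should be sufficient.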
Hi,
I have just evaluated your 20-layer CNN caffemodel on MegaFace.
The results are below:
Rank-1 identification accuracy with 1 million distractors, Set 1 (FaceScrub):
77.6892% (75.766% for their 68-layer model, as published on the official MegaFace homepage)
Rank-1 identification accuracy with 1 million distractors, Set 1 (testing age-invariant recognition at scale, FGNet):
23.5023% (47.555% for their 68-layer model, as published on the official MegaFace homepage)
Is this right? Has anyone else evaluated it as well?