Different results on IJB-C #21

Open
KamalM8 opened this issue Jul 7, 2020 · 0 comments

Hi Huang,

I really liked your new CurricularFace paper, especially the idea of automatic curriculum learning. I tried to replicate your results on the IJB-C dataset using the IR_101 pretrained model you supplied. First, I aligned the faces using the landmarks provided in the ArcFace repository. I then normalized the aligned faces the same way evaluate.py does for LFW and the other validation sets, and ran the same ArcFace evaluation code. However, the results I get are much lower than those reported in the paper. Here are my results:
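For reference, here is roughly what my alignment and normalization look like. This is a sketch, not the exact script I ran: the 5-point destination template is the one I took from the ArcFace repo for 112x112 crops, and the normalization mirrors what I believe evaluate.py does (pixels mapped to roughly [-1, 1]).

```python
# Sketch of my preprocessing -- not the exact script I ran.
import cv2
import numpy as np
from skimage import transform as trans

# Reference landmarks for 112x112 alignment, taken from the ArcFace repo.
ARCFACE_DST = np.array(
    [[38.2946, 51.6963],   # left eye
     [73.5318, 51.5014],   # right eye
     [56.0252, 71.7366],   # nose tip
     [41.5493, 92.3655],   # left mouth corner
     [70.7299, 92.2041]],  # right mouth corner
    dtype=np.float32)

def align_face(img_bgr, landmark5):
    """Similarity-warp a face to the 112x112 ArcFace template.

    img_bgr: HxWx3 uint8 image as read by cv2.imread.
    landmark5: (5, 2) array of detected landmark coordinates.
    """
    tform = trans.SimilarityTransform()
    tform.estimate(np.asarray(landmark5, dtype=np.float32), ARCFACE_DST)
    M = tform.params[0:2, :]  # 2x3 affine part of the 3x3 similarity
    return cv2.warpAffine(img_bgr, M, (112, 112), borderValue=0.0)

def preprocess(aligned_bgr):
    """BGR uint8 crop -> normalized CHW float32 network input."""
    rgb = cv2.cvtColor(aligned_bgr, cv2.COLOR_BGR2RGB)
    x = (rgb.astype(np.float32) - 127.5) / 128.0  # maps to ~[-1, 1]
    return np.transpose(x, (2, 0, 1))  # HWC -> CHW for PyTorch
```

One detail I may have gotten wrong is the flipped-image fusion: if I read evaluate.py correctly, it also adds the embedding of the horizontal flip before comparison, so please correct me if the sketch above misses a step you rely on.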

TAR @ FAR (1:1 verification protocol, TAR in %):
1e-6 --> 6.34
1e-5 --> 15.76
1e-4 --> 55.29
1e-3 --> 75.83
1e-2 --> 84.34
1e-1 --> 91.17

These numbers look far too low to me, so I suspect a bug somewhere in my pipeline. Is the code you used to evaluate IJB-C the same as evaluate.py?
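For completeness, this is roughly how I compute the TAR @ FAR values above from the raw verification scores. It is a sketch: `scores` and `labels` are placeholders for the per-pair cosine similarities and ground-truth match flags from the 1:1 protocol, and I believe the official ArcFace IJB-C script reads the ROC off in essentially the same way.

```python
# Sketch of how I turn pairwise scores into the TAR @ FAR table above.
# `labels` / `scores`: one 0/1 same-identity flag and one cosine
# similarity per template pair in the 1:1 verification protocol.
import numpy as np
from sklearn.metrics import roc_curve

def tar_at_far(labels, scores,
               far_targets=(1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1)):
    fpr, tpr, _ = roc_curve(labels, scores)  # fpr is sorted ascending
    table = {}
    for far in far_targets:
        # Highest operating point whose FPR does not exceed the target FAR.
        idx = np.searchsorted(fpr, far, side='right') - 1
        table[far] = tpr[max(idx, 0)] * 100.0  # report TAR in percent
    return table
```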
