Hi Huang,
I really liked your new CurricularFace paper, especially the idea of automatic curriculum learning. I tried to replicate your results on the IJB-C dataset using the IR_101 pretrained model you supplied. First, I aligned the faces using the landmarks provided in the ArcFace repository. I then normalized these faces the same way evaluate.py does for LFW and the other datasets, and ran the same ArcFace evaluation code. However, the results I am getting are much lower than what is reported in the paper. Here are my results:
TAR @ FAR (1:1 verification protocol):

| FAR  | TAR (%) |
|------|---------|
| 1e-6 | 6.34    |
| 1e-5 | 15.76   |
| 1e-4 | 55.29   |
| 1e-3 | 75.83   |
| 1e-2 | 84.34   |
| 1e-1 | 91.17   |
These numbers look buggy to me. Is the code you used to evaluate IJB-C the same as evaluate.py? I've included sketches of my preprocessing and my TAR@FAR computation below in case the discrepancy comes from one of those steps.
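For reference, here is roughly what my preprocessing looks like. This is a minimal sketch under my own assumptions: the standard ArcFace 112x112 alignment template, and the `(x - 127.5) / 127.5` normalization that evaluate.py appears to apply to the LFW-style bins. The function name `preprocess` is just illustrative.

```python
import cv2
import numpy as np
import torch

def preprocess(aligned_bgr_face: np.ndarray) -> torch.Tensor:
    """Turn a 112x112 ArcFace-aligned BGR crop into a model-ready tensor.

    Mirrors what I believe evaluate.py does for the LFW-style bins:
    BGR -> RGB, HWC -> CHW, and map pixel values from [0, 255] to [-1, 1].
    """
    img = cv2.cvtColor(aligned_bgr_face, cv2.COLOR_BGR2RGB)
    img = img.transpose(2, 0, 1).astype(np.float32)  # HWC -> CHW
    img = (img - 127.5) / 127.5                      # [0, 255] -> [-1, 1]
    return torch.from_numpy(img).unsqueeze(0)        # add batch dimension
```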
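And this is how I read TAR off the ROC at the fixed FAR points above. It's a generic sketch using scikit-learn rather than the exact ArcFace IJB-C script, so it may differ from your setup; here `labels` are 1 for genuine pairs and 0 for impostor pairs, and `scores` are cosine similarities between template features.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tar_at_far(labels, scores, far_points=(1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1)):
    """Interpolate the verification ROC to get TAR at fixed FAR values.

    labels: 1 for genuine (same-identity) pairs, 0 for impostor pairs.
    scores: similarity scores (higher = more likely genuine).
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    # fpr from roc_curve is sorted ascending, so np.interp is safe here
    return {far: float(np.interp(far, fpr, tpr)) for far in far_points}
```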