
Use sphereface code to test your model #2

Closed
deepage opened this issue Mar 21, 2018 · 2 comments

Comments

@deepage

deepage commented Mar 21, 2018

Thank you for sharing this nice work.
I downloaded your res-27 model and tested it on LFW, but only got 99.48%; maybe something went wrong when I aligned the photos.
I use MTCNN to detect all photos, and align them with this template:
coord5point = [ 46.29460144, 59.69630051;
81.53179932, 59.50139999;
64.02519989, 79.73660278;
49.54930115, 100.3655014 ;
78.72990417, 100.20410156];
then crop to 128x128.
Am I right? Or is the problem here?
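
(For readers trying to reproduce this step: below is a minimal sketch of what such a 5-point alignment typically looks like, assuming OpenCV and NumPy are available. The template is the coord5point matrix above; this is a generic similarity-transform alignment, not necessarily what the repo's util.py does.)

```python
# Minimal sketch of 5-point similarity alignment to a 128x128 crop.
# Assumes OpenCV + NumPy; this is NOT the repo's util.py implementation.
import cv2
import numpy as np

# Target landmarks for a 128x128 crop, in the order MTCNN returns them:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE_5PTS = np.float32([
    [46.29460144,  59.69630051],
    [81.53179932,  59.50139999],
    [64.02519989,  79.73660278],
    [49.54930115, 100.3655014 ],
    [78.72990417, 100.20410156],
])

def align_face(image, landmarks, size=128):
    """Warp `image` so its 5 MTCNN landmarks match TEMPLATE_5PTS, then crop to size x size."""
    src = np.float32(landmarks).reshape(5, 2)
    # Similarity transform (rotation + uniform scale + translation) fit to the 5 point pairs.
    M, _ = cv2.estimateAffinePartial2D(src, TEMPLATE_5PTS, method=cv2.LMEDS)
    return cv2.warpAffine(image, M, (size, size))
```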
In the end, I use the evaluation code from sphereface and get an accuracy of 99.48% (with image flip); without image flip it gets 99.43%.
Can you give me some advice on what I should do?
BTW, I also tried your python code norml2_sim, but got the same result.
Thanks very much!
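
(For context on the with/without-flip numbers: the usual recipe is the sketch below, which sums the features of the image and its horizontal flip and then compares L2-normalized vectors with a dot product. `extract_feature` is a hypothetical placeholder for the model's forward pass; this is only a guess at what norml2_sim computes, not the repo's actual code.)

```python
# Sketch of flip-augmented, L2-normalized cosine similarity.
# `extract_feature` is a hypothetical placeholder for the network forward pass;
# this is the common recipe, not necessarily the repo's norml2_sim.
import numpy as np

def flip_augmented_feature(extract_feature, image):
    """Sum the features of the image and its horizontally flipped copy."""
    return extract_feature(image) + extract_feature(image[:, ::-1])

def cosine_similarity_l2(feat_a, feat_b):
    """Cosine similarity between two L2-normalized feature vectors."""
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return float(np.dot(a, b))
```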

@huangyangyu
Owner

I guess the alignment method may be different. How do you align the face using the key points? We share the alignment method in the util.py file. You can try our test script, which uses aligned faces.

@thuhuwei
Collaborator

thuhuwei commented Mar 21, 2018

The alignment is introduced in https://github.com/AlfredXiangWu/face_verification_experiment; we use the same method (but with RGB images).
