face_template values #14

Closed
glennois opened this issue Feb 18, 2022 · 5 comments

Comments

@glennois

How do I calculate these numbers?

class FaceRestoreHelper(object): ...
            self.face_template = np.array([[192.98138, 239.94708], [318.90277, 240.1936], [256.63416, 314.01935],
                                           [201.26117, 371.41043], [313.08905, 371.15118]])

I want to calculate them for LFW and other datasets. I'm very new, so I would appreciate it if you could show me how to calculate them. 🍵

Thank you so much for your amazing library!

@woctezuma
Contributor

woctezuma commented Feb 18, 2022

These look like coordinates of five points.

See the comment above the line which you quoted:

standard 5 landmarks for FFHQ faces with 512 x 512

if self.template_3points:
    self.face_template = np.array([[192, 240], [319, 240], [257, 371]])
else:
    # standard 5 landmarks for FFHQ faces with 512 x 512
    self.face_template = np.array([[192.98138, 239.94708], [318.90277, 240.1936], [256.63416, 314.01935],
                                   [201.26117, 371.41043], [313.08905, 371.15118]])

See the 5 colored dots on this image:

[image: example face with the five landmarks shown as colored dots]

https://github.com/biubug6/Pytorch_Retinaface
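
If you want to build the same kind of template for another dataset such as LFW, one possible approach (a rough sketch, not the code this repository uses) is to run a 5-point landmark detector on faces that are already cropped and resized to 512 x 512, then average the coordinates over the whole set. Here, detect_five_landmarks is a hypothetical helper standing in for whatever detector you pick (e.g. RetinaFace):

import numpy as np

def build_face_template(aligned_images, detect_five_landmarks):
    """Average 5-point landmarks over a set of 512 x 512 aligned face images."""
    all_landmarks = []
    for img in aligned_images:
        # The helper is assumed to return a (5, 2) array of (x, y) points,
        # or None if no face was found in the image.
        landmarks = detect_five_landmarks(img)
        if landmarks is not None:
            all_landmarks.append(np.asarray(landmarks, dtype=np.float64))
    # Mean position of each of the five points across the dataset.
    return np.stack(all_landmarks).mean(axis=0)

The resulting (5, 2) array would play the same role as the face_template values quoted above.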

@glennois
Author

Thanks Woctezuma! I see it says it's for FFHQ. Can I use these with LFW?

I'm still learning... thank you.

@glennois
Author

That comment got me worried 🤣

@woctezuma
Contributor

woctezuma commented Feb 18, 2022

For FFHQ, the dlib library is used, so face alignment is based on 68 landmarks. See: http://dlib.net/face_landmark_detection_ex.cpp.html

This example program shows how to find frontal human faces in an image and
estimate their pose. The pose takes the form of 68 landmarks. These are
points on the face such as the corners of the mouth, along the eyebrows, on
the eyes, and so forth.

The face detector we use is made using the classic Histogram of Oriented
Gradients (HOG) feature combined with a linear classifier, an image pyramid,
and sliding window detection scheme. The pose estimator was created by
using dlib's implementation of the paper:
One Millisecond Face Alignment with an Ensemble of Regression Trees by
Vahid Kazemi and Josephine Sullivan, CVPR 2014
and was trained on the iBUG 300-W face landmark dataset (see
https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/):
C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
300 faces In-the-wild challenge: Database and results.
Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
You can get the trained model file from:
http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2.

So the use of 5 landmarks is not specific to FFHQ at all. I think I first saw it in RetinaFace (linked above).
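
If you already have dlib's 68 landmarks for your images, you can also derive the usual five points from them. Below is a minimal sketch, assuming the standard iBUG 300-W indexing; this is not code from this repository, and the exact 5-point convention used by RetinaFace-style detectors may differ slightly:

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Model file from http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def five_points_from_68(image):
    """Detect a face, predict 68 landmarks, and reduce them to 5 points."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)],
                   dtype=np.float64)
    left_eye = pts[36:42].mean(axis=0)   # indices 36-41: left eye contour
    right_eye = pts[42:48].mean(axis=0)  # indices 42-47: right eye contour
    nose_tip = pts[30]                   # index 30: nose tip
    mouth_left = pts[48]                 # index 48: left mouth corner
    mouth_right = pts[54]                # index 54: right mouth corner
    return np.stack([left_eye, right_eye, nose_tip, mouth_left, mouth_right])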

@glennois
Author

Perfect! Thank you so much for answering all my questions!
