
Some questions #1

Closed
wwdok opened this issue Mar 15, 2022 · 3 comments

wwdok commented Mar 15, 2022

Hi @amitt1236, your implementation of gaze estimation is the most accurate I have seen so far! After studying the code, I have some questions:

  1. I ran the code on my computer and output the result to the video below:
    https://streamja.com/49vwG
    As shown in the video, my thicker red line jitters a lot. Do you apply some smoothing strategy to the gaze point?

  2. Your code provides some standard 3D face model points such as model_points and Eye_ball_center_left. How did you get these coordinates? I am wondering what happens if I change the coordinate system, e.g. rotate it about the z axis:
    [image: face model rotated about the z axis in Blender]
    (Actually, this is the first time I have used Blender.)
    How can I get the new coordinates of these points? (See the sketch below for what I mean.)
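For reference, a minimal sketch of the rotation I have in mind, assuming a standard right-handed rotation about the z axis (the function name and angle are just illustrative):

import numpy as np

def rotate_about_z(points, angle_deg):
    # Rotate an (N, 3) array of points about the z axis by angle_deg degrees.
    theta = np.radians(angle_deg)
    rz = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    return points @ rz.T

# e.g. new_model_points = rotate_about_z(model_points, 90)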

@amitt1236 (Owner) commented:

  1. I didn't use any smoothing, but if you combine the data from the two eyes you will get better accuracy; you can look at my vision_physiology repo for a simple example (see the sketch after this list).

  2. The nose is the reference point (0, 0, 0), and the z axis is the depth. Try looking at my points for reference. And cool thinking using Blender.
    The eyeball center comes from eye physiology: the distance from the cornea to the eyeball center is practically the same in most humans, and there are some great medical papers about it.
    You can try using different sets of points with different scaling.
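A minimal sketch of combining the two eyes, assuming you already have one gaze point per eye; the simple averaging and the optional exponential smoothing here are illustrative, not what this repo ships:

import numpy as np

def combined_gaze(left_point, right_point, prev=None, alpha=0.3):
    # Average the per-eye gaze points, then optionally smooth with an
    # exponential moving average. alpha is a hypothetical default weight
    # for the new measurement; smaller alpha means heavier smoothing.
    current = (np.asarray(left_point, float) + np.asarray(right_point, float)) / 2.0
    if prev is None:
        return current
    return alpha * current + (1.0 - alpha) * np.asarray(prev, float)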

Good luck


wwdok commented Mar 16, 2022

  1. Thanks for sharing!
  2. In the code, we use the following (x, y, z) coordinates for 6 landmarks of a standard human face:
import numpy as np

model_points = np.array([
        (0.0, 0.0, 0.0),  # Nose tip
        (0, -63.6, -12.5),  # Chin
        (-43.3, 32.7, -26),  # Left eye, left corner
        (43.3, 32.7, -26),  # Right eye, right corner
        (-28.9, -28.9, -24.1),  # Left mouth corner
        (28.9, -28.9, -24.1)  # Right mouth corner
    ])

Take the chin, for example: the ratio between its y and z coordinates is 63.6/12.5 = 5.088.
But when I measure the chin coordinate in Blender, the ratio is 8.9402/3.3221 = 2.691:
[image: chin measurement in Blender]
Then I verified the proportion in the 2D plane with a square, and it seems the ratio is closer to 2.691:
[image: 2D proportion check with a square]
The Blender model I used is
canonical_face_mesh.zip
I am wondering whether these coordinates were measured on a different 3D face model. Where do the coordinates in model_points come from? (A quick ratio check in code follows below.)
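For concreteness, the ratio check above in code; the signs on the Blender measurement are assumed, but only the magnitudes matter for the ratio:

# y/z proportion of the chin in the two models
repo_chin = (0.0, -63.6, -12.5)          # from model_points above
blender_chin = (0.0, -8.9402, -3.3221)   # measured in Blender (signs assumed)

print(abs(repo_chin[1] / repo_chin[2]))        # 5.088
print(abs(blender_chin[1] / blender_chin[2]))  # ~2.691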

@amitt1236 (Owner) commented:

The proportion to a single point can change if you use a different model; the important part is that all the proportions change in the same way. Try your points: start with only the head-pose calculation and continue from there.
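A minimal head-pose sketch using OpenCV's solvePnP, assuming image_points are the 2D pixel landmarks in the same order as model_points, and a pinhole camera matrix approximated from the frame size (the function name and the focal-length approximation are illustrative):

import numpy as np
import cv2

def head_pose(model_points, image_points, frame_w, frame_h):
    # Approximate a pinhole camera: focal length ~ frame width,
    # principal point at the frame center, no lens distortion.
    focal = frame_w
    camera_matrix = np.array([
        [focal, 0, frame_w / 2],
        [0, focal, frame_h / 2],
        [0, 0, 1],
    ], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    return ok, rvec, tvec  # head rotation and translation vectors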
