
How do you crop the image which includes the hand after palm is detected? #26

Closed
pharrellyhy opened this issue Aug 20, 2019 · 2 comments
Labels: type:research Model specific questions

pharrellyhy commented Aug 20, 2019

Hi,

Nice work on hand tracking and gesture recognition! I do have a question, as the title says: after the palm is detected, how do you guarantee that the cropped image contains all the keypoints of the hand? If I'm not mistaken, this gif (red box) misses some fingertips. Thanks!

@fanzhanggoogle

Hi,
Thanks for the questions.

  1. For cropping the correct image region, we first rotate the image so that the vector connecting the wrist and the MCP keypoint is vertical. Then we expand the palm square in each direction by a fairly large scale factor, based on metrics from our experiments (see the sketch after this list). In addition, our model is trained with heavy augmentation to capture the variance of hand location within the cropped region. You can find the implementation details in the MediaPipe hand tracking graph.
  2. You are absolutely correct that the details of the hand are not predicted well. This is mostly because the model doesn't handle motion blur well enough, and of course the model itself is not perfect yet. We keep working on improving the model quality in various aspects. Your feedback is very much appreciated!
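
A minimal sketch of the cropping step in point 1, using OpenCV and NumPy. The choice of keypoints (wrist and middle-finger MCP), the helper name `crop_hand`, and the expansion factor of 2.6 are assumptions for illustration, not necessarily the exact values or structure used in the MediaPipe graph:

```python
import cv2
import numpy as np


def crop_hand(image, wrist, middle_mcp, palm_box, scale=2.6):
    """Rotation-normalized hand crop from a palm detection (illustrative only).

    wrist, middle_mcp: (x, y) palm keypoints in pixels; which detector
        keypoints these correspond to is an assumption here.
    palm_box: (cx, cy, w, h) palm square from the detector, in pixels.
    scale: expansion factor for the palm square; 2.6 is an assumed value.
    """
    cx, cy, w, h = palm_box

    # Rotation angle (degrees) that makes the wrist -> MCP vector point
    # straight up in image coordinates (where the y axis points down).
    dx = middle_mcp[0] - wrist[0]
    dy = middle_mcp[1] - wrist[1]
    angle_deg = np.degrees(np.arctan2(dx, -dy))

    # Rotate the whole image around the palm center; the center stays fixed.
    rot = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))

    # Expand the palm square so the fingertips are likely to fall inside it.
    side = int(max(w, h) * scale)
    x0 = max(int(cx - side / 2), 0)
    y0 = max(int(cy - side / 2), 0)
    x1 = min(x0 + side, rotated.shape[1])
    y1 = min(y0 + side, rotated.shape[0])

    return rotated[y0:y1, x0:x1]
```

Rotating the full frame as above is the simplest way to show the geometry; a real-time pipeline would more likely crop the rotated rectangle directly, which is equivalent but cheaper.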

@pharrellyhy (Author)

That's very helpful. Looking forward to your future work. Thank you!
