
Relation between 3D joint locations and camera space. #5

Closed
erezposner opened this issue Aug 13, 2019 · 6 comments

Comments

@erezposner

Hi, I would like to understand the relation between the MANO 3D joint and vertex locations and the camera space.

Let's assume I capture an RGB image with a calibrated camera and use "Learning joint reconstruction of hands and manipulated objects" to estimate the MANO 3D joints. Are the 3D joints in normalized camera space?

Is the MANO estimation oriented towards the camera?
Thank you

@hassony2
Owner

Hi @erezposner

The MANO 3D joints and vertices are predicted aligned with the camera view, but root-centered.
This means that:

  • If you keep the first two coordinates of the predicted 3D joints and assume an orthographic camera model, an additional scaling and translation is still needed to go back to image space (see the sketch after this list). So yes, the MANO estimation is oriented towards the camera.
  • After the paper was submitted, I ran some additional experiments to also predict this scale and translation, but for hands only (no objects). If you run `python webcam_demo.py --resume release_models/hands_only/checkpoint.pth.tar` you will see the predicted joints reprojected onto the image.
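
For reference, here is a minimal sketch of the orthographic reprojection described above. The names `joints3d`, `scale`, and `trans` are illustrative placeholders for the root-centered predictions and the predicted scale/translation, not the actual variable names in the repository:

```python
import numpy as np

# Hypothetical inputs (names are illustrative, not from the repo):
# root-centered 3D joints aligned with the camera view, plus a predicted
# scale and 2D translation.
joints3d = np.random.randn(21, 3)  # (num_joints, 3), root-centered
scale = 150.0                      # pixels per model unit
trans = np.array([320.0, 240.0])   # image-space translation in pixels

# Orthographic projection: drop the depth coordinate, then scale and
# translate into image space.
joints2d = scale * joints3d[:, :2] + trans
print(joints2d.shape)  # (21, 2)
```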

I hope this answers your question!

Best,

Yana

@erezposner
Author

Are the scaling and translation estimated within the network, or obtained with a closed-form solution?
Could you kindly point me to this part of the code?
Thank you

@erezposner
Author

Thanks, got it!
I have another question, more in the context of the MANO layer.
How can one generate multiple perspectives of the same MANO-generated hand, in terms of beta and theta?

If I understand correctly, the same hand viewed from two different perspectives would have two different theta vectors. Is that correct? If so, how can I determine the theta vector of a hand viewed from another perspective?
Thank you

@hassony2
Owner

This is correct: the first 3 parameters of theta are the global axis-angle rotation vector, so this is the part that needs to be modified to generate the theta vector for a different perspective, as sketched below.
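
A minimal sketch of that modification, assuming a theta vector whose first 3 entries are the global axis-angle rotation and whose remaining articulation parameters are left untouched. The 48-dimensional layout and the viewpoint rotation `r_view` are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

theta = np.zeros(48)         # hypothetical theta: 3 global + 45 articulation params
theta[:3] = [0.1, 0.2, 0.3]  # current global axis-angle rotation

# Desired viewpoint change, e.g. rotate the hand 90 degrees around the y axis.
r_view = R.from_euler('y', 90, degrees=True)

# Compose the viewpoint rotation with the current global rotation and write
# the result back as an axis-angle vector; the articulation parameters and
# the beta (shape) vector stay the same.
r_global = R.from_rotvec(theta[:3])
theta_new = theta.copy()
theta_new[:3] = (r_view * r_global).as_rotvec()
```

Since the predicted joints are root-centered, changing only this global rotation rotates the hand about its root, which corresponds to viewing the same hand from a different perspective.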

@erezposner
Author

Got it, thank you!
