Relation between 3D joint locations and camera space. #5
Comments
Hi @erezposner, the MANO 3D joints and vertices are predicted aligned with the camera view, but root-centered.
I hope this answers your question! Best, Yana
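The root-centering described above can be sketched in a few lines. This is a minimal illustration, not code from the repository: `joints` is assumed to be an (N, 3) array of predicted 3D joint locations, with index 0 taken as the root (wrist):

```python
import numpy as np

def root_center(joints, root_idx=0):
    """Express 3D joints relative to the root joint (e.g. the wrist).

    joints: (N, 3) array of 3D joint locations.
    The result keeps the camera-view orientation but loses the
    absolute position of the hand in camera space.
    """
    return joints - joints[root_idx]

def to_camera_space(root_centered_joints, root_translation):
    # Recover absolute camera-space coordinates, given the root's
    # translation from some external source (the network output
    # alone does not provide it).
    return root_centered_joints + root_translation
```

So a root-centered prediction determines the hand's shape and orientation relative to the camera, but placing it in metric camera space requires the root translation separately.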
Are the scaling and translation estimated within the net, or via a closed-form solution?
Thanks, got it! If I understand correctly, for the same hand viewed from two different perspectives I would have two different theta vectors. Is that correct? If so, how can I determine the theta vector of a hand viewed from another perspective?
That is correct: the first 3 parameters of theta are the global axis-angle rotation vector, so this is the part that needs to be modified to generate the theta vector for a different perspective.
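A minimal sketch of that re-targeting, under the stated convention that `theta[:3]` is the global axis-angle rotation: compose the relative camera rotation with the current global rotation and write the result back. `R_rel` (the rotation taking old-camera coordinates to new-camera coordinates) is an assumption about your setup, not something the thread specifies:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def retarget_global_rotation(theta, R_rel):
    """Re-express MANO's global orientation in a new camera frame.

    theta: (48,) MANO pose vector; theta[:3] is the global axis-angle
           rotation, theta[3:] the articulated finger pose.
    R_rel: (3, 3) rotation from the old camera frame to the new one.
    Only theta[:3] changes; the hand articulation is view-independent.
    """
    R_global = Rotation.from_rotvec(theta[:3]).as_matrix()
    theta_new = theta.copy()
    theta_new[:3] = Rotation.from_matrix(R_rel @ R_global).as_rotvec()
    return theta_new
```

Note that after rotating, the root-centered joints also move rigidly by `R_rel`; only the global part of theta absorbs the viewpoint change.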
Got it, thank you!
Hi, I would like to understand the relation between the MANO 3D joint and vertex locations and the camera space.
Let's assume that I capture an RGB image with a calibrated camera and use "Learning joint reconstruction of hands and manipulated objects" to estimate the MANO 3D joints. Are the 3D joints in normalized camera space?
Would the MANO estimate be oriented towards the camera?
Thank you