
Format of the meta (target) files #2

Closed
marcobarnobi opened this issue Nov 12, 2019 · 1 comment

Comments

@marcobarnobi

Hello Yana,

First of all thank you for the great work!

I would like to use the ObMan dataset to train my own deep learning model for a different application, and to do so I am trying to understand the format of the labels (meta data).
I have unpickled a meta file (.pkl) and I am trying to understand the resulting dictionary.
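
For reference, a minimal sketch of how I inspect such a file (the path below is hypothetical):

```python
import pickle

# Hypothetical path to one of the ObMan meta files
meta_path = "obman/train/meta/00000000.pkl"

with open(meta_path, "rb") as f:
    meta = pickle.load(f)

# Print each entry's key and, where available, its shape (otherwise its type)
for key, value in meta.items():
    shape = getattr(value, "shape", None)
    print(key, shape if shape is not None else type(value))
```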

For example, coords3D is associated with 21 3D points. I think these are keypoints of the object bounding box, but I am not sure; which keypoints are they? I have the same doubt about other entries of the dictionary.

Another example: in the case of the entry hand_pose, what exactly do the 45 numbers associated with this entry correspond to?

Is this information available somewhere?
If not, could you be so kind as to explain the format of each target (entry) in the meta file?

Let me know! :)

@hassony2
Owner

Hi !

Thank you for your interest in our work :)
I will try to write more detailed documentation of the entries soon !
In the meantime, you can get some information about the meta file by looking at how it is created during rendering.
You can take a look here and here to see how this information is generated.

Some quick pointers:
grasp_pose contains the PCA MANO components that encode the hand pose.
hand_pose should contain the global axis-angle rotation in the first 3 values and the axis-angle values for the joints in the remaining values, but I will have to check again to be 100% sure.
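
A minimal sketch of how that layout could be split, assuming it holds (as said above, it still needs double-checking):

```python
import numpy as np

# Assuming `meta` is the unpickled dict from the sketch above,
# and hand_pose is its 45-value entry
hand_pose = np.asarray(meta["hand_pose"])  # shape: (45,)

# Hypothetical split following the layout described above:
# first 3 values = global axis-angle rotation,
# remaining values = one axis-angle triplet per articulated joint
global_rot = hand_pose[:3]
joint_rots = hand_pose[3:].reshape(-1, 3)

print("global rotation (axis-angle):", global_rot)
print("number of articulated joints:", joint_rots.shape[0])
```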

Note, however, that since a global rigid transform is applied to account for the person's global rotation when the hand is "attached" to the body, the first 3 axis-angle values do not match the observed global rotation of the hand (the hand is first rotated using the 3 axis-angle values, and then the rigid transform is additionally applied).
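
To illustrate the order of application, a small sketch (the body rotation here is a hypothetical example; only the composition order follows the description above):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Global axis-angle rotation stored in the first 3 values of hand_pose
R_hand = R.from_rotvec(np.array([0.1, -0.3, 0.2]))  # example values

# Hypothetical rigid transform attaching the hand to the body
R_body = R.from_euler("z", 45, degrees=True)

# The stored rotation is applied first, then the body's rigid transform
# on top, so the stored axis-angle alone does not match the global hand
# rotation observed in the image.
R_observed = R_body * R_hand
print(R_observed.as_rotvec())
```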

Best !
