Inverting the new_joint_vecs? #4

Open

andrewnc opened this issue Nov 22, 2023 · 3 comments

@andrewnc

This is great work! I have a quick question. After processing the motions as described, I have three folders: joint, new_joints, and new_joint_vecs. When training a generative model here, as described in your paper, you would use the full 623-dimensional vectors in new_joint_vecs.

If you wanted to extract the rotations from new_joint_vecs and apply them to an FBX rig, how would you do this?

What I mean is: the joint order seems to have changed, there is no jaw joint, and directly slicing out the continuous 6D rotations and converting them to quaternions doesn't yield the desired result.

I'm curious if you have any insights here.
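For reference, the continuous 6D rotations themselves decode with the standard Gram-Schmidt construction from Zhou et al. (2019). The sketch below is a generic version of that decoding plus a SciPy quaternion conversion; it makes no assumption about where the 6D block sits inside new_joint_vecs or about the joint order, which is the part in question here, and the repo's own conversion utilities (if any) may use a slightly different but equivalent convention.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def cont6d_to_matrix(cont6d):
    """Decode a continuous 6D rotation (Zhou et al. 2019) into a 3x3 rotation matrix.

    cont6d: (..., 6) array holding the first two (unnormalised) columns of R.
    """
    a1, a2 = cont6d[..., 0:3], cont6d[..., 3:6]
    b1 = a1 / np.linalg.norm(a1, axis=-1, keepdims=True)
    # Gram-Schmidt: remove the b1 component from a2, then normalise.
    a2_proj = a2 - np.sum(b1 * a2, axis=-1, keepdims=True) * b1
    b2 = a2_proj / np.linalg.norm(a2_proj, axis=-1, keepdims=True)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)  # columns are b1, b2, b3

# Example: one joint's 6D rotation to a quaternion (SciPy returns x, y, z, w order).
six_d = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
quat = R.from_matrix(cont6d_to_matrix(six_d)).as_quat()
```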

@shunlinlu
Collaborator

Hi, @andrewnc

Based on the experiments in our appendix, we actually do not use the rotation part of the representation. We follow the HumanML3D format and visualize the motion directly from the joint positions. If you want to visualize the motion in software like Blender, you may try that approach. I have also visualized positions obtained from the rotations after FK in my script, and the result looks good.

Shunlin
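As a point of reference for the position route: assuming new_joint_vecs follows the HumanML3D feature layout with J = 52 body-plus-hand joints (no jaw or eye joints), the 623 dimensions break down as 1 + 2 + 1 + 51*3 + 51*6 + 52*3 + 4. A minimal slicing sketch under that assumption (the field names and offsets are taken from that layout, not confirmed against the repo):

```python
import numpy as np

J = 52  # body + hands, as assumed by the 623-dim layout (no jaw/eye joints)

def split_tomato_vec(feats):
    """Slice a (T, 623) feature sequence into its HumanML3D-style components.

    Assumed per-frame layout:
      1        root rotation velocity (about Y)
      2        root linear velocity (X, Z)
      1        root height (Y)
      (J-1)*3  root-relative joint positions ("ric")
      (J-1)*6  continuous 6D joint rotations
      J*3      local joint velocities
      4        foot-contact labels
    """
    idx = 0
    root_rot_vel = feats[:, idx:idx + 1];           idx += 1
    root_lin_vel = feats[:, idx:idx + 2];           idx += 2
    root_height  = feats[:, idx:idx + 1];           idx += 1
    ric          = feats[:, idx:idx + (J - 1) * 3]; idx += (J - 1) * 3
    rot6d        = feats[:, idx:idx + (J - 1) * 6]; idx += (J - 1) * 6
    local_vel    = feats[:, idx:idx + J * 3];       idx += J * 3
    foot_contact = feats[:, idx:idx + 4];           idx += 4
    assert idx == 623, idx
    return dict(root_rot_vel=root_rot_vel, root_lin_vel=root_lin_vel,
                root_height=root_height,
                ric=ric.reshape(len(feats), J - 1, 3),
                rot6d=rot6d.reshape(len(feats), J - 1, 6),
                local_vel=local_vel.reshape(len(feats), J, 3),
                foot_contact=foot_contact)
```

Note that the ric positions are root-relative; recovering global positions still requires integrating the root rotation and linear velocities frame by frame, which is what HumanML3D's recover_from_ric does, so adapting that function to 52 joints is likely safer than re-implementing the integration.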

@andrewnc
Author

Thank you for the response. To make sure I understand correctly: I would take the 623-dimensional data, extract the XYZ positions for each joint, and run something like https://github.com/IDEA-Research/HumanTOMATO/blob/main/src/tomato_represenation/common/skeleton.py#L103 to recover the rotations for each joint?
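For illustration, assuming the linked Skeleton class mirrors HumanML3D's common/skeleton.py (a constructor taking raw offsets and a kinematic chain, and an inverse_kinematics_np(joints, face_joint_idx) method returning per-joint quaternions), such a call might look like the sketch below. The offsets, chains, face-joint indices, and file path are placeholders to be replaced with the values defined in the repo's parameter files.

```python
import numpy as np
import torch
from common.skeleton import Skeleton  # src/tomato_represenation/common/skeleton.py

# Placeholders: take the real raw offsets, kinematic chain, and face-joint
# indices from the repo's parameter files, not from these dummy values.
n_raw_offsets = torch.zeros(52, 3)
kinematic_chain = [[0, 1, 4, 7, 10], [0, 2, 5, 8, 11]]  # placeholder chains
face_joint_idx = [2, 1, 17, 16]                          # placeholder (r_hip, l_hip, sdr_r, sdr_l)

positions = np.load("new_joints/000001.npy")             # placeholder path, (T, 52, 3) joint positions

skel = Skeleton(n_raw_offsets, kinematic_chain, "cpu")
quat_params = skel.inverse_kinematics_np(positions, face_joint_idx)  # assumed to return (T, 52, 4)
```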

@bob35buaa

> We actually do not use the rotation part of the representation, based on the experiments in our appendix. We follow the HumanML3D format and visualize the motion directly from the joint positions. If you want to visualize the motion in software like Blender, you may try that approach. I have also visualized positions obtained from the rotations after FK in my script, and the result looks good.

I want to convert the tomato representation to the SMPL-X format for visualization, as shown in Figure 4 and Figure 8 in your paper. Can you provide your scripts for this? Thanks!
