
How to convert the coordinates from Mixamo to normal 2D pose coordinates? #29

Closed · annopackage opened this issue Dec 26, 2020 · 15 comments

@annopackage

Hi, can you tell me the pipeline for getting 2D pose coordinates from Mixamo files? I ran your code (def get_joint3d_positions(joint_names, frame_idx)), but found some problems with bpy 2.9.

@ChrisWu1997 (Owner)

We used Blender 2.79/2.80, so maybe you encountered a version problem.

The pipeline for getting 2D pose coordinates is simply to 1) extract the global 3D coordinates from the character (the function you referenced), and 2) project the 3D coordinates into 2D by orthogonal projection (def trans_motion3d(motion3d, local3d=None, unit=128)).
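For concreteness, here is a minimal sketch of step 2, assuming the motion array has shape (J, 3, T) (J joints, T frames). The function name and the choice of which axis to drop are illustrative only; the repo's trans_motion3d also handles the view angle and normalization.

import numpy as np

def project_orthographic(motion3d, view_rot=None, scale=128.0):
    # motion3d: (J, 3, T) global joint positions; view_rot: optional (3, 3)
    # rotation into the camera frame; scale: pixels per world unit.
    if view_rot is not None:
        # rotate every joint in every frame into the camera coordinate system
        motion3d = np.einsum('ij,kjt->kit', view_rot, motion3d)
    # orthographic (orthogonal) projection: keep the two in-plane axes and
    # drop the depth axis; which axes are "in-plane" depends on the up axis
    motion2d = motion3d[:, [0, 2], :] * scale
    return motion2d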

@annopackage (Author)

Thanks for your quick reply. My question is why we need to extract the global 3D coordinates in the way the function I referenced does; I am confused by it. Could you give me more information?

@ChrisWu1997 (Owner)

I see. I guess your confusion is mainly about this line, right?

global_location = armature.matrix_world * posebones[name].matrix * Vector((0, 0, 0))

The rationale is that the pose bone structure in Blender does not store world-space coordinates directly: each bone's matrix (posebones[name].matrix) is expressed in the armature's own space, built up through its parent bones. So in order to extract the global 3D coordinates, we need to combine that matrix with the global transformation of the armature object itself (armature.matrix_world). See the answer here.

@annopackage (Author)

OK, but why do we need to multiply by Vector((0, 0, 0))?

@ChrisWu1997 (Owner)

The last term is the local displacement of each bone, which can be queried with posebones[name].location. Here each bone is rigidly connected to its parent, so this term is always (0, 0, 0).
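For example, this can be checked quickly in Blender's Python console (assuming the Mixamo armature is the active object):

import bpy

posebones = bpy.context.object.pose.bones  # assumes the armature is the active object
for name in list(posebones.keys())[:5]:
    # posebones[name].location is the bone's translation relative to its rest
    # pose; bones that carry no translation of their own print (0.0, 0.0, 0.0)
    print(name, tuple(posebones[name].location))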

@annopackage (Author)

Due to the version incompatibility between Blender 2.9 and 2.8/2.79, the code

armature.matrix_world * posebones[name].matrix * Vector((0, 0, 0))

causes the error "Element-wise multiplication: not supported between 'Matrix' and 'Vector' types". If I instead write

armature.matrix_world * posebones[name].matrix @ Vector((0, 0, 0))

then the output becomes zero, which is weird.

@ChrisWu1997 (Owner)

What about using armature.matrix_world @ posebones[name].matrix @ posebones[name].location? Does this work?
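Putting the pieces above together, a Blender 2.8+/2.9-compatible version of the extraction might look like the sketch below. The function name, the armature name, and the frame handling are assumptions for illustration rather than the repo's exact code.

import bpy

def get_joint3d_positions_28x(joint_names, frame_idx, armature_name='Armature'):
    # Hedged rewrite for Blender >= 2.80, where matrix/vector products use '@';
    # in Blender <= 2.79 the same composition is written with '*'.
    bpy.context.scene.frame_set(frame_idx)
    armature = bpy.data.objects[armature_name]  # name is an assumption; adjust to your scene
    posebones = armature.pose.bones
    positions = []
    for name in joint_names:
        pb = posebones[name]
        # armature.matrix_world lifts the bone's armature-space matrix into world
        # space; with pb.location == (0, 0, 0) this is just the bone head position
        global_location = armature.matrix_world @ pb.matrix @ pb.location
        # equivalently: (armature.matrix_world @ pb.matrix).to_translation()
        positions.append(list(global_location))
    return positions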

@annopackage (Author)

Yes, it works. I will check whether the output is correct later.

@annopackage (Author)

Hi, I tested the code, but found that some 2D coordinates are negative. By the way, could you explain the code below, and why we need to subtract the centers and add the velocities?

def trans_motion2d(motion2d):
    # subtract centers to local coordinates
    centers = motion2d[8, :, :]
    motion_proj = motion2d - centers

    # adding velocity
    velocity = np.c_[np.zeros((2, 1)), centers[:, 1:] - centers[:, :-1]].reshape(1, 2, -1)
    motion_proj = np.r_[motion_proj[:8], motion_proj[9:], velocity]

    return motion_proj

@ChrisWu1997 (Owner)

Subtracting the center point normalizes the data so that the poses in each frame become local. In other words, subtracting the center point gives the local motion representation. This is beneficial because the same poses performed at different global positions will now have the same representation (i.e., their arrays become identical). The negative 2D coordinates come from here.

However, subtracting the center point loses the information about the global position. So we additionally include the global velocity to retain the global information. The final representation becomes local motion + global velocity.
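Since the velocity channel is just the frame-to-frame difference of the center, the global trajectory can be recovered from it by a cumulative sum. A minimal sketch of that inverse step, assuming the input is exactly the (J, 2, T) array produced by trans_motion2d above (the helper name restore_global is mine, not the repo's):

import numpy as np

def restore_global(motion_proj, start=(0.0, 0.0)):
    # motion_proj: (J, 2, T) center-relative joints with the velocity channel
    # appended as the last row, as produced by trans_motion2d above;
    # start: the first-frame center position, if you stored it.
    local = motion_proj[:-1]    # (J-1, 2, T) center-relative joints
    velocity = motion_proj[-1]  # (2, T) per-frame center displacement
    # integrate the velocity to get back the center trajectory
    centers = np.cumsum(velocity, axis=-1) + np.asarray(start).reshape(2, 1)
    # add the trajectory back to every joint
    # (the removed center joint itself is simply `centers`)
    return local + centers[np.newaxis]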

@viggyr commented May 4, 2021

Hi. How do we rescale the pose coordinates from this negative range to the range [0, image_size]? I need to pass the values to the motion2video function in order to generate skeleton images. Thanks.

@ChrisWu1997 (Owner)

> Hi. How do we rescale the pose coordinates from this negative range to the range [0, image_size]? I need to pass the values to the motion2video function in order to generate skeleton images. Thanks.

The function trans_motion_inv should be able to convert the normalized motion back to the original scale.

@viggyr commented May 4, 2021

Just to clarify, am I supposed to pass the initial 3D points that I get from the FBX (which are roughly in the range -1 to 1) to the trans_motion_inv function? Because I never explicitly normalize the motion anywhere. I was trying to pass the 3D joints generated from the FBX to the trans_motion function, as suggested above in this thread.

@ChrisWu1997 (Owner)

> Just to clarify, am I supposed to pass the initial 3D points that I get from the FBX (which are roughly in the range -1 to 1) to the trans_motion_inv function?

No, I mean that you can use trans_motion_inv to restore the motion if trans_motion3d was applied before. So you can apply trans_motion3d and trans_motion_inv consecutively to the initial 3D points that you get from the FBX.

> I was trying to pass the 3D joints generated from the FBX to the trans_motion function, as suggested above in this thread.

I think another simple solution for you is to remove this line in the trans_motion3d function:

motion_proj = trans_motion2d(motion_proj)

because trans_motion2d does some normalization. Then trans_motion3d only does a 3D-to-2D projection, and you can apply it to the initial 3D points that you get from the FBX. But note that the result may not lie in the center of the image.
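If the goal is just to feed the projected joints to motion2video, another option is to rescale them into pixel range yourself. Here is a minimal sketch; the helper name to_image_coords, the margin, and the default image size are assumptions for illustration, not part of the repo.

import numpy as np

def to_image_coords(motion2d, img_size=512, margin=0.1):
    # motion2d: (J, 2, T) projected joints in arbitrary (possibly negative) units;
    # linearly map them into [margin*img_size, (1-margin)*img_size] pixels,
    # preserving the aspect ratio.
    lo = motion2d.min(axis=(0, 2), keepdims=True)
    hi = motion2d.max(axis=(0, 2), keepdims=True)
    scale = img_size * (1.0 - 2.0 * margin) / (hi - lo).max()
    return (motion2d - lo) * scale + img_size * margin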

@viggyr commented May 6, 2021

Gotcha!

That worked perfectly. Thank you very much.
