
Inquiry about why an additional linear layer is added to handle the joint mismatch on FreiHand #10

Closed
viridityzhu opened this issue Mar 6, 2023 · 5 comments


@viridityzhu

Hi, thanks for this impressive work.

In your paper, you train your model on FreiHand but encounter a joint-definition mismatch, so you add an additional linear layer that maps from your joints to the dataset annotation.

we build upon I2L-MeshNet [Moon and Lee 2020] and train another parameter regression branch with 3D joint loss and photometric loss using the FreiHand [Zimmermann et al. 2019] dataset. Note that FreiHand offers ground truth annotation with 21 3D keypoints, while our model is defined with 25 anatomical joints. Following [Li et al. 2021], we add an additional linear layer that maps from our joint to dataset annotation to account for the mismatch.

You further explain that:

our model does not outperform [Moon and Lee 2020] due to the fundamental difference of joint definition

However, I find the joint definitions quite similar, as can be seen from the following pictures, showing your NIMBLE joints and FreiHand joints respectively.

[Images: NIMBLE joints (left) and FreiHand joints (right)]

I think the only difference is the four black points on the hand in your model. Therefore, I am wondering why not simply match the remaining 21 joints to FreiHand's annotated joints. Would that provide a more accurate mapping than a linear layer?

Thanks!

@reyuwei
Owner

reyuwei commented Mar 9, 2023

Hi!
Actually, our joint rig is quite different from MANO's. Here is a visual comparison: red is ours, green is MANO's. Not only do we have four more joints, but the positions of most joints also differ, particularly those at the base of the fingers. This difference is the main reason we add the linear layer.

(The figure is created with NIMBLE in rest pose, converted to MANO topology with our method, with MANO joints regressed using their joint regressor.)

[Image: joint comparison, red = NIMBLE, green = MANO]

@viridityzhu
Author

Thanks for your kind reply! That exactly answers my question.

@delaprada


Hello!
May I ask how to map the joints output by the NIMBLE layer to the FreiHand dataset annotation? Is the additional linear layer included in NIMBLELayer.py?
I am trying to use NIMBLE in my project. Thank you very much.

@reyuwei
Owner

reyuwei commented Mar 20, 2023

I think you could either use this function in NIMBLELayer.py to generate the corresponding MANO output, then compute the joint positions using MANO's joint regressor, which matches the FreiHand annotation.
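A minimal sketch of this first option (the function name and tensor shapes are assumptions for illustration; the NIMBLE-to-MANO conversion and the loading of MANO's regressor matrix are not shown): given MANO vertices, each joint is a fixed linear combination of mesh vertices.

```python
import torch

def regress_joints(mano_verts, j_regressor):
    """Compute joint positions from MANO mesh vertices.

    mano_verts:  (B, 778, 3) MANO vertices, e.g. converted from NIMBLE output
    j_regressor: (21, 778) joint regression matrix (16 MANO joints plus
                 5 fingertips, matching the FreiHand 21-keypoint convention)
    returns:     (B, 21, 3) joint positions
    """
    # each output joint is a weighted sum of the mesh vertices
    return torch.einsum('jv,bvc->bjc', j_regressor, mano_verts)
```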

Or you could define a linear layer with trainable parameters that takes NIMBLE joints as input and outputs MANO joints, and learn the parameters end-to-end (if you are working on a learning project).
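A minimal sketch of this second option (the class name and shapes are assumptions, not the repo's actual code): a bias-free linear layer that learns a fixed mixing from the 25 NIMBLE joints to the 21 annotated keypoints, applied identically to the x, y, and z channels.

```python
import torch
import torch.nn as nn

class JointMapper(nn.Module):
    """Learnable linear map from 25 NIMBLE joints to 21 FreiHand keypoints."""

    def __init__(self, n_in=25, n_out=21):
        super().__init__()
        # bias-free: each output joint is a linear combination of input joints
        self.mix = nn.Linear(n_in, n_out, bias=False)

    def forward(self, joints):
        # joints: (B, 25, 3) -> (B, 21, 3); the same learned combination
        # is applied to each coordinate channel
        return self.mix(joints.transpose(1, 2)).transpose(1, 2)
```

Trained jointly with the rest of the network under the 3D joint loss, such a layer absorbs the joint-definition mismatch instead of relying on a hand-picked correspondence.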


@delaprada


Thanks a lot! I will give it a try.
