
how to lift the 2D joint positions to 3D? #4

Closed
ZJYCP opened this issue Dec 28, 2020 · 4 comments

Comments

@ZJYCP

ZJYCP commented Dec 28, 2020

Hi, I have extracted the 2D positions using OpenPose, but I have no idea how to lift them to 3D as you mentioned in the paper. Could you please provide the code or give me some tips? Besides, I wonder how the "trg" file arranges the 150 position values.
thanks :-)

@BenSaunders27
Owner

Hi ZJYCP,

The 3D positions are created using the code provided at https://github.com/gopeith/SignLanguageProcessing under 3DposeEstimator.

The "trg" file contains the skeleton data for each frame, with frames separated by spaces. Each frame consists of 150 joint values followed by a counter value, all space-separated. If your data contains 150 joint values per frame, please ensure that trg_size is set to 150 in the config file.
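Based on that description, a minimal sketch of how such a "trg" line could be parsed (the function name here is hypothetical, and the layout assumed is exactly the whitespace-separated one described above):

```python
# Hypothetical helper: split one "trg" line into per-frame joint arrays.
# Assumes each frame is 150 joint values followed by one counter value,
# all whitespace-separated, with frames simply concatenated in order.
TRG_SIZE = 150

def parse_trg_line(line, trg_size=TRG_SIZE):
    values = [float(v) for v in line.split()]
    frame_len = trg_size + 1  # 150 joints + trailing counter
    if len(values) % frame_len != 0:
        raise ValueError("line length is not a whole number of frames")
    frames = []
    for i in range(0, len(values), frame_len):
        joints = values[i:i + trg_size]     # the 150 joint values
        counter = values[i + trg_size]      # the per-frame counter
        frames.append((joints, counter))
    return frames
```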

@ZJYCP
Author

ZJYCP commented Jan 20, 2021

Thanks for your reply.
I have used '3DposeEstimator' to create the 3D positions, but the results are a little different from what I expected, as shown below.

[screenshot: printed 3D joint values]
Many of the numbers are larger than 1, which is quite different from the tmp data you provide. As a result, the generated video also makes no sense.

[screenshot: rendered skeleton video]
Here is what I did:

  1. I used the OpenPose Python API to estimate the keypoints. I did not set the 'keypoint_scale' parameter in OpenPose.
  2. I then followed the pipeline demo in https://github.com/gopeith/SignLanguageProcessing : first converting the OpenPose data to h5, then running pipeline_demo_02_filtr.py, which is the same as the demo.py file in 3DposeEstimator. This gave me the results shown above.

Did I miss something or do something wrong? Data processing has really taken me a lot of time, ( •́ω•̩̥̀ ) waiting for your reply.
thanks :-)

@BenSaunders27
Owner

Hi,

When I originally processed the data, I divided the joint outputs of the Inverse Kinematics by 3 in order to get them all below 1. Looking at your example, the hands appear correct but large, which is explained by the larger scale you are using.

Apologies, this was an error on my part, I'll update the README file to explain this.
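A minimal sketch of that post-processing step, assuming the joint values come out of the 3DposeEstimator as a flat list of floats per frame (the function name is hypothetical):

```python
# Hypothetical post-processing: divide every joint value by 3 so the
# skeleton coordinates fall below 1, matching the scale of the provided data.
def rescale_frames(frames, factor=3.0):
    return [[v / factor for v in frame] for frame in frames]
```

For example, a frame containing the values 3.0 and 1.5 would become 1.0 and 0.5 after rescaling.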

@ZJYCP ZJYCP closed this as completed Mar 2, 2021
@hacker009-sudo

@ZJYCP, please share the pretrained model if you have stored it somewhere. I would really appreciate your help; it is urgent.
