how to lift the 2D joint positions to 3D? #4
Comments
Hi ZJYCP, The 3D positions are created using the code provided at https://github.com/gopeith/SignLanguageProcessing under 3DposeEstimator. The "trg" file contains the skeleton data for each frame, with a space separating frames. Each frame consists of 150 joint values followed by a counter value, all separated by spaces. If your data contains 150 joint values per frame, please ensure that trg_size is set to 150 in the config file.
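Based on the format described above (150 space-separated joint values plus one trailing counter per frame), a minimal parser might look like this. The function name `load_trg` and the plain-text assumption are mine, not from the repository:

```python
def load_trg(path, trg_size=150):
    """Sketch of a "trg" file parser, assuming the layout described above:
    each frame is `trg_size` joint values followed by one counter value,
    with everything separated by whitespace."""
    with open(path) as f:
        values = [float(v) for v in f.read().split()]
    frame_len = trg_size + 1  # joint values + trailing counter
    if len(values) % frame_len != 0:
        raise ValueError("file length is not a whole number of frames")
    frames = [values[i:i + frame_len] for i in range(0, len(values), frame_len)]
    joints = [frame[:trg_size] for frame in frames]   # 150 values per frame
    counters = [frame[trg_size] for frame in frames]  # one counter per frame
    return joints, counters
```

If `trg_size` in the config does not match the actual number of joint values per frame, the length check above will fail, which is a quick way to sanity-check your data.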
thanks for your reply.
Did I miss something or do something wrong? Data processing has really taken me a lot of time. ( •́ω•̩̥̀ ) Waiting for your reply.
Hi, When I originally processed the data, I divided the joint outputs of the Inverse Kinematics by 3, in order to get them all below 1. Looking at your example, the hands seem correct but large, which is explained by the larger scale you are using. Apologies, this was an error on my part; I'll update the README file to explain this.
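Going by the explanation above, reproducing this preprocessing step is just a uniform division of every joint value by 3. A hedged sketch (the function name and frame-list representation are my assumptions):

```python
def rescale_joints(frames, factor=3.0):
    """Divide every joint value in every frame by `factor`.
    The maintainer used factor=3 so the Inverse Kinematics outputs
    fall below 1, matching the released "trg" data."""
    return [[value / factor for value in frame] for frame in frames]
```

Applying this to your own Inverse Kinematics outputs before training should bring them onto the same scale as the provided data.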
@ZJYCP, please share the pretrained model if you have it stored somewhere. I would really appreciate your help; this is urgent.
hi, I have extracted the 2D positions using OpenPose, but I have no idea how to lift them to 3D as you mentioned in the paper. Could you please provide the code or give me some tips? Also, I wonder how the "trg" file arranges the 150 position values per frame.
thanks :-)