How to generate the thetas and betas parameters? #5
Comments
Hi, we provide the annotation files that contain the
Sure, I have seen your JSON data. The theta and beta params are extracted every 5 frames in each video, but in some cases we need successive frames to run experiments and recover results on Human3.6M. Would you mind providing some details about the method and the sensor data? Is it possible to apply to the official Human3.6M team to get the sensor data?
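(Editorial aside: a minimal sketch of how one might load the released JSON annotations and check the 5-frame sampling. The file name and key names `frame_idx`, `thetas`, `betas` are assumptions for illustration only; the real schema may differ.)

```python
import json

# Hypothetical file name and keys; adjust to the actual annotation schema.
with open('h36m_train_annotations.json') as f:
    anns = json.load(f)

# Collect the stored frame indices and look at the stride between them.
frame_ids = sorted(int(a['frame_idx']) for a in anns)
strides = {b - a for a, b in zip(frame_ids, frame_ids[1:])}
print('frame strides found:', strides)  # e.g. {5} if every 5th frame is kept

first = anns[0]
print('theta length:', len(first['thetas']))  # 72 if stored flat (24 joints x 3 axis-angle)
print('beta length:', len(first['betas']))    # 10 shape coefficients
```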
Do you have the moshed data? I generated the JSON annotations from the moshed results.
What is the format of the moshed data? We have access to the Human3.6M dataset, and the pose annotations are .obj files; is that the moshed data? Also, how do you generate the annotations: with the MoSh method on the sensor data, or with the SMPL model?
The moshed data was downloaded from the open-source code of a previous method a long time ago. I have uploaded the JSON files before downsampling to Baidu Drive. I hope it helps.
Could you please send me the extraction code privately, in whatever way is convenient for you?
Hi, the code is
Hi, is this the complete annotation data for Human3.6M? I have downloaded the files and loaded the JSON, but it is still discontinuous: only one frame out of every 5 is kept. Anyway, thank you for your cooperation; it is indeed very difficult to find the full pose data for Human3.6M. There are a lot of tricks involved in generating the data, and they are not in our hands. You are a very enthusiastic person; thank you for your kind replies!
Hi, I just checked the moshed data and the code. I find that the moshed data provides continuous annotations, but the results (e.g. the global orientation) may not be aligned with the camera. I have uploaded it to Baidu Drive (code
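(Editorial aside: since the continuous MoSh results may not be aligned with the camera, one common way to bring the global orientation into a camera frame is to left-multiply the first three axis-angle values of theta by the camera's extrinsic rotation. This is only a sketch under that assumption; `R_cam` would come from the Human3.6M camera calibration, which is not included here, and the body translation would still need separate handling.)

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def align_global_orient(theta, R_cam):
    """Rotate the SMPL global orientation (theta[:3], axis-angle) into the
    camera frame by left-multiplying with the extrinsic rotation R_cam
    (3x3, world -> camera)."""
    theta = np.asarray(theta, dtype=np.float64).copy()
    R_global = R.from_rotvec(theta[:3]).as_matrix()
    theta[:3] = R.from_matrix(R_cam @ R_global).as_rotvec()
    return theta

# Usage with a dummy identity rotation; the real R_cam comes from the
# Human3.6M camera calibration files.
theta = np.zeros(72)
theta[:3] = [0.1, -0.2, 0.3]
aligned = align_global_orient(theta, np.eye(3))
```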
Thank you very much! It helps us a lot, and we can continue our research thanks to this. Feel free to contact me if you are ever looking for a job or internship at Megvii, SenseTime, ByteDance, Baidu, and so on. My email is xinsir@bupt.cn
Thanks for your sincerity! |
Hi, I have downloaded the moshed data and the code. raw_p['poses'] contains the pose params for cameras 0, 1, 2, and 3, but I can't tell how the cameras are ordered inside raw_p['poses'], or how many images belong to each camera. Could you please clarify? Thank you.
Hi @JinShiyin, actually I do not use the raw pkl file and only load the poses from the
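(Editorial aside: for questions like the camera ordering above, inspecting the raw pickle directly is usually the quickest way to find out. A minimal sketch, assuming the pkl deserializes to a dict; the file name is a placeholder, and the actual keys and shapes have to be checked against the real file.)

```python
import pickle
import numpy as np

# Placeholder file name; replace with the actual MoSh pkl from the drive link.
with open('S1_Directions.pkl', 'rb') as f:
    raw_p = pickle.load(f, encoding='latin1')  # latin1 helps with py2-pickled files

print('top-level keys:', list(raw_p.keys()))

poses = np.asarray(raw_p['poses'])
betas = np.asarray(raw_p.get('betas', []))
print('poses shape:', poses.shape)  # e.g. (num_frames, 72); whether frames from
                                    # cameras 0-3 are concatenated must be checked
print('betas shape:', betas.shape)
```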
Hi~ I found that the downsampling code needs a certain file, but that file is not included.
Or, just to double-check: are you using the same code as in https://github.com/anibali/h36m-fetch? |
Hi @Frank-Dz, yes, we use the same code as h36m-fetch. |
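(Editorial aside: as for the downsampling itself, keeping one annotation out of every 5 frames is a one-liner once the continuous annotations are loaded. A minimal sketch, assuming the annotations are ordered by frame index.)

```python
def downsample(annotations, stride=5):
    """Keep every `stride`-th entry; assumes `annotations` is ordered by frame."""
    return annotations[::stride]

# Example: 100 continuous frames -> 20 sampled frames (0, 5, 10, ...)
full = list(range(100))
print(len(downsample(full)), downsample(full)[:4])
```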
Hi @Jeff-sjtu, what is the difference in processing between the *.pkl and the *_camx_aligned.pkl files? Or could you tell me where I can find the processing scripts? I want to get the continuous SMPL mesh. |
Hi @Jeff-sjtu, I am trying to understand how the 'thetas' and 'betas' parameters of Human3.6M were generated. |
Hi, I have downloaded the Human3.6M dataset, but I couldn't find a way to produce the theta and beta parameters. The only clue we have comes from a SPIN issue, which says the sensor data and the MoSh method were used to generate the theta and beta parameters, but the Human3.6M sensor data and the MoSh code are not public. We are very eager to generate the pose parameters for both the training and test sets. Could you please help?
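(Editorial aside: for background, theta is the 72-dimensional SMPL pose, 24 joints times 3 axis-angle values with the first 3 being the global orientation, and beta is the 10-dimensional shape vector. A minimal sketch of turning such parameters into a mesh with the smplx package follows; the model path is a placeholder, and the SMPL model files have to be downloaded separately from the SMPL website.)

```python
import torch
import smplx

# Placeholder path; the SMPL model files must be obtained separately.
model = smplx.create('models/', model_type='smpl', gender='neutral')

theta = torch.zeros(1, 72)  # 24 joints x 3 axis-angle params
beta = torch.zeros(1, 10)   # shape coefficients

output = model(
    betas=beta,
    global_orient=theta[:, :3],  # first 3 values: global orientation
    body_pose=theta[:, 3:],      # remaining 69 values: 23 body joints
)
vertices = output.vertices.detach().numpy()[0]  # (6890, 3) SMPL mesh vertices
print(vertices.shape)
```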