How to generate the thetas and betas parameter? #5

Closed
xinsirBUPT2016 opened this issue Apr 12, 2021 · 18 comments

@xinsirBUPT2016

Hi, I have downloaded the Human3.6M dataset, but I couldn't find a way to produce the theta and beta parameters. The only clue we have comes from a SPIN issue, which says the sensor data and the Mosh method were used to generate the theta and beta parameters, but the Human3.6M sensor data and the Mosh code are not public. We are very eager to generate the pose parameters for both the training and testing sets. Could you please help?

@Jeff-sjtu
Owner

Hi, we provide the annotation files that contain the theta and beta parameters. These parameters are generated by Mosh. You can directly use our JSON files.
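
(For reference, a minimal sketch of how such a JSON annotation file could be read. The file name and the keys 'annotations', 'thetas', 'betas' are assumptions for illustration only and are not confirmed in this thread; check the structure of the provided JSON first.)

```python
import json
import numpy as np

# Sketch only: load a Human3.6M annotation JSON and read the SMPL parameters.
# The file name and the dictionary keys below are assumed, not confirmed.
with open("h36m_train_annotations.json") as f:  # hypothetical file name
    db = json.load(f)

for ann in db["annotations"]:
    theta = np.asarray(ann["thetas"], dtype=np.float32)  # (72,) SMPL pose, axis-angle
    beta = np.asarray(ann["betas"], dtype=np.float32)    # (10,) SMPL shape coefficients
    # theta/beta can then be fed to an SMPL layer to recover the body mesh
```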

@xinsirBUPT2016
Author

Sure, I have seen your JSON data; the theta and beta params are extracted every 5 frames of each video. In some cases, though, we need successive frames to run experiments and reproduce results on Human3.6M. Would you mind providing some details about the method and the sensor data? Is it possible to apply to the official Human3.6M team to get the sensor data?

@Jeff-sjtu
Owner

Do you have the moshed data? I generated the JSON annotations from the moshed results.

@xinsirBUPT2016
Author

What is the format of the moshed data? We have access to the Human3.6M dataset, and the pose annotations are OBJ files; is that the moshed data? Besides, how do you generate the annotations: with the Mosh method on the sensor data, or with the SMPL model?

@Jeff-sjtu
Owner

The moshed data was downloaded from the open-source code of a previous method a long time ago. I have uploaded the JSON files before downsampling to Baidu Drive. I hope it helps.

@xinsirBUPT2016
Author

Could you please send me the extraction code privately, in whatever way is convenient for you?

@Jeff-sjtu
Owner

Hi, the code is ey7w.

@xinsirBUPT2016
Author

Hi, is this the complete annotation data for Human3.6M? I have downloaded the files and loaded the JSON, but it is still discontinuous: only one frame is kept out of every 5. Anyway, thank you for your cooperation; it is indeed very difficult to find the complete pose data for Human3.6M, and there are many tricks involved in generating it that we don't have. You are a very enthusiastic person; thank you for your kind replies!

@Jeff-sjtu
Owner

Hi, I just checked the moshed data and the code. I find that the moshed data provides continuous annotations, but the results (e.g. the global orientation) may not be aligned with the camera. I have uploaded it to Baidu Drive (code k77z) together with the code used to downsample the results. I hope it can help you!
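
(As an aside, a minimal sketch of the kind of camera-alignment step mentioned above: rotating the SMPL global orientation into a camera frame. This is only an assumption about what "aligned" means here, not the author's actual script, and the variable names are hypothetical.)

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def align_global_orient(global_orient_aa, cam_R):
    """Rotate an SMPL global orientation (axis-angle, shape (3,)) into the
    camera frame by left-multiplying with the 3x3 camera rotation matrix."""
    body_R = R.from_rotvec(global_orient_aa).as_matrix()
    return R.from_matrix(cam_R @ body_R).as_rotvec()

# Hypothetical usage: poses[:, :3] holds the per-frame global orientation,
# and cam_R would come from the Human3.6M camera extrinsics.
poses = np.zeros((100, 72))   # placeholder moshed poses
cam_R = np.eye(3)             # placeholder camera rotation
poses[:, :3] = [align_global_orient(p, cam_R) for p in poses[:, :3]]
```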

@xinsirBUPT2016
Author

Thank you very much! It helps us a lot, and we can continue our research thanks to this. You can contact me if you ever look for a job or internship at Megvii, SenseTime, ByteDance, Baidu, and so on. My email is xinsir@bupt.cn

@Jeff-sjtu
Owner

Thanks for your sincerity!

@JinShiyin

Hi, I have downloaded the moshed data and the code. raw_p['poses'] contains pose params for cameras 0, 1, 2, 3, but I can't find the order of cameras 0, 1, 2, 3 stored in raw_p['poses'], or how many images belong to each camera. Can you please answer this? Thank you.

@Jeff-sjtu
Owner

Hi @JinShiyin, actually I do not use the raw pkl file; I only load the poses from the _aligned.pkl. It seems the poses in camx_aligned.pkl are downsampled at a rate of 5. If you only need the downsampled data, you can directly use the camx_aligned.pkl.
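
(A minimal sketch of loading poses from such a pkl file, for anyone trying this. The file name is hypothetical, and the 'poses' key is taken from the raw_p['poses'] usage above, so verify the actual keys in the aligned files.)

```python
import pickle
import numpy as np

# Sketch only: load a camx_aligned.pkl file and inspect the pose parameters.
# File name is hypothetical; the 'poses' key follows raw_p['poses'] above.
with open("S1_Directions_cam0_aligned.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")

poses = np.asarray(data["poses"])  # expected (num_frames, 72) SMPL pose params
print(poses.shape)                 # already downsampled at a rate of 5 per the reply above
```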

@Frank-Dz

Hi @JinShiyin, actually I do not use the raw pkl file; I only load the poses from the _aligned.pkl. It seems the poses in camx_aligned.pkl are downsampled at a rate of 5. If you only need the downsampled data, you can directly use the camx_aligned.pkl.

Hi~ I found that the downsampling code needs

import metadata as md 

but there is no such file. Could you share it here? Thank you very much!

Or, just to double-check: are you using the same code as in https://github.com/anibali/h36m-fetch?

@Jeff-sjtu
Owner

Hi @Frank-Dz,

Yes, we use the same code as h36m-fetch.

@NewCoderQ

Hi @JinShiyin, actually I do not use the raw pkl file; I only load the poses from the _aligned.pkl. It seems the poses in camx_aligned.pkl are downsampled at a rate of 5. If you only need the downsampled data, you can directly use the camx_aligned.pkl.

Hi @Jeff-sjtu, what is the difference in processing between *.pkl and *_camx_aligned.pkl? Or could you tell me where I can find the processing scripts? I want to obtain the continuous SMPL meshes.

@moezet01

Hi @Jeff-sjtu.
Congratulations to the authors for this great research work!

I am trying to understand how the 'thetas' and 'betas' parameters of Human3.6M were generated.
I cannot download the Mosh code from Baidu Drive. Is there any other way to get it?
And does the Mosh method come from that paper?
