How to use my own dataset? #25
Hello!
I want to use other datasets to achieve some of my own tasks (not tasks related to hands and bodies). How do I use my own dataset? What preparations need to be done? Can you give specific guidance?
Thank you in advance for your reply.

Comments
What you should do is just add another data-loading code.

I preprocessed all datasets to MSCOCO format. You can refer to that site and the annotation format of the MSCOCO dataset.
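To make that concrete, here is a minimal sketch of what such a data-loading class could look like, loosely modeled on the repo's data/*/dataset.py files and the pycocotools COCO API they use. The class name, directory layout, and annotation file name below are placeholders for your own dataset, not part of the released code:

```python
import os.path as osp
import numpy as np
from pycocotools.coco import COCO

class MyDataset:
    """Loads an MSCOCO-format annotation file; names and paths are hypothetical."""
    def __init__(self, transform, data_split):
        self.transform = transform
        self.data_split = data_split  # 'train' or 'test'
        self.img_dir = osp.join('..', 'data', 'MyDataset', 'images')
        self.annot_path = osp.join('..', 'data', 'MyDataset', 'annotations',
                                   'my_dataset_' + data_split + '.json')
        self.datalist = self.load_data()

    def load_data(self):
        db = COCO(self.annot_path)
        datalist = []
        for aid in db.anns.keys():
            ann = db.anns[aid]
            img = db.loadImgs(ann['image_id'])[0]
            img_path = osp.join(self.img_dir, img['file_name'])
            bbox = np.array(ann['bbox'], dtype=np.float32)  # [x_min, y_min, width, height]
            # assumes keypoints are stored COCO-style: [x, y, visibility] * num_joints
            joint_img = np.array(ann['keypoints'], dtype=np.float32).reshape(-1, 3)
            datalist.append({'img_path': img_path, 'bbox': bbox, 'joint_img': joint_img})
        return datalist

    def __len__(self):
        return len(self.datalist)
```

You would still need a `__getitem__` that crops with the bbox and applies `self.transform`, and to register the new dataset name in the training config, mirroring how the existing datasets are wired in.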
Hello, I have one more question.
h36m is from here, and coco is from the SMPL joint regressor.
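For reference, applying an SMPL joint regressor is just a matrix product: a (num_joints × 6890) regressor times the 6890 SMPL mesh vertices. A tiny sketch, where the .npy file name and the dummy vertices are placeholders:

```python
import numpy as np

# hypothetical file holding a (17, 6890) COCO joint regressor
joint_regressor = np.load('J_regressor_coco.npy')

# SMPL mesh vertices, shape (6890, 3); dummy values here for illustration
mesh_vertices = np.random.randn(6890, 3).astype(np.float32)

joints = np.dot(joint_regressor, mesh_vertices)  # regressed joints, shape (17, 3)
```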
Hello, I still have some questions. Training produces 7 losses: loss['joint_fit'], loss['joint_orig'], loss['mesh_fit'], loss['mesh_joint_orig'], loss['mesh_joint_fit'], loss['mesh_normal'], and loss['mesh_edge']. I want to calculate the total loss. I think these should correspond to the loss terms in the paper, but I am not sure how.
Thank you very much for your reply!
L_pose^posenet == loss['joint_fit'] + loss['joint_orig']
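As for the total loss, a minimal sketch in the style of the repo's training loop is below; the variable names (model, inputs, targets, meta_info, optimizer) follow main/train.py, and it assumes the 7 terms are summed with equal weight, with any per-term scaling already applied inside the loss modules:

```python
# one training step: reduce each loss term to a scalar, then sum them
loss = model(inputs, targets, meta_info, 'train')  # dict with the 7 loss tensors
loss = {k: loss[k].mean() for k in loss}           # average each term over the batch
total_loss = sum(loss[k] for k in loss)            # overall training loss

optimizer.zero_grad()
total_loss.backward()
optimizer.step()
```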
The rotation loss is defined on the 6D representation: https://github.com/mks0601/I2L-MeshNet_RELEASE/blob/master/common/nets/module.py#L125
Q. If we work on our own dataset, how do we get the 6D representation? Can you provide the conversion code?
Q. Is the 6D representation the one from https://arxiv.org/abs/1812.07035?
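For context, in that paper (Zhou et al.) the 6D representation of a rotation is simply the first two columns of the rotation matrix, and the full matrix is recovered by Gram-Schmidt orthonormalization. A generic PyTorch sketch, not necessarily identical to the repo's implementation:

```python
import torch
import torch.nn.functional as F

def matrix_to_rot6d(rotmat):
    # (..., 3, 3) -> (..., 6): concatenate the first two columns
    return rotmat[..., :, :2].transpose(-1, -2).reshape(*rotmat.shape[:-2], 6)

def rot6d_to_matrix(rot6d):
    # (..., 6) -> (..., 3, 3): Gram-Schmidt on the two 3-vectors
    a1, a2 = rot6d[..., :3], rot6d[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack((b1, b2, b3), dim=-1)  # columns are b1, b2, b3
```

So for your own dataset you would convert ground-truth rotations (e.g., SMPL axis-angle parameters turned into rotation matrices) with matrix_to_rot6d, and map network predictions back with rot6d_to_matrix.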
I cannot understand your question. Did you use only 9 images for training? Why? How did you use only those images?
First of all, using only 9 images with the provided learning schedule (13 epochs) will definitely keep the model from converging.
@zqq-judy Do you remember how you got these files: bbox_root_xx_output.json and xx_train.json? I need them for my dataset.
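If it helps, the bbox_root files are typically per-image detection outputs (bounding box plus root-joint camera coordinates). Here is a hedged sketch of writing one for your own data; the key names ('image_id', 'bbox', 'root_cam') are assumptions based on RootNet-style output files, so check them against the dataset loader that reads the file:

```python
import json

outputs = []
# my_detections is your own data (hypothetical): image_id -> (bbox, root_cam)
for image_id, (bbox, root_cam) in my_detections.items():
    outputs.append({
        'image_id': image_id,                    # must match the ids in your xx_train.json
        'bbox': list(map(float, bbox)),          # [x_min, y_min, width, height] in pixels
        'root_cam': list(map(float, root_cam)),  # root joint in camera space (x, y, z)
    })

with open('bbox_root_mydataset_output.json', 'w') as f:
    json.dump(outputs, f)
```

The xx_train.json itself is the MSCOCO-format annotation file discussed above.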