
How to convert the outputs of mpii model to MPII evaluation format? #172

Closed
yw155 opened this issue Apr 30, 2018 · 6 comments

Comments


yw155 commented Apr 30, 2018

Hi @ZheC, I printed the predictions and the ground truth on the same image to check whether the results are correct. The saved results are shown below: the first image is the output of the cmu-mpii model; in the second image, blue points are the estimated joints and yellow points are the ground truth. The main differences are concentrated on two joints, pelvis and thorax. Earlier I ran the evaluation with the evalMPII.m script and could only reach 50.4, much lower than the 79.1 reported in your paper. So I would like to ask: do you convert the 15 joints of the MPII model to the 16-joint MPII annotation format, and if so, how? Thanks.
[Images: vis_595_dt (predictions), vis_595_gt (ground-truth overlay)]
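For reference, a 15-to-16 joint conversion like the one asked about above can be sketched in Python. The MODEL index layout below is a hypothetical assumption and must be checked against the actual network definition; the 16-joint order follows the standard MPII annotation, with pelvis and thorax synthesized as hip and shoulder midpoints:

```python
import numpy as np

# Hypothetical index layout for a 15-joint model output -- verify
# against the real model definition before using.
MODEL = {"head": 0, "neck": 1, "rsho": 2, "relb": 3, "rwri": 4,
         "lsho": 5, "lelb": 6, "lwri": 7, "rhip": 8, "rkne": 9,
         "rank": 10, "lhip": 11, "lkne": 12, "lank": 13}

# Standard MPII annotation order (16 joints).
MPII = ["rank", "rkne", "rhip", "lhip", "lkne", "lank", "pelvis",
        "thorax", "neck", "head", "rwri", "relb", "rsho", "lsho",
        "lelb", "lwri"]

def to_mpii(pred15):
    """Map a (15, 2) model prediction to the (16, 2) MPII joint order,
    synthesizing pelvis and thorax as hip/shoulder midpoints."""
    out = np.zeros((16, 2))
    for j, name in enumerate(MPII):
        if name == "pelvis":
            out[j] = (pred15[MODEL["rhip"]] + pred15[MODEL["lhip"]]) / 2
        elif name == "thorax":
            out[j] = (pred15[MODEL["rsho"]] + pred15[MODEL["lsho"]]) / 2
        else:
            out[j] = pred15[MODEL[name]]
    return out
```

Since pelvis and thorax are synthesized rather than predicted, they will not line up exactly with the annotation, which matches the discrepancy visible in the images.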


yw155 commented May 1, 2018

Hi @ZheC, I noticed that the joints 'thorax' and 'pelvis' are not evaluated on the MPII dataset, so predictions for those two joints are not needed. Furthermore, I checked the predictions against the ground truth and did not see anything obviously wrong, yet I still only get an accuracy of 50.4 with the default model and parameters. Are there other factors that could affect the accuracy? Thanks.


ZheC commented May 1, 2018

Possible reasons are: (a) you are evaluating on only 343 images, so you need to use the ground-truth data for those same 343 images; (b) the joint indices in the prediction and the ground truth do not match, for example comparing an ankle prediction against a wrist GT position.
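Reason (b) can be ruled out numerically with a cross-check of the joint ordering. This is an illustrative sketch, not part of the repository; preds and gts are assumed to be aligned (N, J, 2) arrays of the same people:

```python
import numpy as np

def order_check(preds, gts):
    """preds, gts: (N, J, 2) arrays of matched poses. For each
    predicted joint index, find the GT joint index with the smallest
    mean distance over all samples. If the orderings agree, the
    result is the identity permutation [0, 1, ..., J-1]."""
    N, J, _ = preds.shape
    d = np.zeros((J, J))
    for a in range(J):
        for b in range(J):
            d[a, b] = np.linalg.norm(preds[:, a] - gts[:, b], axis=1).mean()
    return d.argmin(axis=1)
```

Any fixed point that moves away from the diagonal (e.g. index 0 mapping to index 5) points to a left/right flip or a reordered joint list.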


yw155 commented May 1, 2018

Hi @ZheC, regarding (a), I am not sure I follow: should I use the whole ground truth in the evaluation or not? Regarding (b), I checked the 'evalMPII.m' script and did not find any index mismatch. Would it be possible for you to provide the complete evaluation code for the MPII validation set for reference? I would appreciate your help. Thanks.

@DavHoffmann

Hi @ZheC ,
I also have trouble reproducing the reported results with the code provided here. More explanation of how to obtain the mAP reported in the paper would be highly appreciated. @yw155, did you find out why the mAP was so low?
Thanks for your help!


yw155 commented May 18, 2018

Hi @DavHoffmann, I found the problem: the bounding boxes in each group were not matched against the groups defined in the MPII evaluation file 'evaluateAP.m'. So I added some code to 'evalMPII.m', such as pred(i).annorect = pred(i).annorect(rectidxs_multi_test{j}). You can check whether these parts of your file agree with 'evaluateAP.m'. Thanks.
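In Python terms, that fix amounts to restricting the predicted rectangles to the per-group index lists before scoring, so that predictions and ground truth cover the same set of people. This sketch mirrors the MATLAB one-liner above; the data structures are hypothetical stand-ins for pred(i).annorect and rectidxs_multi_test:

```python
def filter_to_groups(pred_annorects, rectidxs_multi_test):
    """pred_annorects: per-image lists of predicted person rectangles.
    rectidxs_multi_test: per-image lists of indices belonging to the
    multi-person test groups. Keep only the predicted rectangles whose
    indices appear in the group definition for that image."""
    return [[rects[i] for i in idxs]
            for rects, idxs in zip(pred_annorects, rectidxs_multi_test)]
```

Without this filtering, extra predicted rectangles outside the evaluated groups are counted against the score, which would explain the depressed mAP.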

yw155 closed this as completed May 21, 2018
@DavHoffmann

Thank you, that solved my problem.
