
evaluation results #16

Open
ouceduxzk opened this issue May 4, 2018 · 4 comments
@ouceduxzk commented May 4, 2018

First of all, thanks for sharing the work. I quickly ran an AP test with the following results. Do you know why it is so low?

python3 models/COCO.res50.256x192.CPN/mptest.py -d 0-1 -r 350
loading annotations into memory...
Done (t=2.09s)
creating index...
index created!
loading the precalcuated json files
Loading and preparing results...
4581
4581
DONE (t=2.98s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type keypoints
there are 40504 unique images
DONE (t=14.41s).
Accumulating evaluation results...
DONE (t=0.53s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.093
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.116
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.102
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.089
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.099
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.097
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.117
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.104
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.092
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.103
AP50
ap50 is 0.141489
ap is 0.099431

I already added the AP calculation and saved the JSON file.
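
For context, the keypoint AP here comes from the standard pycocotools evaluator; a minimal sketch follows, assuming a ground-truth file and a result file (both file names below are placeholders, not paths from this repo):

```python
# Minimal sketch of COCO keypoint evaluation with pycocotools.
# 'gt.json' and 'dets.json' are placeholder names, not repo files.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('gt.json')                # ground-truth keypoint annotations
coco_dt = coco_gt.loadRes('dets.json')   # predicted keypoints in COCO format

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.evaluate()    # "Running per image evaluation..."
coco_eval.accumulate()  # "Accumulating evaluation results..."
coco_eval.summarize()   # prints the AP/AR table shown above

# coco_eval.stats[0] is AP@[.50:.95], coco_eval.stats[1] is AP@.50
print('ap is %f' % coco_eval.stats[0])
print('ap50 is %f' % coco_eval.stats[1])
```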

@chenyilun95 (Owner)

Your result is similar to #11. I think something went wrong during testing. Your run covers 40504 unique images, but we evaluate on the COCO minival dataset, which contains 5000 images, and the provided detection boxes match that split. You are probably using the wrong human detections for your images.
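
The image count is the quickest tell; a sanity check along these lines (assuming the standard COCO annotation layout) shows which split an annotation file covers:

```python
# Count the unique images in a COCO annotation file (layout assumed standard).
import json

with open('person_keypoints_val2014.json') as f:
    ann = json.load(f)

print(len({img['id'] for img in ann['images']}))
# val2014 prints 40504; a proper minival file should print 5000.
```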

@ouceduxzk (Author)

Thanks for your quick reply. You are right that my validation JSON file is not the same; I am using person_keypoints_val2014.json. Could you provide those JSON files? They no longer exist on the official COCO dataset website.

@chenyilun95 (Owner)

The COCO 2014 minival JSON and its detection-result JSON are now provided.
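
If only val2014 is at hand, a hedged sketch for carving out minival given a list of its image ids ('minival_ids.txt' is an assumption here; such id lists circulated with Detectron-era repos, not with COCO itself):

```python
# Filter a val2014 annotation file down to the minival image ids.
# 'minival_ids.txt' (one image id per line) is a hypothetical input.
import json

with open('person_keypoints_val2014.json') as f:
    ann = json.load(f)
with open('minival_ids.txt') as f:
    keep = {int(line) for line in f if line.strip()}

ann['images'] = [im for im in ann['images'] if im['id'] in keep]
ann['annotations'] = [a for a in ann['annotations'] if a['image_id'] in keep]

with open('person_keypoints_minival2014.json', 'w') as f:
    json.dump(ann, f)
```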

@ouceduxzk (Author) commented May 7, 2018

Thanks, it looks normal now:

DONE (t=0.37s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.697
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.883
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.770
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.662
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.761
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.764
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.927
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.823
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.715
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.830
