
A question about the evaluation results! #7

Open
FightStone opened this issue Dec 6, 2019 · 6 comments

@FightStone

This is my evaluation result:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.808
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.955
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.870
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.850
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.961
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.899
Average Precision (AP) @[ IoU=0.50:0.95 | area= easy | maxDets= 20 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.790
Average Precision (AP) @[ IoU=0.50:0.95 | area= hard | maxDets= 20 ] = -1.000

Why is there a -1 value? My prediction results are in the same format as the prediction JSON you provided. Looking forward to your response, thank you!

@Z1Wu

Z1Wu commented Jan 4, 2020

I have the same problem, even when I run the demo.py provided by the author. Whenever the area range is easy or hard, the result comes back as -1.000:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.660
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.842
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.715
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.727
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.895
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.775
Average Precision (AP) @[ IoU=0.50:0.95 | area= easy | maxDets= 20 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.411
Average Precision (AP) @[ IoU=0.50:0.95 | area= hard | maxDets= 20 ] = -1.000
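
For what it's worth, -1.000 is the fallback a pycocotools-style summarize() produces when the requested area label has no matching column in the accumulated eval array. Below is a minimal sketch of that mechanism, assuming this repo's evaluator mirrors pycocotools' _summarize; the toy precision array is illustrative, not real eval output.

import numpy as np

# Labels the eval params actually define.
areaRngLbl = ['all', 'medium', 'large']
# Toy stand-in for the accumulated precision array, shape [T x R x A x M].
precision = np.random.rand(10, 101, len(areaRngLbl), 1)

def summarize_area(area_label):
    # pycocotools looks up the column whose label matches the request.
    aind = [i for i, lbl in enumerate(areaRngLbl) if lbl == area_label]
    s = precision[:, :, aind, :]
    # Missing label ('easy', 'hard') -> empty selection -> fall back to -1.
    if len(s[s > -1]) == 0:
        return -1.0
    return float(np.mean(s[s > -1]))

for label in ['all', 'easy', 'medium', 'hard']:
    print(label, summarize_area(label))
# 'easy' and 'hard' print -1.0 because they never appear in areaRngLbl.

So the -1.000 rows are not a scoring result at all; the label lookup simply comes up empty.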

@albertter

Same problem.

@lyrgwlr

lyrgwlr commented Jun 10, 2020

Same problem. Have you guys solved this? @albertter @Z1Wu @FightStone

@Aaron20127

The evaluation code has a bug.

@XyK0907

XyK0907 commented Oct 14, 2020

I have the same problem. Is there any solution?

@nikhilchh

self.areaRngLbl = ['all', 'medium', 'large']

The params only define these three labels, but the code then tries to fetch evaluation results for the easy, medium, and hard area ranges. The easy and hard lookups find no matching label, so they are bound to fail and return -1. What I don't understand is how the author's demo.py produces a valid numeric value for those ranges instead of -1.
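
If the goal is to get real numbers for easy/medium/hard, a possible workaround (my assumption; not confirmed by the author) is to make params.areaRng and params.areaRngLbl agree before evaluating, so every label summarize() asks for resolves to a real column. A sketch, assuming the eval API mirrors pycocotools' COCOeval and that the evaluator actually requests these labels; the file names and area thresholds are placeholders:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder files: swap in this repo's ground truth and predictions.
coco_gt = COCO('annotations.json')
coco_dt = coco_gt.loadRes('predictions.json')
coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')

# Define one range per label so 'easy' and 'hard' resolve to a real
# column instead of -1. The pixel-area thresholds here are made up;
# use whatever this dataset actually means by easy/medium/hard.
coco_eval.params.areaRng = [
    [0 ** 2, 1e5 ** 2],   # all
    [96 ** 2, 1e5 ** 2],  # easy   (placeholder threshold)
    [32 ** 2, 96 ** 2],   # medium (placeholder threshold)
    [0 ** 2, 32 ** 2],    # hard   (placeholder threshold)
]
coco_eval.params.areaRngLbl = ['all', 'easy', 'medium', 'hard']

coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()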
