
Python API: Incorrect AP score with prediction = ground truth #507

Open
lulud41 opened this issue Apr 12, 2021 · 2 comments
lulud41 commented Apr 12, 2021

Hi,
I'm testing the Python API. I wanted to check if I could get 1.0 AP when the prediction is strictly equal to the ground truth, wich seems obvious. However, I get 0.73 AP@0.5 and other weard stuff. I'm using a custom annotation file

Am I doing something wrong ? Thanks in advance,

Here is my code :


from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import json
import numpy as np

d = json.load(open("annotations.json"))

# construct a prediction array from the ground truth, one row per detection,
# in the loadRes format: (image_id, x, y, w, h, score, category_id)

pred = np.zeros((7, 7))
for idx, ann in enumerate(d["annotations"]):
    pred[idx, 1:5] = ann["bbox"]
    pred[idx, 5] = 1  # score; image_id and category_id stay 0 from np.zeros

print(f"prediction {pred}\n\n ground truth : {d}")

outputs:
prediction
[[ 0. 61. 123. 191. 453. 1. 0.]
[ 0. 165. 95. 187. 494. 1. 0.]
[ 0. 236. 104. 195. 493. 1. 0.]
[ 0. 452. 110. 169. 508. 1. 0.]
[ 0. 520. 95. 163. 381. 1. 0.]
[ 0. 546. 39. 202. 594. 1. 0.]
[ 0. 661. 132. 183. 510. 1. 0.]]

ground truth : {
'type': 'instances',
'annotations':
[{'id': 0, 'image_id': 0, 'category_id': 0, 'bbox': [61, 123, 191, 453], 'iscrowd': 0, 'area': 86523},
{'id': 1, 'image_id': 0, 'category_id': 0, 'bbox': [165, 95, 187, 494], 'iscrowd': 0, 'area': 92378},
{'id': 2, 'image_id': 0, 'category_id': 0, 'bbox': [236, 104, 195, 493], 'iscrowd': 0, 'area': 96135},
{'id': 3, 'image_id': 0, 'category_id': 0, 'bbox': [452, 110, 169, 508], 'iscrowd': 0, 'area': 85852},
{'id': 4, 'image_id': 0, 'category_id': 0, 'bbox': [520, 95, 163, 381], 'iscrowd': 0, 'area': 62103},
{'id': 5, 'image_id': 0, 'category_id': 0, 'bbox': [546, 39, 202, 594], 'iscrowd': 0, 'area': 119988},
{'id': 6, 'image_id': 0, 'category_id': 0, 'bbox': [661, 132, 183, 510], 'iscrowd': 0, 'area': 93330}],
'images': [{'id': 0, 'width': 1024, 'height': 683}],
'categories': [{'id': 0, 'name': 'person', 'supercategory': 'person'}]}


coco_gt = COCO("annotations.json")
coco_det = coco_gt.loadRes(pred)
coco_eval = COCOeval(coco_gt, coco_det, "bbox")

coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

outputs:
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
Converting ndarray to lists...
(7, 7)
0/7
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.05s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.730
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.730
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.857
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.857
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.857
"""

@soheilazangeneh

I faced the same issue!

@soheilazangeneh

I figured out the problem: when the annotation ids start from 0, the evaluation does not return the right metric values. Start them from 1 and it is fixed for me.
#332
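
A minimal sketch of that workaround, applied to the annotation file from the original post (shift_annotation_ids is an illustrative helper name, not part of pycocotools):

import json

from pycocotools.coco import COCO

def shift_annotation_ids(in_path, out_path="annotations_fixed.json"):
    # Rewrite the annotation file so annotation ids start at 1 instead of 0.
    # COCOeval stores the matched gt id per detection, and accumulate() treats
    # a stored id of 0 the same as "unmatched", so the annotation with id 0
    # silently turns one true positive into a false positive.
    with open(in_path) as f:
        d = json.load(f)
    for ann in d["annotations"]:
        ann["id"] += 1  # 0..6 -> 1..7
    with open(out_path, "w") as f:
        json.dump(d, f)
    return out_path

coco_gt = COCO(shift_annotation_ids("annotations.json"))

The prediction array needs no change, since loadRes assigns its own detection ids starting from 1; only the ground-truth annotation ids matter here. This also matches the 0.857 recall above: 6 of the 7 boxes count as matched, and 6/7 ≈ 0.857.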
