Cannot reproduce MAP of 1.0 #16

Open
Artcs1 opened this issue Jul 20, 2024 · 0 comments

Artcs1 commented Jul 20, 2024

Hello, authors. Great work!!

I wrote a script that converts the ground-truth annotations into a prediction file, hoping to obtain an mAP of 1.00, but that is not the case. Any hint about what could be happening?

MY CODE:

from d_cube import D3
import os
import cv2
import json
import numpy as np

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


IMG_ROOT = '/data/home/stufs1/jmurrugarral/datasets/D3/d3_images'
PKL_ANNO_PATH = '/data/home/stufs1/jmurrugarral/datasets/D3/d3_pkl'

d3 = D3(IMG_ROOT, PKL_ANNO_PATH)
all_img_ids = d3.get_img_ids()  # get the image ids in the dataset
all_img_info = d3.load_imgs(all_img_ids)  # load images by passing a list of some image ids

results = []

for img_id in all_img_ids:

    img_path = all_img_info[img_id]["file_name"]  # obtain the image path so you can load the image and run inference
    img_file = os.path.join(IMG_ROOT, img_path)

    group_ids = d3.get_group_ids(img_ids=[img_id])  # get the group ids by passing anno ids, image ids, etc.
    sent_ids = d3.get_sent_ids(group_ids=group_ids)  # get the sentence ids by passing image ids, group ids, etc.
    sent_list = d3.load_sents(sent_ids=sent_ids)
    ref_list = [sent['raw_sent'] for sent in sent_list]  # list[str]

    annIds = d3.get_anno_ids(img_ids=img_id)
    annotations = d3.load_annos(annIds)

    for annotation in annotations:
        for sent in annotation['sent_id']:
            detection = {}
            detection['category_id'] = sent
            detection['bbox'] = list(annotation['bbox'][0])  # note: only the first box of this annotation is kept
            detection['image_id'] = img_id
            detection['score'] = 1.0
            results.append(detection)

with open('d3_full_detections.json', 'w') as f:
    json.dump(results, f)


gt_path = '/data/home/stufs1/jmurrugarral/datasets/D3/d3_json/d3_full_annotations.json'
pred_path = 'd3_full_detections.json'

coco = COCO(gt_path)  # `gt_path` is the ground-truth JSON path (different JSON for FULL, PRES or ABS settings in our paper)
d3_model = coco.loadRes(pred_path)  # `pred_path` is the prediction JSON file 
cocoEval = COCOeval(coco, d3_model, "bbox")
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()

OUTPUT:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.941
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.941
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.941
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.961
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.958
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.936
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.737
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.999
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.991
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000

I read that this may be a known problem with the original COCO eval code: cocodataset/cocoapi#507
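
One thing I am not sure about in my own loop: I only keep annotation['bbox'][0], so if an annotation carries more than one box, the remaining boxes are never written to the prediction file, which by itself would cap recall (and therefore AP) for the affected categories. A minimal sketch of the inner loop that keeps every box (assuming annotation['bbox'] is indeed a list/array of boxes, as the [0] indexing suggests) would be:

# hypothetical variant of the inner loop above (it sits inside the same `for img_id` loop):
# emit one detection per box instead of only the first box of each annotation
for annotation in annotations:
    for sent in annotation['sent_id']:
        for box in annotation['bbox']:  # assumption: 'bbox' holds all boxes for this annotation
            detection = {}
            detection['category_id'] = sent
            detection['bbox'] = list(box)
            detection['image_id'] = img_id
            detection['score'] = 1.0
            results.append(detection)

I have not verified whether this closes the gap completely or whether the cocoapi issue linked above also plays a role.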
