
eval.py does not process all 5k images #22

Closed
mvcaro opened this issue Apr 27, 2019 · 8 comments

mvcaro commented Apr 27, 2019

When I run:

python eval.py --trained_model=weights/yolact_base_54_800000.pth --dataset=coco2017_dataset

It only evaluates 4952 images. Any ideas on why it doesn't go through the 5000 images in ./data/coco/images/?

The image folder has 5000 images, and the annotations_val2017.json file has annotations for those images.

What do I need to change so that it evaluates the complete set of 5k images?

dbolya (Owner) commented Apr 30, 2019

COCOEval doesn't evaluate images without ground truth annotations:
https://github.com/cocodataset/cocoapi/blob/aca78bcd6b4345d25405a64fdba1120dfa5da1ab/PythonAPI/pycocotools/cocoeval.py#L247
so we don't either.
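
If you want to verify that yourself, a quick count with pycocotools along these lines (annotation path assumed) shows how many val images have no annotations:

from pycocotools.coco import COCO

# Hedged sketch: count val2017 images with no ground-truth annotations.
coco = COCO('./data/coco/annotations/instances_val2017.json')  # path assumed
empty = [i for i in coco.getImgIds() if len(coco.getAnnIds(imgIds=i)) == 0]
print(f'{len(empty)} of {len(coco.getImgIds())} images have no annotations')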

I'm pretty sure those 48 images don't have annotations: when I turned off our check for images with no annotations, the count went up to 5000, and submitting the results for that 5000-image set to COCOEval gave the same mAP as the 4952-image set.

To turn the check off yourself, grab my latest commit and add the 'has_gt': False parameter to coco2017_dataset toward the top of data/config.py (sketched below). Note that this prevents you from using my eval implementation, so you'll have to use COCOEval instead; run these commands (coco2017_dataset is the default, so you don't have to specify it):

python eval.py --trained_model=weights/yolact_base_54_800000.pth
python run_coco_eval.py

You should see 5000 images for eval.py, but run_coco_eval.py should give you the same 29.9 mask mAP.
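
For reference, the change would look roughly like this in data/config.py; the fields other than 'has_gt' come from the existing coco2017_dataset entry, so treat the exact values as illustrative rather than definitive:

# In data/config.py -- only the 'has_gt' line is new; the rest is the
# existing coco2017_dataset entry (values illustrative).
coco2017_dataset = dataset_base.copy({
    'name': 'COCO 2017',

    'train_info': './data/coco/annotations/instances_train2017.json',
    'valid_info': './data/coco/annotations/instances_val2017.json',

    'label_map': COCO_LABEL_MAP,

    # Skip the ground-truth check so all 5000 images get evaluated.
    'has_gt': False,
})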

dbolya closed this as completed Apr 30, 2019
mvcaro (Author) commented Apr 30, 2019

Hi @dbolya, thanks for your response. I did what you recommended and added 'has_gt': False to data/config.py on your latest commit, but I get this error:

python eval.py --trained_model=weights/yolact_base_54_800000.pth
Config not specified. Parsed yolact_base_config from the file name.

loading annotations into memory...
Done (t=0.83s)
creating index...
index created!
Loading model... Done.

Traceback (most recent call last):
  File "eval.py", line 937, in <module>
    evaluate(net, dataset)
  File "eval.py", line 790, in evaluate
    prep_metrics(ap_data, preds, img, gt, gt_masks, h, w, num_crowd, dataset.ids[image_idx], detections)
  File "eval.py", line 336, in prep_metrics
    gt_boxes = torch.Tensor(gt[:, :4])
TypeError: 'NoneType' object is not subscriptable

dbolya (Owner) commented Apr 30, 2019

Ahh, my bad, I wrote the wrong command. Here's the correct one:

python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json
python run_coco_eval.py

(With 'has_gt': False there's no ground truth for prep_metrics to compare against, hence the NoneType error; --output_coco_json skips that code path and just writes the detections to JSON for run_coco_eval.py to score.)

So sorry about that!

mvcaro (Author) commented Apr 30, 2019

Thanks a lot for your help! It works now.

stuafu commented May 17, 2019

I ran python eval.py --trained_model=weights/yolact_base_xxxxxx.pth on my dataset and then python run_coco_eval.py,

but while executing run_coco_eval.py I got this error:

  if g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]):
KeyError: 'area'

Moreover, g['_ignore'] does not exist. Does this mean the format of the box_json and mask_json files is not supported by cocoeval.py?

dbolya (Owner) commented May 17, 2019

'area' is supposed to be in the ground-truth annotations, so first check that your dataset's annotation file has it for every annotation. Then, if you're running this on your own dataset, the proper set of commands is:

python eval.py --trained_model=weights/yolact_base_xxxxxx.pth --output_coco_json --dataset=<your_dataset>
python run_coco_eval.py --gt_ann_file=<path/to/your/dataset/validation/annotation/json>

stuafu commented May 17, 2019

Thanks, I know the commands for a personal dataset. What I'm wondering is whether "ignore" or "_ignore" is supposed to be in the gt annotation. I've checked the instances_val2017.json file, but found neither "ignore" nor "_ignore".

dbolya (Owner) commented May 17, 2019

"ignore" is something that the cocoeval script adds to the dictionary later. It's to keep track of which annotations it's already assigned to a detection. You don't need to include it in your annotations file.

From the error message though it seems like one of your annotations doesn't have an 'area' field, so make sure they all have that.
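
If it helps, here's a hedged sketch (the file path is a placeholder) that checks for missing 'area' fields and fills them in from the bbox; note that width * height is only an approximation of COCO's segmentation-based area:

import json

path = 'path/to/your/annotations.json'  # placeholder
with open(path) as f:
    data = json.load(f)

# Find annotations lacking the 'area' field that cocoeval expects.
missing = [a for a in data['annotations'] if 'area' not in a]
print(len(missing), "annotations are missing 'area'")

# Rough fallback: derive area from the bbox ([x, y, width, height]).
for a in missing:
    x, y, w, h = a['bbox']
    a['area'] = w * h

with open(path.replace('.json', '_fixed.json'), 'w') as f:
    json.dump(data, f)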
