VMetrics

This repo provides the evaluation code used in our ICCV 2019 paper A Delay Metric for Video Object Detection: What Average Precision Fails to Tell, including:

  • Mean Average Precision (mAP)
  • Average Delay (AD)
  • A redesigned NAB metric for the video object detection problem.
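The core idea behind AD is how many frames elapse between an object's first appearance and the first frame in which the detector finds it. Below is a minimal, illustrative sketch of that idea in Python; it is not the vmetrics implementation (which also has to match detections to ground truth and handle confidence thresholds), and the function name and inputs are hypothetical.

# Toy sketch of the detection-delay idea behind AD, NOT the vmetrics code.
# Assumes object instances are already matched between ground truth and
# detections; the real metric involves matching and thresholding details
# that this sketch omits.
def average_delay(first_appearance, first_detection):
    # first_appearance: {instance_id: frame where the object first appears}
    # first_detection:  {instance_id: frame where it is first detected}
    delays = []
    for inst_id, t_appear in first_appearance.items():
        t_detect = first_detection.get(inst_id)  # None if never detected
        if t_detect is not None:
            delays.append(t_detect - t_appear)
    return sum(delays) / len(delays) if delays else float("inf")

# Example: car_3 is detected 4 frames late, ped_1 immediately -> AD = 2.0
print(average_delay({"car_3": 10, "ped_1": 25},
                    {"car_3": 14, "ped_1": 25}))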

Prepare the data

Download the groundtruth annotations and the sample detector outputs by running the following command:

$ bash prep_data.sh

The ground-truth annotations of VIDT are stored in KITTI format because of its simplicity and I/O efficiency.
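If you need to read these annotations yourself, a minimal parser might look like the sketch below. It assumes the standard KITTI tracking label layout (frame, track id, type, truncation, occlusion, alpha, then the 2D box as left/top/right/bottom, followed by 3D fields); that layout is an assumption on our part, so inspect the downloaded VIDT files for the exact columns they contain.

# Sketch of a reader for KITTI-tracking-style label files.
# ASSUMPTION: the standard KITTI tracking column order
#   frame track_id type truncated occluded alpha left top right bottom ...
# Check the downloaded VIDT annotations for their actual layout.
def load_kitti_labels(path):
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            objects.append({
                "frame": int(fields[0]),
                "track_id": int(fields[1]),
                "type": fields[2],
                "bbox": tuple(map(float, fields[6:10])),  # left, top, right, bottom
            })
    return objects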

We also provide the outputs of several methods (FGFA, for example), along with the GitHub repos that generated those outputs.

Run evaluation

All evaluation scripts are under the ./experiments folder. For instance, to measure the mAP and AD of FGFA, run:

$ python experiments/eval_map_ad.py examples/rfcn_fgfa_7 data/ILSVRC2015_KITTI_FORMAT

Evaluate your own detector

For every video sequence, output a file named <sequence_name>.txt. Each line of the file should describe a single object in the format <frame_id> <class_id> <confidence> <xmin> <ymin> <xmax> <ymax>.
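For example, a minimal writer for this format could look like the following sketch (the helper name and the sample sequence and detections are hypothetical):

# Write one <sequence_name>.txt per video sequence, one detection per line:
# <frame_id> <class_id> <confidence> <xmin> <ymin> <xmax> <ymax>
def write_sequence_results(sequence_name, detections, out_dir="."):
    # detections: iterable of (frame_id, class_id, confidence,
    #             xmin, ymin, xmax, ymax) tuples
    with open(f"{out_dir}/{sequence_name}.txt", "w") as f:
        for frame_id, class_id, conf, xmin, ymin, xmax, ymax in detections:
            f.write(f"{frame_id} {class_id} {conf} {xmin} {ymin} {xmax} {ymax}\n")

# Hypothetical example: one object tracked across two frames of a sequence
write_sequence_results("my_sequence", [
    (0, 1, 0.92, 10, 20, 110, 220),
    (1, 1, 0.88, 12, 21, 112, 221),
])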

Acknowledgement

This pure-Python mAP evaluation code is refactored from Cartucho/mAP. It has been tested against the original MATLAB version.
