MSCOCO caption evaluation codes for use with arbitrary image and text data
# Microsoft COCO Caption Evaluation

Evaluation code for MS COCO caption generation, adapted from https://github.com/tylin/coco-caption. Unlike the original COCO evaluation code, which requires inputs in the MSCOCO annotation format, this code accepts any set of images and sentences in a simple JSON format and produces metric outputs.

## Important Note

By default (with the idf parameter set to "corpus" mode), CIDEr computes IDF values from the reference sentences provided. Thus, the CIDEr score for a reference dataset containing only one image will be zero. When evaluating on one (or a few) images, set idf to "coco-val-df" instead, which uses IDF values from the MSCOCO Validation Dataset for reliable results.

## Requirements

  • Java 1.8.0
  • Python 2.7

To run the IPython notebook file, upgrade from IPython to Jupyter.

## Files

./

  • evalscripts.py (demo script)

./PyDataFormat

  • loadData.py (loads the JSON files for references and candidates)

./data

  • Reference and candidate input JSON files in the following format: a list of dicts, {"image_id": "$image_name", "caption": "$caption"},

    where $image_name and $caption are strings.

In the case of multiple sentences for the same image, each sentence must be specified as its own entry in the above format (with the image_id repeated).
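For example, the entries below show two captions for the same image, each as its own dict with the image_id repeated (the file name and captions are made up for illustration):

```python
import json

# Illustrative candidate entries: two captions for one image,
# each given as a separate dict that repeats the image_id.
candidates = [
    {"image_id": "COCO_val2014_000000000001.jpg",
     "caption": "a dog runs on the beach"},
    {"image_id": "COCO_val2014_000000000001.jpg",
     "caption": "a brown dog playing in the sand"},
]

# Serialize as the JSON file in ./data would look, then read it back.
payload = json.dumps(candidates, indent=2)
loaded = json.loads(payload)
```

Reference files use the same shape, so the same structure works for both inputs.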

./results

  • results.json (an example of fake results for running the demo)

./pycocoevalcap: The folder where all evaluation code is stored.

  • evals.py: contains the COCOEvalCap class, which can be used to evaluate results on COCO.
  • tokenizer: Python wrapper for the Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • meteor: METEOR evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr-D evaluation code

## Instructions

  1. Edit the params.json file to contain the paths to the reference and candidate JSON files, and to the result file where the scores are stored*.

  2. Set the "idf" value in params.json to "corpus" when evaluating on multiple images/instances. Set the "idf" value to "coco-val-df" when evaluating on a single image; in this case, IDF values from the MSCOCO dataset are used. If using some other corpus, compute the document frequencies in the same format as "coco-val-df" (see below), save them in the data/ folder as a pickle file, and then set "idf" to the name of the document-frequency file (without the '.p' extension).

  3. Sample JSON reference and candidate files are pascal50S.json and pascal_test.json.

  4. All metric scores are stored in the "scores" variable: scores['CIDEr'] holds the CIDEr scores, and so on.
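As a sketch of steps 1 and 2, a params.json might look like the following. The key names here are hypothetical (check the params.json shipped with the repo for the actual keys); the "idf" value of "corpus" corresponds to evaluating on multiple images:

```json
{
  "refName": "pascal50S.json",
  "candName": "pascal_test.json",
  "resultFile": "results.json",
  "idf": "corpus"
}
```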

## Document Frequency Format

The coco-val-df.p file in ./data contains a dictionary with the following key-value pairs:

  1. 'df': the frequencies of occurrence for n-grams. Each n-gram is represented by a tuple, which indexes into the dictionary; the value is that n-gram's document frequency.
  2. 'ref_len': the number of documents in the corpus. For image captioning, this will typically be the total number of reference images; for the MSCOCO VAL set, for instance, this would be 40,504.
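Following that format, a custom document-frequency pickle could be assembled along these lines. This is a simplified sketch (plain whitespace tokenization, one caption per document), not the repo's actual tokenization pipeline:

```python
import pickle
from collections import defaultdict

def build_df(documents, n_max=4):
    """For each n-gram (a tuple of words), count the number of
    documents in which it occurs at least once."""
    df = defaultdict(float)
    for doc in documents:
        words = doc.lower().split()
        seen = set()
        for n in range(1, n_max + 1):
            for i in range(len(words) - n + 1):
                seen.add(tuple(words[i:i + n]))
        for ngram in seen:
            df[ngram] += 1.0
    return dict(df)

# Toy corpus: two documents, so 'ref_len' is 2.
refs = ["a dog runs on the beach", "a cat sits on the mat"]
doc_freq = {"df": build_df(refs), "ref_len": float(len(refs))}

# Serialized bytes, as they would be written to a '.p' file in data/.
payload = pickle.dumps(doc_freq)
```

Here the n-gram ("on",) occurs in both documents, so its document frequency is 2.0, while ("the", "beach") occurs in only one.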

*Even when evaluating with independent candidates/references (e.g. when using "coco-val-df"), put multiple candidate and reference entries into the same JSON files. This is much faster than keeping separate candidate and reference files and calling the evaluation code separately on each pair.

## Developers

  • Ramakrishna Vedantam (Virginia Tech)

## Acknowledgement

  • MSCOCO Caption Evaluation Team (Xinlei Chen (CMU), Hao Fang (University of Washington), Tsung-Yi Lin (Cornell))