
Microsoft COCO Caption Evaluation

Evaluation codes for MS COCO caption generation.

No longer maintained: the SPICE metric has been incorporated into the official COCO caption evaluation code, so this repository is obsolete.


Requirements

  • Java 1.8.0
  • Python 2.7



Files

./
  • cocoEvalCapDemo.ipynb (demo script)


./annotations
  • captions_val2014.json (MS COCO 2014 caption validation set)
  • Visit the MS COCO download page for more details.
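For orientation, the annotation file pairs an "images" list with an "annotations" list, where each annotation carries an image_id and one reference caption (each image typically has several). A minimal self-contained sketch of that layout, with made-up ids and captions (the real file also carries fields such as "info" and "licenses"):

```python
import json

# Tiny stand-in mirroring the captions_val2014.json layout (field subset).
annotations = {
    "images": [
        {"id": 42, "file_name": "COCO_val2014_000000000042.jpg"},
    ],
    "annotations": [
        {"id": 1, "image_id": 42, "caption": "A cat sitting on a couch."},
        {"id": 2, "image_id": 42, "caption": "A sleepy cat on furniture."},
    ],
}

with open("tiny_captions.json", "w") as f:
    json.dump(annotations, f)

# References for one image are all annotations sharing its image_id.
loaded = json.load(open("tiny_captions.json"))
caps = [a["caption"] for a in loaded["annotations"] if a["image_id"] == 42]
print(caps)
```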


./results
  • captions_val2014_fakecap_results.json (an example of fake results for running the demo)
  • Visit the MS COCO format page for more details.
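A results file, by contrast, is a flat JSON list with exactly one generated caption per image, each entry holding only an image_id and a caption. A small sketch (ids and captions invented for illustration):

```python
import json

# Results format: one {"image_id", "caption"} entry per image;
# the ids must correspond to images in the annotation file.
fake_results = [
    {"image_id": 42, "caption": "a cat laying on a couch"},
    {"image_id": 73, "caption": "a dog running through a field"},
]

with open("fake_results.json", "w") as f:
    json.dump(fake_results, f)

results_loaded = json.load(open("fake_results.json"))
print(len(results_loaded))  # 2
```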

./pycocoevalcap: The folder where all evaluation codes are stored.

  • The file includes the COCOEvalCap class that can be used to evaluate results on COCO.
  • tokenizer: Python wrapper of Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation codes
  • meteor: Meteor evaluation codes
  • rouge: Rouge-L evaluation codes
  • cider: CIDEr evaluation codes
  • spice: SPICE evaluation codes
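All of these scorers consume the same input shape: reference captions from the annotation file and one candidate caption per image, both grouped into dicts keyed by image_id. A self-contained sketch of that grouping step, with hypothetical function and variable names (not the repo's API):

```python
from collections import defaultdict

def group_by_image(annotations, results):
    """Pair reference captions with one candidate per image_id, yielding
    the {image_id: [captions...]} dicts the scorers operate on."""
    gts = defaultdict(list)  # ground-truth references, possibly several per image
    for ann in annotations:
        gts[ann["image_id"]].append(ann["caption"])
    res = {r["image_id"]: [r["caption"]] for r in results}  # one candidate each
    # Keep only images that have both references and a candidate.
    common = sorted(set(gts) & set(res))
    return {i: gts[i] for i in common}, {i: res[i] for i in common}

anns = [
    {"image_id": 1, "caption": "a cat on a mat"},
    {"image_id": 1, "caption": "a small cat resting"},
    {"image_id": 2, "caption": "a red bus"},
]
cands = [{"image_id": 1, "caption": "a cat sitting on a mat"}]
gts, res = group_by_image(anns, cands)
print(gts)  # {1: ['a cat on a mat', 'a small cat resting']}
print(res)  # {1: ['a cat sitting on a mat']}
```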


Setup

  • You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: ./



Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)


Acknowledgement

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)