This repository is a fork of the pycocoevalcap repository. The scripts added on top of it are metrics_compute.py and visualize_captions.ipynb. To use them, add a 'res_files' folder containing all the JSON result files, organised into one subfolder per model.
For instance, if you trained a captioning model with 1 cross-attention layer for 3 epochs, create a 'res_files/1ca_ep3' folder and place the JSON file inside it.
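The layout above can be sketched in Python. The folder name '1ca_ep3' comes from the example; the file name captions_results.json and the exact JSON schema (a list of image_id/caption pairs, the usual COCO results format) are assumptions:

```python
import json
import os

# Illustrative layout (folder name from the example above; the file
# name and entries below are made up):
# res_files/
#   1ca_ep3/
#     captions_results.json
base = os.path.join("res_files", "1ca_ep3")
os.makedirs(base, exist_ok=True)

# COCO-style caption results: a list of {"image_id", "caption"} entries.
results = [
    {"image_id": 42, "caption": "a cat sitting on a sofa"},
    {"image_id": 73, "caption": "two dogs playing in the grass"},
]
res_path = os.path.join(base, "captions_results.json")
with open(res_path, "w") as f:
    json.dump(results, f)
```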
Evaluation code for MS COCO caption generation.
This repository provides Python 3 support for the caption evaluation metrics used for the MS COCO dataset.
The code is derived from the original repository that supports Python 2.7: https://github.com/tylin/coco-caption.
Caption evaluation depends on the COCO API that natively supports Python 3.
- Java 1.8.0
- Python 3
Run the following script: metrics_compute.py
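How metrics_compute.py discovers the result files is not shown here; a minimal sketch of that discovery step, under the folder layout described above, might look as follows (collect_result_files is a hypothetical helper, and the actual scoring with pycocoevalcap is omitted because it needs the reference annotations and Java):

```python
import json
import os
import tempfile

def collect_result_files(root):
    """Map each model folder under root (e.g. '1ca_ep3') to its JSON files."""
    runs = {}
    for folder in sorted(os.listdir(root)):
        path = os.path.join(root, folder)
        if os.path.isdir(path):
            runs[folder] = [
                os.path.join(path, name)
                for name in sorted(os.listdir(path))
                if name.endswith(".json")
            ]
    return runs

# Throwaway demo layout mirroring res_files/1ca_ep3 from the example above.
demo_root = tempfile.mkdtemp()
os.makedirs(os.path.join(demo_root, "1ca_ep3"))
with open(os.path.join(demo_root, "1ca_ep3", "results.json"), "w") as f:
    json.dump([{"image_id": 1, "caption": "a caption"}], f)

runs = collect_result_files(demo_root)
# Each listed file would then be scored against the ground-truth
# annotations (e.g. via pycocoevalcap), which is not done here.
```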
./
- metrics_compute.py : script that computes the metrics from the JSON files in the folders added as explained above
- visualize_captions.ipynb : a Jupyter notebook that displays some example captions from a specific JSON file
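As a rough illustration of what the notebook does, the sketch below prints a few caption entries from result data; the inline sample and the show_captions helper are invented for illustration, with real data living under res_files/ as described above:

```python
import json

# Inline sample standing in for one of the JSON result files.
sample = [
    {"image_id": 1, "caption": "a man riding a bike down a street"},
    {"image_id": 2, "caption": "a plate of food on a table"},
]

def show_captions(results, n=5):
    """Format the first n generated captions for display."""
    return [f"image {entry['image_id']}: {entry['caption']}"
            for entry in results[:n]]

for line in show_captions(sample):
    print(line)
```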
- Microsoft COCO Captions: Data Collection and Evaluation Server
- PTBTokenizer: We use the Stanford Tokenizer which is included in Stanford CoreNLP 3.4.1.
- BLEU: BLEU: a Method for Automatic Evaluation of Machine Translation
- Meteor: Project page with related publications. We use the latest version (1.5) of the code. Changes have been made to the source code to properly aggregate the statistics for the entire corpus.
- Rouge-L: ROUGE: A Package for Automatic Evaluation of Summaries
- CIDEr: CIDEr: Consensus-based Image Description Evaluation
- SPICE: SPICE: Semantic Propositional Image Caption Evaluation
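The library computes all of these metrics itself. Purely as an illustration of the clipped n-gram precision at the heart of BLEU, here is a simplified single-reference sketch (no brevity penalty, no smoothing, and only one reference, unlike the full metric):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: candidate n-gram counts are capped at
    their count in the reference before computing precision."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
```

For these toy sentences, 5 of the 6 candidate unigrams and 3 of the 5 candidate bigrams appear in the reference (with clipping), giving precisions of 5/6 and 3/5.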
- Xinlei Chen (CMU)
- Hao Fang (University of Washington)
- Tsung-Yi Lin (Cornell)
- Ramakrishna Vedantam (Virginia Tech)
- David Chiang (University of Notre Dame)
- Michael Denkowski (CMU)
- Alexander Rush (Harvard University)