This repository has been archived by the owner on Nov 1, 2023. It is now read-only.

gabrielsantosrv/coco-caption---My-changes
Microsoft COCO Caption Evaluation

Evaluation code for MS COCO caption generation.

Requirements

  • Java 1.8.0
  • Python 2 or 3
    • gensim

Files

./

  • cocoEvalCapDemo.py (demo script)

./annotation

  • captions_val2014.json (MS COCO 2014 caption validation set)
  • Visit MS COCO download page for more details.

./results

  • captions_val2014_fakecap_results.json (example fake results for running the demo)
  • Visit MS COCO format page for more details.
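
A results file such as captions_val2014_fakecap_results.json is a JSON array with one {image_id, caption} entry per image. A minimal sketch of writing and reloading such a file (the image ids below are illustrative placeholders, not real val2014 ids):

```python
import json

# The caption results format: a JSON array with one entry per image,
# pairing an "image_id" with the generated caption string.
# The image ids here are placeholders, not real val2014 ids.
results = [
    {"image_id": 1, "caption": "a black and white photo of a street"},
    {"image_id": 2, "caption": "a group of people standing around a table"},
]

with open("my_fakecap_results.json", "w") as f:
    json.dump(results, f)

# Reload it the way the evaluation loader would.
with open("my_fakecap_results.json") as f:
    loaded = json.load(f)

print(len(loaded), sorted(loaded[0].keys()))
```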

./pycocoevalcap: The folder containing all the evaluation code.

  • evals.py: Contains the COCOEvalCap class, which can be used to evaluate results on COCO.
  • tokenizer: Python wrapper of Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • meteor: METEOR evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr evaluation code
  • spice: SPICE evaluation code
  • wmd: Word Mover's Distance evaluation code
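
As a rough illustration of what these metrics compute, here is a simplified, self-contained sketch of BLEU-1 (clipped unigram precision with a brevity penalty). This is not the implementation in ./pycocoevalcap/bleu, which also handles higher-order n-grams and corpus-level aggregation:

```python
import math
from collections import Counter

def bleu1(candidate, references):
    """Simplified BLEU-1 sketch: clipped unigram precision times a
    brevity penalty, for a single candidate against its references."""
    cand = candidate.split()
    cand_counts = Counter(cand)
    # Clip each word's count by its maximum count across the references.
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand_counts.items())
    precision = clipped / max(len(cand), 1)
    # Brevity penalty against the reference closest in length.
    ref_len = min((len(r.split()) for r in references),
                  key=lambda l: abs(l - len(cand)))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * precision

score = bleu1("a man riding a horse",
              ["a man is riding a horse", "a person rides a horse"])
print(round(score, 3))
```

Here every candidate unigram appears in a reference and the length matches the closest reference, so the sketch scores 1.0; shorter or off-topic captions are penalized by the brevity penalty and the clipped precision respectively.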

Setup

  • You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: bash get_stanford_models.sh
  • Note: SPICE will try to create a cache of parsed sentences in ./pycocoevalcap/spice/cache/. This dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./pycocoevalcap/spice. In the same file, caching can be turned off by removing the '-cache' argument to 'spice_cmd'.
  • You will also need to download the Google News word2vec model (GoogleNews-vectors-negative300) for use by WMD. To do this, run: bash get_google_word2vec_model.sh

AllSPICE

AllSPICE is a metric that measures both the diversity and the accuracy of a generated caption set. It was proposed in Analysis of diversity-accuracy tradeoff in image captioning.

See cocoEvalAllSPICEDemo.ipynb to learn how to use it.

You can also check out ruotianluo/self-critical.pytorch/eval_multi.py to see how it is used in practice and ruotianluo/SPICE to see what change was made to the original SPICE code to realize AllSPICE.

Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)

Acknowledgement

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)
  • Mert Kilickaya (Hacettepe University)
