Automatic text metrics: BLEU, ROUGE, and METEOR, plus extras like vocab and ngrams.


# Compares each candidate (c) separately against all references (r).
python -m textmetrics.main c1.txt c2.txt --references r1.txt r2.txt r3.txt
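The comparison semantics above can be sketched in a few lines of Python. This is a hedged illustration only, not the textmetrics API: `compare_all` and `score` are hypothetical names; the point is that each candidate is scored separately against the full reference set.

```python
def compare_all(candidates, references, score):
    """Score each candidate independently against all references.

    `score` is any hypothetical metric function taking (candidate, references)
    and returning a number; a mapping candidate -> score is returned.
    """
    return {cand: score(cand, references) for cand in candidates}
```
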



  • Perl (for BLEU)
  • Java 1.8 (for METEOR)
  • Python 3.6+

pip install textmetrics
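Since BLEU and METEOR shell out to Perl and Java, a quick sanity check that those tools are on your PATH can save a confusing failure later. This is a standalone snippet (the `find_tools` helper is not part of textmetrics):

```python
import shutil

def find_tools(tools=("perl", "java")):
    """Map each external tool name to its resolved path, or None if missing."""
    return {tool: shutil.which(tool) for tool in tools}

# Example: print which required tools are missing.
missing = [t for t, path in find_tools().items() if path is None]
if missing:
    print("Missing external tools:", ", ".join(missing))
```
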


  • BLEU
  • ROUGE
  • METEOR
  • extras: vocab and ngram statistics


BLEU and METEOR use the reference implementations (in Perl and Java, respectively). We originally used the reference Perl implementation for ROUGE as well, but it ran so slowly that we opted for a Python reimplementation instead. (ROUGE's original Perl implementation is also more difficult to set up, even with wrapper libraries.)
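To give a flavor of what a Python reimplementation involves, here is a minimal sketch of ROUGE-N recall (the fraction of reference n-grams that appear in the candidate). This is an illustrative simplification, not the implementation used by textmetrics:

```python
from collections import Counter

def rouge_n_recall(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N recall: fraction of reference n-grams found in the candidate."""
    def ngrams(text: str) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    total = sum(ref.values())
    if total == 0:  # guard: reference has no n-grams at all
        return 0.0
    # Clipped overlap: each reference n-gram can only be matched as many
    # times as it occurs in the candidate.
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / total
```
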


  • pypi

  • API support (possible to have interface for passing strings?)

  • ROUGE crashes if it decides there aren't any sentences in the input or reference

  • Add back in orig ROUGE for completeness (place behind switch)

  • The BLEU Perl script fails if a filename ends in gz, because it tries to un-gzip it; this eventually happens when creating lots of temporary files. We should wrap filename creation so this suffix never occurs.

  • ngrams has a divide-by-zero error. With two simple files (two lines each, same first line, differing second line), running with 2.txt --references 1.txt 1.txt triggered it

  • Demo + guide for better README (should cover file + API usage)

  • Tests

  • Early check in each module for whether the program is runnable, plus a nice error message (e.g., no Java or wrong version, no Perl or wrong version, etc.)
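For the ngrams divide-by-zero item above, the fix is likely a guard for inputs that yield zero n-grams. A hedged sketch (the `distinct_ngram_ratio` name is hypothetical, not the textmetrics API):

```python
def distinct_ngram_ratio(tokens, n=2):
    """Fraction of n-grams in `tokens` that are distinct, guarding short inputs."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:  # guard: fewer than n tokens produces zero n-grams
        return 0.0
    return len(set(grams)) / len(grams)
```
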

Note to self: I followed this guide for packaging to PyPI, and future uploads will probably look like:

# (1) ensure tests pass

# (2) bump version in

# (3) commit + push to github

# (4) generate distribution
python setup.py sdist bdist_wheel

# (5) Upload
twine upload dist/*