# NLG Evaluation Metrics

A unified framework for recent evaluation metrics for natural language generation.

Work in progress.

## Installation

```bash
git clone git@github.com:yuhui-zh15/nlg_metrics.git
cd nlg_metrics
pip install -e .          # install in editable (development) mode
python -m pytest tests/   # verify the installation by running the test suite
```

## Get Started

```python
>>> from nlg_metrics import RougeScorer
>>> scorer = RougeScorer()
>>> scores = scorer.score(['This is a test sentence.'], ['This is another test sentence.'])
>>> print(scores)
```
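For intuition, the ROUGE-1 variant scores unigram overlap between a candidate and a reference. The standalone sketch below is independent of this package and deliberately simplified (whitespace tokenization only; real ROUGE implementations also stem and strip punctuation):

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """ROUGE-1: unigram precision/recall/F1 between a candidate and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: a unigram counts at most as often as it appears in the reference.
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# 4 of 5 unigrams match in each direction, so all three values are ~0.8.
print(rouge_1('This is a test sentence.', 'This is another test sentence.'))
```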

## Progress

| Metric | Progress | Paper |
| --- | --- | --- |
| ROUGE | COMPLETE | ROUGE: A Package for Automatic Evaluation of Summaries |
| BERTScore | COMPLETE | BERTScore: Evaluating Text Generation with BERT |
| FactScore | COMPLETE | Evaluating the Factual Correctness for Abstractive Summarization |
| MoverScore | TODO | MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance |
| BLEU | TODO | BLEU: a Method for Automatic Evaluation of Machine Translation |
| METEOR | TODO | METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments |
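Assuming the completed metrics share the interface shown in Get Started, usage of the other scorers would presumably look like the following (the `BERTScorer` class name below is an assumption mirroring `RougeScorer`, not a confirmed API of this package):

```python
>>> from nlg_metrics import BERTScorer  # assumed class name, mirroring RougeScorer
>>> scorer = BERTScorer()
>>> scorer.score(['This is a test sentence.'], ['This is another test sentence.'])
```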
