Evaluation code for various unsupervised automated metrics for Natural Language Generation.
A neural network that generates captions for an image using a CNN and an RNN with beam search.
A well-tested, multi-language evaluation framework for text summarization.
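As a rough illustration of the beam-search decoding step such caption generators use, here is a minimal sketch over a generic next-token scorer; the `step_fn` interface and all names are illustrative assumptions, not taken from the repository above.

```python
import math

def beam_search(step_fn, start, beam_width=3, max_len=10, eos="<eos>"):
    """Greedy-but-wider decoding: step_fn(seq) returns (token, prob)
    candidates for the next token; note this is a toy sketch."""
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        expanded = []
        for seq, score in beams:
            if seq[-1] == eos:
                expanded.append((seq, score))  # finished beams carry over
                continue
            for tok, p in step_fn(seq):
                expanded.append((seq + [tok], score + math.log(p)))
        # keep only the top-k highest-scoring partial sequences
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
        if all(seq[-1] == eos for seq, _ in beams):
            break
    return beams[0][0]
```

In a real captioner, `step_fn` would run the RNN one step conditioned on the CNN image features; here any function returning candidate tokens with probabilities will do.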
Machine Translation (MT) Evaluation Scripts
A Python 3 library for evaluating captions' BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption
A classifier for evaluating machine translation quality by predicting the candidate sentence that best matches the reference sentence
MAchine Translation Evaluation Online (MATEO)
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
Corpus level and sentence level BLEU calculation for machine translation
Implementation of the paper "BLEU: a Method for Automatic Evaluation of Machine Translation"
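For reference, a minimal sentence-level sketch of the BLEU algorithm those repositories implement (modified n-gram precision with a brevity penalty, per the Papineni et al. paper), assuming whitespace-tokenized input; the function names are illustrative, not from any listed project.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # count all n-grams of length n in the token list
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        if not cand:
            return 0.0
        # clip each candidate n-gram count by its maximum count in any reference
        max_ref = Counter()
        for ref in references:
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
        precisions.append(clipped / sum(cand.values()))
    if min(precisions) == 0:
        return 0.0
    # brevity penalty against the closest reference length
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    # geometric mean of the n-gram precisions, uniformly weighted
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Corpus-level BLEU differs in that clipped counts and lengths are accumulated over all sentence pairs before the precisions and brevity penalty are computed, rather than averaging sentence scores.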
Image caption generation is a task that combines computer vision and natural language processing to recognize the content of an image and describe it in a natural language such as English.
Neural dialogue generation benchmarks implemented in TensorFlow 2.0
Automatic text metrics (BLEU, ROUGE, METEOR, and more)
BLEU Score in Rust