Evaluation code for various unsupervised automated metrics for Natural Language Generation.
A well-tested, multi-language evaluation framework for text summarization.
A neural network that generates captions for an image using a CNN and an RNN with beam search.
A Python 3 library for evaluating caption metrics: BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption
Evaluation tools for image captioning. Including BLEU, ROUGE-L, CIDEr, METEOR, SPICE scores.
Machine Translation (MT) Evaluation Scripts
MAchine Translation Evaluation Online (MATEO)
Implementation for paper BLEU: a Method for Automatic Evaluation of Machine Translation
BLEU Score in Rust
Automatic text metrics (BLEU, ROUGE, METEOR, +++)
Corpus level and sentence level BLEU calculation for machine translation
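The sentence-level BLEU calculation mentioned above can be sketched in plain Python. This is a minimal, single-reference sketch of the metric from the BLEU paper (clipped n-gram precisions combined by a geometric mean, plus a brevity penalty); real libraries such as sacrebleu add smoothing, tokenization, and multi-reference support that are omitted here.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU with uniform weights and a single reference.

    `reference` and `candidate` are lists of tokens. Returns 0.0 if any
    n-gram precision is zero (no smoothing applied).
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Modified (clipped) n-gram precision: each candidate n-gram is
        # credited at most as many times as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions, computed in log space.
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

reference = "the cat is on the mat".split()
print(sentence_bleu(reference, reference))  # identical sentences score 1.0
```

Corpus-level BLEU differs from averaging sentence scores: it accumulates the clipped n-gram counts and lengths over the whole corpus before combining them, which is why the two levels are listed separately above.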
Image caption generation is a task combining computer vision and natural language processing: recognizing the context of an image and describing it in a natural language such as English.
Neural dialogue generation benchmarks implemented in TensorFlow 2.0
A classifier for evaluating machine translation quality by predicting the sentence that best matches the reference.