aaron-project-ebleu: Quality Estimation for Machine Translation Using the Joint Method of Evaluation Criteria and Statistical Modeling
EBLEU: Quality Estimation for Machine Translation Using the Method of Evaluation Criteria
Welcome to aaron-project-ebleu! This code is open source, for research use only.
EBLEU is a language-independent machine translation evaluation metric built from three factors: a modified length penalty (MLP), n-gram precision, and n-gram recall. EBLEU was first proposed for the ACL-WMT13 Quality Estimation Tasks (http://www.statmt.org/wmt13/quality-estimation-task.html). Quality estimation is the task of evaluating machine translation output without reference translations. EBLEU estimates translation quality (in terms of post-editing effort) by combining traditional evaluation criteria with linguistic features such as part-of-speech. Experiments on the ACL-WMT13 corpora (Task 1.1, scoring and ranking for post-editing effort, English-Spanish) show that EBLEU yields acceptable scores: a Mean Absolute Error (MAE) of 16.97 and a Root Mean Squared Error (RMSE) of 21.94.
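To make the ingredients concrete, here is a minimal sketch of how n-gram precision, n-gram recall, and a length penalty can be combined into a single score. This is NOT the actual EBLEU formula from the paper: the exponential penalty and the weighted harmonic mean below are simplified stand-ins for the MLP and the paper's combination, and all function names (`ebleu_like_score`, `ngram_precision_recall`) are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision_recall(hyp, ref, n):
    """Clipped n-gram precision and recall of hyp against ref."""
    hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
    overlap = sum((hyp_ng & ref_ng).values())  # clipped matches
    precision = overlap / max(sum(hyp_ng.values()), 1)
    recall = overlap / max(sum(ref_ng.values()), 1)
    return precision, recall

def ebleu_like_score(hyp, ref, max_n=2, alpha=0.5):
    """Illustrative score: length penalty times an average, over n-gram
    orders, of a weighted harmonic mean of precision and recall.
    The penalty here is a simple BLEU-style exponential, not the MLP."""
    penalty = math.exp(1 - max(len(hyp), len(ref)) /
                       max(min(len(hyp), len(ref)), 1))
    scores = []
    for n in range(1, max_n + 1):
        p, r = ngram_precision_recall(hyp, ref, n)
        if p + r == 0:
            return 0.0
        scores.append((p * r) / (alpha * p + (1 - alpha) * r))
    return penalty * sum(scores) / len(scores)

# Identical sentences score 1.0; partial overlap scores between 0 and 1.
print(ebleu_like_score("the cat sat on the mat".split(),
                       "the cat sat on the mat".split()))
```

In a quality-estimation setting the comparison text would not be a human reference (none is available); one option is a pseudo-reference, but that choice is outside the scope of this sketch.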
EBLEU is described in detail in the paper "Quality Estimation for Machine Translation Using the Joint Method of Evaluation Criteria and Statistical Modeling" by Aaron Li-Feng Han, Yi Lu, Derek F. Wong, Lidia S. Chao, Liangye He, and Junwen Xing. In Proceedings of the ACL Eighth Workshop on Statistical Machine Translation, pages 365-372, Sofia, Bulgaria, August 8-9, 2013. Association for Computational Linguistics. (http://www.statmt.org/wmt13/pdf/WMT45.pdf). If you use the EBLEU metric in your research, please cite this paper.
Contact: hanlifengaaron AT gmail.com