IndicMT-Eval

This repository contains the code for the paper "IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for Indian Languages", to appear at ACL 2023.

Overview

We contribute a Multidimensional Quality Metrics (MQM) dataset for Indian languages, created by taking the outputs of 7 popular MT systems and asking human annotators to judge the quality of the translations using the MQM style guidelines. Using this rich set of annotated data, we evaluate the performance of 16 metrics of various types on en-xx translations for 5 Indian languages. We also provide an updated metric called Indic-COMET, which not only shows stronger correlations with human judgement on Indian languages, but is also more robust to perturbations.

Please find more details of this work in our paper (link coming soon).

MQM Dataset

The MQM annotated dataset, collected with the help of language experts for the 5 Indian languages (Hindi, Tamil, Marathi, Malayalam, Gujarati), can be downloaded from here (link coming soon).

An example of an MQM annotation, containing the source, the reference, and the translated output with error spans demarcated by the annotator, is shown in the MQM-example figure.

More details on the annotation instructions and procedures can be found in the paper.

Setup

Load the data

The easiest way to access or view the data is to visit this link. More details are available in the data folder:

cd data
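
As a rough sketch, the annotations can be loaded with pandas along the following lines. Note that the file name and column list below are hypothetical; check the data folder for the actual file names and schema.

```python
# Minimal sketch for loading the MQM annotations.
# NOTE: "mqm_annotations.csv" and the expected columns are assumptions
# for illustration; see the data folder for the real layout.
import pandas as pd

df = pd.read_csv("data/mqm_annotations.csv")  # hypothetical file name

# Each row is expected to hold a source sentence, a reference,
# an MT system output, and the annotated error information.
print(df.columns.tolist())
print(df.head())
```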

Indic-COMET

We load the pretrained encoder and initialize it with either XLM-RoBERTa, COMET-DA, or COMET-MQM weights. During training, we divide the model parameters into two groups: the encoder parameters, comprising the encoder model, and the regressor parameters, comprising the top feed-forward network. We apply gradual unfreezing and discriminative learning rates: the encoder is frozen for the first epoch while the feed-forward network is optimized with its own learning rate, and after the first epoch the entire model is fine-tuned with a different learning rate. Since we fine-tune on a small dataset, we use early stopping with a patience of 3. The best checkpoint is selected using the overall Kendall-tau correlation on the test set. We use the COMET repository for training, and our checkpoints are compatible with their setup.
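
The following is a minimal PyTorch sketch of the two parameter groups with discriminative learning rates and gradual unfreezing described above. The toy model, attribute names, and learning-rate values are illustrative assumptions; the actual training is done through the COMET framework.

```python
import torch
import torch.nn as nn

# Toy stand-in for the architecture: an "encoder" plus a feed-forward
# "estimator" (regressor) head. Sizes and names are illustrative only.
class ToyCometModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)   # stands in for XLM-R
        self.estimator = nn.Linear(dim, 1)   # top feed-forward regressor

model = ToyCometModel()

# Discriminative learning rates: one optimizer group per component.
optimizer = torch.optim.AdamW([
    {"params": model.encoder.parameters(), "lr": 1e-5},    # hypothetical value
    {"params": model.estimator.parameters(), "lr": 3e-5},  # hypothetical value
])

loss_fn = nn.MSELoss()
for epoch in range(3):
    # Gradual unfreezing: the encoder stays frozen for the first epoch,
    # so only the regressor head receives gradient updates initially.
    for p in model.encoder.parameters():
        p.requires_grad = epoch > 0
    x = torch.randn(8, 16)   # dummy batch of segment representations
    y = torch.rand(8, 1)     # dummy quality scores
    loss = loss_fn(model.estimator(torch.relu(model.encoder(x))), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```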

Download the best checkpoints here:

- MQM: indic-comet-mqm (hparams.yaml)
- DA: indic-comet-da (hparams.yaml)
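
Since the checkpoints are compatible with the COMET setup, scoring translations with a downloaded checkpoint should look roughly like the sketch below, using the unbabel-comet package. The checkpoint path is a placeholder for wherever you saved the file, and the exact output fields may vary across comet versions.

```python
# Sketch: scoring translations with a downloaded Indic-COMET checkpoint
# via the unbabel-comet package (pip install unbabel-comet).
from comet import load_from_checkpoint

# Placeholder path: point this at the downloaded checkpoint file.
model = load_from_checkpoint("checkpoints/indic-comet-da.ckpt")

data = [{
    "src": "The weather is nice today.",  # source sentence (en)
    "mt":  "aaj mausam accha hai",        # system translation (placeholder)
    "ref": "aaj ka mausam suhana hai",    # reference translation (placeholder)
}]

# In comet v2.x, predict returns an object with per-segment and
# corpus-level scores; older versions return a (scores, sys_score) tuple.
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores)        # segment-level scores
print(output.system_score)  # corpus-level score
```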

Other Metrics

We implemented the metrics with the help of the following repositories:

- BLEU, METEOR, ROUGE-L, CIDEr, Embedding Averaging, Greedy Matching, and Vector Extrema: the implementation provided by Sharma et al. (2017).
- chrF++, TER, BERTScore, and BLEURT: the repository of Castro Ferreira et al. (2020).
- SMS, WMDo, and Mover-Score: the implementation provided by Fabbri et al. (2020).
- All remaining task-specific metrics: the official code from the respective papers.


The Python script code/evaluate.py runs all of these metrics on the given dataset.
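
For instance, a metric's segment-level scores can be meta-evaluated against the human MQM judgments using Kendall's tau, roughly as in the sketch below. It uses sacrebleu's chrF++ and scipy, which are assumed to be installed; the example sentences and human scores are placeholders.

```python
# Sketch of the meta-evaluation idea: score translations with a metric,
# then correlate the metric's scores with human MQM judgments.
from sacrebleu.metrics import CHRF
from scipy.stats import kendalltau

chrf = CHRF(word_order=2)  # word_order=2 gives chrF++

# Placeholder data; in practice these come from the MQM dataset.
hyps = ["the cat sat on the mat", "hello world", "good morning"]
refs = ["the cat sat on a mat", "hello there, world", "good morning everyone"]
human = [0.8, 0.5, 0.9]  # hypothetical human quality judgments

metric_scores = [chrf.sentence_score(h, [r]).score for h, r in zip(hyps, refs)]
tau, _ = kendalltau(metric_scores, human)
print(f"Kendall's tau: {tau:.3f}")
```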

Citation

If you find IndicMT-Eval useful in your research or work, please consider citing our paper.

@article{DBLP:journals/corr/abs-2212-10180,
  author       = {Ananya B. Sai and
                  Tanay Dixit and
                  Vignesh Nagarajan and
                  Anoop Kunchukuttan and
                  Pratyush Kumar and
                  Mitesh M. Khapra and
                  Raj Dabre},
  title        = {IndicMT Eval: {A} Dataset to Meta-Evaluate Machine Translation metrics
                  for Indian Languages},
  journal      = {CoRR},
  volume       = {abs/2212.10180},
  year         = {2022}
}
