
HJQE: Human judgement on the word-level quality estimation for the machine translation

Data


HJQE is a benchmark dataset for word-level quality estimation (QE) of machine translation, in which all examples are annotated by expert translators. The goal of the dataset is to identify translation errors based on human judgement.

HJQE contains corpora for two translation directions: English-German and English-Chinese. For each corpus, the source and MT sentences are the same as in the WMT20 QE shared task (https://www.statmt.org/wmt20/quality-estimation-task.html).
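WMT-style word-level QE data is usually distributed as line-aligned files, with one OK/BAD tag per MT token. Below is a minimal sketch of how such annotations could be consumed; the tag scheme and the exact file layout are assumptions for illustration, not taken from this repository:

```python
def pair_tokens_with_tags(mt_line, tag_line):
    """Pair each whitespace-separated MT token with its OK/BAD tag.

    Assumes one tag per MT token, as in WMT word-level QE data.
    """
    tokens = mt_line.split()
    tags = tag_line.split()
    if len(tokens) != len(tags):
        raise ValueError(
            f"token/tag count mismatch: {len(tokens)} tokens vs {len(tags)} tags"
        )
    return list(zip(tokens, tags))


# Example: collect the tokens annotated as translation errors.
pairs = pair_tokens_with_tags("Das ist ein Beispiel", "OK OK BAD OK")
errors = [tok for tok, tag in pairs if tag == "BAD"]
```

Here `errors` would contain the MT tokens that the annotators judged erroneous, which is the signal a word-level QE model is trained to predict.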

Citation


Please cite the following paper if you find the resources in this repository useful.

@article{yang2021HJQE,
  title={Rethink about the Word-level Quality Estimation for Machine Translation from Human Judgement},
  author={Yang, Zhen and Meng, Fandong and Yan, Yuanmeng and Zhou, Jie},
  year={2022}
}
