Automatic Error Generation for Machine Translation Evaluation

HLT-MAIA/ErrorIST


ErrorIST

ErrorIST is a framework for adding fine-grained error types to text and, as far as possible, automatically evaluating an editor's proficiency at correcting them, a task that is usually manual and expensive.

ErrorIST implements common error types identified in frameworks such as the Multidimensional Quality Metrics (MQM) and in taxonomies dedicated to translation errors, such as those presented in Costa et al. (2015).

The following list summarizes the taxonomy of errors supported by ErrorIST:

  • Punctuation (omission and addition) (ex: I found , the clowns, Bob, and Clyde.);
  • Capitalisation (ex: i think my poor Slipper got dirty!);
  • Spelling (ex: I have three friends.);
  • Omission (content or function words) (ex: His hat was [].);
  • Addition (content or function words) (ex: He bought a already hat.);
  • Misselection:
    • Word-class (ex: The cutely bird is on the branch.);
    • Verbs (tense, person, or both) (ex: He had buy a suit-case.);
    • Agreement (gender, number, person, or a combination) (ex: Os lobo fugiu./The wolf ran away. In Portuguese, Os is plural and lobo singular.);
    • Contraction (ex: Ela senta-se em a cadeira./She sits in the chair. In Portuguese, em + a contract to na.);
  • Misordering (ex: I beautiful like the [] color of your eyes.);
  • Confusion of senses: a word is translated into one of its possible meanings, but not the one that is correct in the given context.
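ErrorIST's own implementation is not reproduced in this README. As a rough illustration of what injecting errors from this taxonomy can look like, here is a minimal sketch (the function names are hypothetical, not taken from the repository) of two simple injectors, one for capitalisation and one for misordering:

```python
import random


def inject_capitalisation_error(sentence, rng=random):
    """Lower-case the first letter of one capitalised word (e.g. 'I' -> 'i')."""
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w[:1].isupper()]
    if not candidates:
        return sentence  # nothing to corrupt
    i = rng.choice(candidates)
    words[i] = words[i][0].lower() + words[i][1:]
    return " ".join(words)


def inject_misordering_error(sentence, rng=random):
    """Swap two adjacent words to simulate a word-order error."""
    words = sentence.split()
    if len(words) < 2:
        return sentence  # too short to reorder
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)


print(inject_capitalisation_error("I think my poor slipper got dirty!", random.Random(0)))
# → i think my poor slipper got dirty!
print(inject_misordering_error("I like the beautiful color of your eyes.", random.Random(0)))
```

Passing a seeded `random.Random` makes the corruption reproducible, which matters when the same corrupted text must later be traced back to the injected errors.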

ErrorIST Architecture

ErrorIST is composed of three modules: Error Generator, Tracer, and Evaluator.

The Error Generator generates the errors, the Tracer analyses how the generated errors were corrected, and the Evaluator grades the modifications detected by the Tracer; if an error cannot be evaluated automatically, a manual evaluation is requested.
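The Tracer/Evaluator interaction can be sketched as follows. This is an illustrative mock-up, not ErrorIST's actual code: the class and function names are hypothetical, and it makes the simplifying assumption that the editor's output can still be aligned to the corrupted text by token position.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class InjectedError:
    error_type: str   # e.g. "capitalisation", "misordering"
    position: int     # token index where the error was introduced
    original: str     # original token span (space-separated)
    corrupted: str    # token span after the error was injected


def trace_corrections(injected, edited_tokens):
    """Tracer sketch: for each injected error, check whether the editor
    restored the original token span at the recorded position."""
    trace = {}
    for err in injected:
        n = len(err.original.split())
        span = " ".join(edited_tokens[err.position:err.position + n])
        trace[err] = (span == err.original)
    return trace


def evaluate(trace):
    """Evaluator sketch: grade the editor as the fraction of injected
    errors that were corrected (1.0 if there were none)."""
    if not trace:
        return 1.0
    return sum(trace.values()) / len(trace)


errors = [InjectedError("capitalisation", 0, "I", "i")]
edited = "I think my slipper got dirty !".split()
print(evaluate(trace_corrections(errors, edited)))  # → 1.0
```

A real tracer would need an alignment step (editors may insert or delete tokens elsewhere), and errors that fail this automatic check would be handed off for manual evaluation, as described above.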

Code 2.0

A second version of the code can be downloaded here.

Bibliography

Two theses contributed to the development of ErrorIST. They were written by Tiago Santos and Raquel Cristovão, and can be found here and here.
