Metrics for evaluating grammatical error corrections

These metrics were used in Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault, "There's No Comparison: Reference-less Evaluation Metrics in Grammatical Error Correction" (EMNLP 2016).

If you use this code or the accompanying CodaLab evaluation, please cite:

```
@InProceedings{napoles-sakaguchi-tetreault:2016:EMNLP2016,
  author    = {Napoles, Courtney  and  Sakaguchi, Keisuke  and  Tetreault, Joel},
  title     = {There's No Comparison: Reference-less Evaluation Metrics in Grammatical Error Correction},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing},
  month     = {November},
  year      = {2016},
  address   = {Austin, Texas},
  publisher = {Association for Computational Linguistics},
  pages     = {2109--2115},
  url       = {https://aclweb.org/anthology/D16-1228}
}
```

Online evaluation

The CodaLab evaluation can be used to evaluate grammatical error corrections of the CoNLL-2014 shared task test set.

https://competitions.codalab.org/competitions/15475

Contents

  1. codalab/
    • Code for evaluating GEC output on the CoNLL-2014 test set using a combination of metrics and reference sets.
    • The platform for scoring output can be found at https://competitions.codalab.org/competitions/15475
    • This directory contains an error-count method using LanguageTool and interpolations of LT with the existing GEC metrics GLEU, I-measure, and M2; a minimal sketch of the idea follows below.
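
For illustration, here is a minimal sketch of the idea behind an error-count score and its interpolation with a reference-based metric. It assumes the language_tool_python package, a per-token normalization, and an interpolation weight alpha; all three are assumptions of this sketch, not necessarily how the code in codalab/ implements it.

```python
# Illustrative sketch only -- not the implementation in codalab/.
# Assumes the language_tool_python package; the normalization and the
# interpolation weight `alpha` are hypothetical choices for this example.
import language_tool_python

tool = language_tool_python.LanguageTool('en-US')

def error_count_score(sentence: str) -> float:
    """Fluency proxy: fraction of tokens NOT flagged by LanguageTool."""
    n_errors = len(tool.check(sentence))
    n_tokens = max(len(sentence.split()), 1)
    return max(1.0 - n_errors / n_tokens, 0.0)

def interpolated_score(sentence: str, metric_score: float,
                       alpha: float = 0.5) -> float:
    """Weighted average of the LT score and a reference-based metric
    score (e.g., GLEU, I-measure, or M2) for the same sentence."""
    return alpha * error_count_score(sentence) + (1.0 - alpha) * metric_score

# Example usage, given a precomputed reference-based score `gleu_score`:
# interpolated_score("She go to school yesterday .", gleu_score, alpha=0.5)
```
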
  2. heilman-et-al/