Source code for paper: Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

README.md

Introduction

Source code for the paper: Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

Authors: Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu
arXiv: https://arxiv.org/abs/1903.00138
Comments: Accepted by NAACL 2019 (oral)

Dependencies

  • PyTorch version >= 1.0.0
  • Python version >= 3.6

Downloads

  • Download CoNLL-2014 evaluation scripts
cd gec_scripts/
sh download.sh

Train with the pre-trained model

cd fairseq-gec
pip install --editable .
sh train.sh ${device_id} ${experiment_name}

Train without the pre-trained model

Modify train.sh to train without the pre-trained model:

  • delete the parameter "--pretrained-model"
  • change the value of "--max-epoch" to 15 (more epochs are needed when training without pre-trained parameters)

Evaluate on the CoNLL-2014 test dataset

sh g.sh ${device_id} ${experiment_name}
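The CoNLL-2014 evaluation reports F0.5 via the M2 scorer (downloaded above). As a reference, here is a minimal sketch of the F0.5 computation from edit-level precision and recall; the edit counts below are made-up illustrative numbers, not results from the paper:

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score; beta=0.5 weights precision twice as heavily as recall,
    matching the F0.5 metric reported by the CoNLL-2014 M2 scorer."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# hypothetical edit counts: proposed edits, gold edits, correct matches
proposed, gold, correct = 30, 60, 18
p, r = correct / proposed, correct / gold
print(round(f_beta(p, r), 4))  # 0.5
```

Precision-heavy weighting reflects that proposing a wrong correction is considered worse than missing one.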

Get pre-trained models from scratch

We have published our pre-trained models, as mentioned in the Downloads section. We list the steps here in case someone wants to reproduce the pre-trained models from scratch.

1. # prepare target sentences using the One Billion Word Benchmark dataset
2. sh noise.sh                  # generate the noised source sentences
3. sh preprocess_noise_data.sh  # preprocess data
4. sh pretrain.sh 0,1 _pretrain # pretrain
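Pre-training uses a denoising objective: clean target sentences are corrupted into synthetic "source" sentences that the model learns to correct. The repo's actual corruption logic lives in noise_data.py; the operations and probabilities below are a hypothetical sketch of the general idea, not the exact implementation:

```python
import random

def add_noise(tokens, p_drop=0.1, p_swap=0.1, rng=None):
    """Corrupt a clean token sequence into a noised 'source' sentence.
    Illustrative only: randomly drops tokens and swaps adjacent tokens."""
    rng = rng or random.Random(0)
    out = [t for t in tokens if rng.random() >= p_drop]  # random deletion
    i = 0
    while i + 1 < len(out):                              # local adjacent swaps
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return out

clean = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(add_noise(clean)))
```

Training on (noised, clean) pairs gives the correction model useful parameters before it ever sees labeled GEC data.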

Acknowledgments

Our code was modified from the fairseq codebase. We use the same license as fairseq(-py).

Citation

Please cite as:

@article{zhao2019improving,
  title={Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data},
  author={Zhao, Wei and Wang, Liang and Shen, Kewei and Jia, Ruoyu and Liu, Jingming},
  journal={arXiv preprint arXiv:1903.00138},
  year={2019}
}