CLMAndWWM

This repository contains the data and models from the paper "Is Whole Word Masking Always Better for Chinese BERT?: Probing on Chinese Grammatical Error Correction", accepted at Findings of ACL 2022 as a short paper. The paper link is here.

The data was obtained from the CGED (Chinese Grammatical Error Diagnosis) benchmark and subsequently processed by us. The checkpoint link for RoBERTa is here; you can load the model using the Hugging Face interfaces.
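The character-level masking (CLM) vs. whole word masking (WWM) distinction that the paper probes can be sketched as follows. This is a minimal illustration only; the example sentence and its word segmentation are our own assumptions, not data from the paper.

```python
# Minimal sketch of character-level masking (CLM) vs. whole word masking (WWM)
# for Chinese text. The sentence and word segmentation below are illustrative
# assumptions, not the paper's data.
MASK = "[MASK]"

sentence = list("模型预测结果")     # characters: 模 型 预 测 结 果
words = ["模型", "预测", "结果"]    # assumed word segmentation

# CLM: mask a single character, regardless of word boundaries.
clm = sentence.copy()
clm[2] = MASK                       # mask "预" only
print("CLM:", "".join(clm))         # 模型[MASK]测结果

# WWM: mask every character of the word containing the chosen character.
wwm = sentence.copy()
start = sentence.index("预")
word_len = len("预测")              # the word "预测" spans two characters
for i in range(start, start + word_len):
    wwm[i] = MASK
print("WWM:", "".join(wwm))         # 模型[MASK][MASK]结果
```

Under WWM the model must reconstruct the whole word from context alone, whereas under CLM the unmasked character of the same word can leak information, which is the contrast the paper's probing experiments examine.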
