
Where to get the exact English and Chinese pretraining data used in the BERT paper #12

Closed
guotong1988 opened this issue Mar 23, 2021 · 1 comment

Comments

@guotong1988

Does anyone know where to get them?
Thank you.

@StillKeepTry
Contributor

Generally, you need to crawl Wikipedia + BookCorpus yourself. Below is a script for building the datasets:

https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/data/create_datasets_from_start.sh
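If you don't want to run the full NVIDIA pipeline, here is a minimal sketch of an alternative that pulls comparable public corpora via the Hugging Face `datasets` library. The dataset names (`wikipedia` with the `20220301.en` / `20220301.zh` configs, and `bookcorpus`) are assumptions about what is hosted on the Hub, not something confirmed in this thread, and the BookCorpus replica is only an approximation of the data used in the BERT paper.

```python
# Sketch: assemble an approximate BERT-style pretraining corpus.
# Assumes the Hugging Face `datasets` library and the dataset names below
# (assumptions, not confirmed by this issue).
from datasets import load_dataset

# English Wikipedia snapshot (pre-processed dump, no crawling needed).
wiki_en = load_dataset("wikipedia", "20220301.en", split="train")

# BookCorpus replica hosted on the Hub; the original corpus is no longer
# distributed, so this only approximates the BERT paper's data.
books = load_dataset("bookcorpus", split="train")

# For Chinese pretraining data, a Chinese Wikipedia snapshot can be used
# in the same way (again, an assumption):
# wiki_zh = load_dataset("wikipedia", "20220301.zh", split="train")

# Write one document per line, which is the raw-text layout most BERT-style
# preprocessing scripts (including the NVIDIA one above) expect as input.
with open("pretraining_corpus.txt", "w", encoding="utf-8") as f:
    for example in wiki_en:
        f.write(example["text"].replace("\n", " ") + "\n")
    for example in books:
        f.write(example["text"] + "\n")
```

From there you would still run sentence splitting, sharding, and masked-LM example creation with whatever preprocessing scripts your BERT implementation provides.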
