Training from scratch with other datasets (other languages) #3
dumitrescustefan opened this issue:

Hi! Thanks @richarddwang for the reimplementation. For some time I have been getting less-than-desired results with the official Hugging Face ELECTRA implementation. Would you consider adding support for pretraining on other datasets (meaning other languages)? Right now it's just the wiki and books datasets from HF/nlp (roughly as sketched below).
Thanks!
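For context, the wiki and books data referred to above is presumably pulled in through HF's nlp library along these lines (a sketch only; the exact dataset config strings are assumptions and vary between nlp versions):

```python
import nlp  # Hugging Face's `nlp` library, later renamed `datasets`

# Roughly how the English Wikipedia and BookCorpus pretraining data
# would be loaded; the "20200501.en" config string is an assumption.
wiki = nlp.load_dataset("wikipedia", "20200501.en", split="train")
books = nlp.load_dataset("bookcorpus", split="train")
```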
richarddwang commented:

Hi @dumitrescustefan,
You can try exploring the available datasets first: https://huggingface.co/datasets
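The same library can also point at a local corpus, which is one route to another language. A minimal sketch, assuming your nlp version ships the generic text loading script (my_corpus.txt is a placeholder for a one-sentence-per-line file):

```python
import nlp

# Load a plain-text file as a dataset; each line becomes one example
# in a "text" column. "text" is the generic loading script.
dataset = nlp.load_dataset(
    "text", data_files={"train": "my_corpus.txt"}, split="train"
)
print(dataset[0]["text"])  # first line of the corpus
```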
dumitrescustefan replied:

Thanks for the quick response. By "the ELECTRA implementation from HF" I mean the electra-trainer branch from HF (the only trainer I managed to get working), which has been sitting in a PR for a long time. What I am trying to do is to pretrain ELECTRA (small, for now) on a different dataset in another language.

By "less than desired results" I mean rather poor performance: more than 20 points below a pretrained BERT on the same dataset. Granted, this was an ELECTRA checkpoint with only 150K steps, but in my opinion the gap should be much smaller, even at only 150K steps with batch size 128.

So, given that you identified that bug in the code, and that electra-trainer is pretty cumbersome to use, I was wondering whether you plan to extend your code to allow an external txt file to serve as the training corpus: basically what HF's transformers classes LineByLineTextDataset and DataCollator do now to allow training on any text (a sketch of that pattern follows below).

I will try your suggestion of using HF/nlp to load a local dataset, and I'll come back with a status update. That should remove the need for LineByLineTextDataset and the rest. Thanks!
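For reference, a minimal sketch of the transformers pattern mentioned above; the tokenizer choice and file path are illustrative, not something from the repo:

```python
from transformers import (
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
)

# Tokenizer choice is illustrative; any pretrained tokenizer works here.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

# One training example per non-empty line of the raw text file.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="my_corpus.txt",  # placeholder path
    block_size=128,
)

# Randomly masks 15% of the tokens in each batch, MLM-style.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```

Reproducing this pattern in the repo would let any line-delimited text file stand in for the wiki/books data.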
richarddwang commented:

Best wishes to you!