Training from scratch with other datasets (other languages) #3

Closed · dumitrescustefan opened this issue Sep 8, 2020 · 3 comments


dumitrescustefan commented Sep 8, 2020

Hi! Thanks @richarddwang for the reimplementation. For some time I have been getting less-than-desired results with the official Hugging Face ELECTRA implementation. Would you consider adding support for pretraining on other datasets (i.e., other languages)? Right now it is just the wiki and books corpora from hf/nlp.

Thanks!

richarddwang (Owner) commented

Hi @dumitrescustefan,

  1. Could I ask what you mean by "electra implementation"? Do you mean the model architectures, the hosted pretrained models, or the ELECTRA trainer that has been sitting in a PR for a long time?

  2. I am also wondering whether you "got less than desired results" with this implementation and that is why you want to try different data. If so, there might be something I should fix or could help with.

  3. I am glad that you like this project. However, it is actually for my personal research, and I have already spent unexpectedly much time on it, so there is currently no plan to add data for other languages or to improve the user interface.

You can explore the available datasets first: https://huggingface.co/datasets
Or try using your own dataset with hf/nlp: https://huggingface.co/nlp/loading_datasets.html#from-local-files
If you have problems applying your hf/nlp dataset to this implementation, you can open an issue and I will try to help you.
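
For example, here is a minimal sketch of loading a local plain-text corpus with hf/nlp (the file name `my_corpus.txt` and the one-example-per-line layout are just assumptions):

```python
# Minimal sketch: load a local plain-text corpus with huggingface/nlp.
# Assumes a hypothetical file `my_corpus.txt` with one sentence or
# document per line.
import nlp

dataset = nlp.load_dataset("text", data_files={"train": "my_corpus.txt"})
print(dataset["train"][0])  # {'text': '...first line of the corpus...'}
```

In principle the resulting dataset can then be tokenized with `.map()` and used in place of the wiki and books datasets.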

dumitrescustefan (Author) commented

Thanks for the quick response. By the HF electra implementation I mean the electra-trainer branch (the only trainer I managed to get working) that has been sitting in a PR for a long time. What I am trying to do is pretrain ELECTRA (small, for now) on a different dataset and in another language. By "less than desired results" I mean rather poor performance: more than 20 points below a pretrained BERT on the same dataset, though that was with an ELECTRA checkpoint trained for only 150K steps. IMHO the gap should be much smaller, even at 150K steps with batch size 128.

So, given that you identified that bug in the code, and that electra-trainer is pretty cumbersome to use, I was wondering whether you plan to change your code to allow an external txt file to serve as the training corpus (basically what HF's transformers classes LineByLineTextDataset and the DataCollator do now to allow training on any text).
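
For reference, here is a rough sketch of the setup I mean with those two classes, assuming a plain MLM-style masking step (the tokenizer checkpoint and file path are just placeholders, and ELECTRA's generator/discriminator objective needs more than this collator provides):

```python
# Rough sketch of the transformers utilities mentioned above; the file path
# and tokenizer checkpoint are placeholders, and this only covers the plain
# MLM masking step, not the full ELECTRA generator/discriminator objective.
from transformers import (
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

train_dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="my_corpus.txt",  # one training example per line
    block_size=128,
)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```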

I will try your suggestion of loading a local dataset with hf/nlp and come back with a status update. That should remove the need for LineByLineTextDataset and the rest. Thanks!

richarddwang (Owner) commented

Best wishes to you!
