How can I pretrain ELECTRA starting from weights from Google? #26
Thank you for responding to my question. I got it working, but I seem to be getting strange results from the training process: it always reports a training loss of 0.000000. Is this just because the model is already well trained? Also, is it normal for each training epoch to take only 1–2 seconds, or is this a sign that my dataset was poorly configured?
There is probably an error being caught and silenced. Because fastai doesn't support specifying a number of training steps, I wrote a callback myself to do that. If you comment out this callback and run again, you will see the error. Line 394 in ab29d03
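For context, a fastai callback can end training early by raising `CancelFitException`; below is a minimal sketch of a step-limiting callback in that spirit. The class name is invented and this is not the repo's actual code at line 394:

```python
from fastai.callback.core import Callback, CancelFitException

class StopAfterNSteps(Callback):
    "End training after a fixed number of training batches (illustrative sketch)."
    def __init__(self, n_steps): self.n_steps = n_steps
    def before_fit(self): self.steps_done = 0
    def after_batch(self):
        if not self.training: return  # only count training batches
        self.steps_done += 1
        if self.steps_done >= self.n_steps:
            raise CancelFitException()  # fastai catches this and ends fit cleanly

# usage: learn.fit(1, cbs=[StopAfterNSteps(10_000)])
```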
Oh, perfect, thank you. I was getting an error because I added a special token to the tokenizer and needed to notify the generator and discriminator of the new size of the token embeddings. However, I am now getting a memory error. Usually I resolve this by lowering the batch size, but I am not sure where that is set in your code. I am using an NVIDIA Tesla P100, and this is the error message:
Sorry to ask so many questions.
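For reference, notifying both models of the new vocabulary size looks roughly like this with the Hugging Face API; the checkpoint names and the `[NEW]` token below are placeholders:

```python
from transformers import ElectraTokenizerFast, ElectraForMaskedLM, ElectraForPreTraining

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEW]"]})  # placeholder token

generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# Both networks embed the same vocabulary, so both must be resized
generator.resize_token_embeddings(len(tokenizer))
discriminator.resize_token_embeddings(len(tokenizer))
```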
No worries! Here is where the batch size is set. Line 79 in ab29d03
You can change it there.
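For illustration, assuming the presets live on a config object `c` with a `bs` attribute (an unverified assumption about pretrain.py), the override would be a single line after the presets:

```python
# Hypothetical one-line override in pretrain.py, placed after the per-size
# presets; `c` and `bs` are assumed names, not verified against the repo.
c.bs = 32  # small enough for a 16 GB Tesla P100 / Colab GPU
```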
Awesome, I got it working! I did have to lower my batch size all the way down to 32 with Google Colab Pro, though (quite a bit lower than your presets). On another note, your `multi_task.py` file caught my eye for my own research as well, but I'll open a new issue so as not to bog this one down.
Side question: how can we pretrain ELECTRA starting from the weights of other pretrained models, such as RoBERTa?
There's no direct way to do this.
Hi, thank you for the wonderful code.
However, I still got the following error, which occurs in the fastai learner file. I am not sure whether it is due to the package versions. Do you have any hints? I'd appreciate it.

```
Traceback (most recent call last):
  File "/home/anaconda3/envs/electra/lib/python3.7/site-packages/fastai/learner.py", line 137, in _call_one
```
Hi @JiazhaoLi. Did you solve the problem (`sort_by_run` not found)? I ran into the same error recently. Update:
This issue is to answer a question from the Hugging Face forum.
Although I haven't tried it, it should be possible.
Make sure `my_model` is set to `False` to use the Hugging Face model.
electra_pytorch/pretrain.py, Line 43 in ab29d03
Change `model(config)` -> `model.from_pretrained(model_name)`.
electra_pytorch/pretrain.py, Lines 364 to 365 in ab29d03
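Concretely, loading Google's published weights through the Hugging Face ELECTRA classes would look something like this (a sketch; the variable names in pretrain.py may differ):

```python
from transformers import ElectraForMaskedLM, ElectraForPreTraining

# Before: randomly initialized models built from a config, e.g.
#   generator = ElectraForMaskedLM(config)

# After: load Google's published weights; these two checkpoints are the
# official small-size generator/discriminator pair on the Hugging Face hub.
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
```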
Be careful about `size`, `max_length`, and other configs.
electra_pytorch/pretrain.py, Line 38 and Lines 76 to 81 in ab29d03
Note: the published ELECTRA models are actually the ++ models described in Appendix D of the paper, and the max sequence length of ELECTRA-Small / Small++ is 128 / 512.
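So if you start from the published Small weights, match the ++ settings in your pretraining config. A hypothetical sketch, assuming the config object and attribute names used in pretrain.py (unverified):

```python
# Assumed attribute names, not verified against the repo: the published
# "small" checkpoint is really Small++, trained with max sequence length 512
# rather than the paper's 128, so the config should match.
c.size = 'small'
c.max_length = 512
```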
Feel free to tag me if you have other questions.