
How can I pretrain ELECTRA starting from weights from Google? #26

Closed
richarddwang opened this issue May 15, 2021 · 9 comments

@richarddwang
Owner

This issue is to answer a question from the Hugging Face forum.

Although I haven't tried it, it should be possible.

  1. Make sure my_model is set to False to use the Hugging Face model

    'my_model': False, # only for my personal research

  2. Change model(config) -> model.from_pretrained(model_name) (see the sketch below the note)

    electra_pytorch/pretrain.py

    Lines 364 to 365 in ab29d03

    generator = ElectraForMaskedLM(gen_config)
    discriminator = ElectraForPreTraining(disc_config)

  3. Be careful about size, max_length, and other configs

    'size': 'small',

    i = ['small', 'base', 'large'].index(c.size)
    c.mask_prob = [0.15, 0.15, 0.25][i]
    c.lr = [5e-4, 2e-4, 2e-4][i]
    c.bs = [128, 256, 2048][i]
    c.steps = [10**6, 766*1000, 400*1000][i]
    c.max_length = [128, 512, 512][i]

Note: the published ELECTRA models are actually the ++ models described in Appendix D, and the max sequence length of ELECTRA-Small / Small++ is 128 / 512.
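
A minimal sketch of the change in step 2, assuming you start from Google's small++ checkpoints on the Hugging Face Hub (the model names below are Hub identifiers, not something defined in this repo; swap in base/large as needed):

    from transformers import ElectraForMaskedLM, ElectraForPreTraining

    # Load Google's published weights instead of initializing from a fresh config.
    # These Hub names point to the small++ generator/discriminator checkpoints.
    generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
    discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")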

Feel free to tag me if you have other questions.

@lucaguarro

Thank you for responding to my question. I got it working, but I am perhaps getting strange results from the training process. It always reports a training loss of 0.000000. Is this just because the model has already been trained well enough?

Also, is it normal for each training epoch to take only 1-2 seconds? Or is this a sign that the dataset I set up was poorly configured?

Here is a screenshot of the output of the training process:
[screenshot: electrapretraindebug]

@richarddwang
Owner Author

There is probably an error being caught.

Because fastai didn't support specifying the number of training steps, I wrote a callback myself to do that.
The side effect is that it swallows any error encountered during training.

So you can comment out this callback, run it again, and you will see the actual error.
After you resolve it, add the callback back and do normal training.

RunSteps(c.steps, [0.0625, 0.125, 0.25, 0.5, 1.0], c.run_name+"_{percent}"),
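
For example, the temporary change could look like this (a hypothetical excerpt; match it to the actual callback list in pretrain.py):

    # Hypothetical excerpt: disable RunSteps so fastai surfaces the real exception.
    cbs = [
        mlm_cb,
        # RunSteps(c.steps, [0.0625, 0.125, 0.25, 0.5, 1.0], c.run_name + "_{percent}"),
    ]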

@lucaguarro

Oh perfect, thank you. I was getting an error because I had added a special token to the tokenizer and needed to notify the generator and discriminator of the new size of the token embeddings.
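
For reference, a minimal sketch of that fix (tokenizer, generator, and discriminator are the objects already built in pretrain.py; the token name is just a placeholder):

    # Hypothetical sketch: after adding a special token, resize both models'
    # embedding matrices so they match the enlarged tokenizer vocabulary.
    tokenizer.add_special_tokens({"additional_special_tokens": ["[MY_TOKEN]"]})
    generator.resize_token_embeddings(len(tokenizer))
    discriminator.resize_token_embeddings(len(tokenizer))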

I am, however, getting a memory error now. Usually I resolve this by just lowering the batch size, but I am not sure where that is set in your code.

I am using an Nvidia Tesla P100, and this is the error message:

RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 15.90 GiB total capacity; 14.22 GiB already allocated; 291.75 MiB free; 14.73 GiB reserved in total by PyTorch)

Sorry to ask so many questions.

@richarddwang
Owner Author

No worries!

Here is where the batch size is set.

c.bs = [128, 256, 2048][i]

You can change it by setting c.bs to whatever you want right after that line.
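
For example (a hypothetical value; pick whatever fits your GPU memory):

    c.bs = 32  # override the size-indexed preset right after the defaults are set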

@lucaguarro

Awesome, I got it working! I did have to lower my batch size all the way down to 32 with Google Colab Pro, though (quite a bit lower than your presets).

On another note, I noticed your "multi_task.py" file and it interests me for my own research as well, but I'll open a new issue so as not to bog this one down.

@congchan

Side question: how can we pretrain ELECTRA starting from the weights of other pretrained models, such as RoBERTa?

@richarddwang
Owner Author

There's no direct way to do this.
As a workaround, take the generator as an example:
you can refer to the source code and write an ElectraForMaskedLMWithAnyModel class that takes a pretrained AutoModel instance as an argument; see the sketch below.
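
A minimal sketch of that idea, assuming RoBERTa as the backbone (the class and MLM head below are simplified stand-ins, not the actual Hugging Face implementation):

    import torch.nn as nn
    from transformers import AutoModel

    class ElectraForMaskedLMWithAnyModel(nn.Module):
        # Hypothetical wrapper: reuse any pretrained encoder as the ELECTRA generator.
        def __init__(self, model_name="roberta-base"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            cfg = self.encoder.config
            # Simplified prediction head; the real ElectraForMaskedLM uses
            # ElectraGeneratorPredictions plus a tied output projection.
            self.mlm_head = nn.Linear(cfg.hidden_size, cfg.vocab_size)

        def forward(self, input_ids, attention_mask=None):
            hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            return self.mlm_head(hidden)  # (batch, seq_len, vocab_size) logits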

@JiazhaoLi

JiazhaoLi commented Sep 19, 2022

Hi, thank you for the wonderful code.
I am trying to continue training from the Google ELECTRA checkpoints. I followed the steps in this post, and I also commented out

RunSteps(c.steps, [0.0625, 0.125, 0.25, 0.5, 1.0], c.run_name+"_{percent}"),

However, I still get the following error, which is raised in the fastai learner file. Do you have any hints on this? I'd appreciate it.

    Traceback (most recent call last):
      File "pretrain.py", line 405, in
        cbs=[mlm_cb],
      .....
      File "/home/anaconda3/envs/electra/lib/python3.7/site-packages/fastai/learner.py", line 137, in _call_one
        [cb(event_name) for cb in sort_by_run(self.cbs)]
    NameError: name 'sort_by_run' is not defined

I am not sure whether it is due to the package version.

@stvhuang

stvhuang commented Dec 16, 2022

Hi @JiazhaoLi.

Did you solve the problem (sort_by_run not found)? I ran into the same error recently.

Update:
This error can be solved by downgrading the fastcore version to fastcore<=1.3.13.
