Hello, I am trying to further pretrain the base and large models using a domain-specific corpus. But I see in the documentation that when continuing pre-training from the released small ELECTRA checkpoints, we should:
Setting num_train_steps by (for example) adding "num_train_steps": 4010000 to the --hparams. This will continue training the small model for 10000 more steps (it has already been trained for 4e6 steps).
But Table 6 of the paper shows that small ELECTRA model is trained for 1M steps. Which one should we set?
If 4e6 is correct, how many steps has the base model or large model been trained?
For ELECTRA-Small, the number of pre-training steps should be 4e6.
When I tested with num_train_steps <= 4e6, the model was not trained further (because the checkpoint had already reached that step count). Training only resumed with num_train_steps >= 4000001.
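This behavior matches how TF Estimator-style training loops resume: the checkpoint stores the global step, and training only runs while global_step is below num_train_steps. A minimal sketch of that logic (the function name and values are illustrative, not from the ELECTRA codebase):

```python
def remaining_steps(checkpoint_step: int, num_train_steps: int) -> int:
    """Steps a resumed run will actually execute.

    Estimator-style loops train until global_step reaches num_train_steps;
    resuming from a checkpoint keeps the saved global_step, so setting
    num_train_steps at or below it results in zero additional training.
    """
    return max(0, num_train_steps - checkpoint_step)


# Checkpoint at 4e6 steps (ELECTRA-Small, per the observation above):
print(remaining_steps(4_000_000, 4_000_000))  # no further training
print(remaining_steps(4_000_000, 4_010_000))  # 10000 extra steps
```

So to continue ELECTRA-Small for 10,000 extra steps, num_train_steps must be set to 4010000, not 1010000.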