'adam_m not found in checkpoint' when further pretraining #45
Comments
I also had the same problem. It seems the adam_m parameters were removed from the checkpoint before it was saved (google-research/bert#99 (comment)). So without the full checkpoint we can't do further training. Just waiting for the full checkpoint.
You should be able to do further training; just don't initialize the Adam parameters from the checkpoint, by doing something like this. I don't think resetting the Adam parameters will cause any real problem for the model.
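A rough sketch of what skipping the Adam parameters could look like (this is an assumption based on how BERT/ELECTRA-style TF1 checkpoints name optimizer slots, not the exact code referenced above): build an assignment map that excludes the `adam_m`/`adam_v` slot variables and pass it to `tf.train.init_from_checkpoint`, so only the model weights are restored and the optimizer state starts fresh.

```python
def build_assignment_map(checkpoint_var_names):
    """Map each checkpoint variable name to itself, skipping Adam slots.

    Assumes the slot-naming convention of BERT/ELECTRA-style TF1
    checkpoints, where optimizer slots end in /adam_m or /adam_v.
    """
    assignment_map = {}
    for name in checkpoint_var_names:
        # Skip optimizer slot variables so Adam state is re-initialized.
        if name.endswith("/adam_m") or name.endswith("/adam_v"):
            continue
        # Let global_step start fresh as well.
        if name == "global_step":
            continue
        assignment_map[name] = name
    return assignment_map


# Hypothetical usage inside the model_fn (TF1 API):
#   import tensorflow.compat.v1 as tf
#   names = [n for n, _ in tf.train.list_variables(init_checkpoint)]
#   tf.train.init_from_checkpoint(init_checkpoint,
#                                 build_assignment_map(names))
```

With this map, `init_from_checkpoint` never asks the checkpoint for the missing `adam_m`/`adam_v` tensors, so the "not found in checkpoint" error should not be triggered, at the cost of losing the optimizer's momentum state.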
I am running into the same problem.
Hello, I checked the solution @clarkkev mentioned above but still don't know the exact fix. Can anyone provide further help? I am new to TensorFlow and could not find where the link above skips the adam_m parameters. Thank you in advance.
I have the same trouble. Could you tell me how to fix it? @clarkkev @lincoln-jiang @w5688414 Thank you in advance.
@Veyronl I just gave up on using Electra.
When I was trying further pretraining of the model on domain-specific data in Colab, the official pretrained model could not be loaded.
Here is the command for further pretraining.
The error message is quite long, so I have pasted only the part that seems useful.