
LayerNorm/beta not found in checkpoint #23

Closed
threefoldo opened this issue Dec 8, 2020 · 1 comment

@threefoldo

I tried several pretrained models, including google-bert-base-en, google-albert-base-en, and pai-albert-base-en; they all fail with the same kind of error:

Key bert_pre_trained_model/bert/encoder/layer_10/attention/output/LayerNorm/beta not found in checkpoint
        [[node save/RestoreV2 (defined at /home/jinzy/.conda/envs/et/lib/python3.6/site-packages/easytransfer/engines/model.py:658)  = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
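
For reference, one way to check whether the missing key is actually present in the checkpoint is to list its serialized variables. This is a minimal sketch, assuming TensorFlow 1.x (as EasyTransfer uses); the checkpoint path is a placeholder:

```python
import tensorflow as tf

# Placeholder: point this at your checkpoint directory or file prefix.
ckpt = "/path/to/checkpointDir"

# tf.train.list_variables returns (name, shape) pairs for every variable
# stored in the checkpoint; filter for the LayerNorm keys in question.
for name, shape in tf.train.list_variables(ckpt):
    if "LayerNorm" in name:
        print(name, shape)
```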
@threefoldo (Author)

After emptying the 'checkpointDir', training runs without errors. It seems that every time I change the pretrained model, the checkpointDir has to be emptied first, otherwise this error is reported.
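
A minimal sketch of that workaround, wiping checkpointDir before switching to a different pretrained model so stale variables from the previous run are not restored (the directory path is a placeholder):

```python
import os
import shutil

checkpoint_dir = "/path/to/checkpointDir"  # placeholder

if os.path.isdir(checkpoint_dir):
    shutil.rmtree(checkpoint_dir)  # drop checkpoints left over from the previous model
os.makedirs(checkpoint_dir, exist_ok=True)  # recreate an empty dir for the new run
```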
