
KeyError: '1.decoder.bias' in convert_weights_with_prefix #20

Closed
abedkhooli opened this issue Dec 3, 2018 · 2 comments · Fixed by fastai/fastai#1284
abedkhooli commented Dec 3, 2018

While running the train_clas.py script, I get a KeyError in convert_weights_with_prefix. I also tried the refactored version of the same function.

    dec_bias, enc_wgts = wgts[prefix+'1.decoder.bias'], wgts[prefix+'0.encoder.weight']
KeyError: '1.decoder.bias'
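A bare KeyError like this hides which keys the checkpoint actually contains. A small defensive lookup (a hypothetical helper, not part of the repo) makes the prefix mismatch visible immediately:

```python
def get_weight(wgts, prefix, name):
    """Fetch a weight by prefixed key, reporting available keys on a miss.

    Hypothetical debugging helper: wgts is a state_dict-like mapping,
    prefix and name are joined to form the expected key.
    """
    key = prefix + name
    if key not in wgts:
        raise KeyError(
            f"{key!r} not found; checkpoint keys start with: {sorted(wgts)[:5]}"
        )
    return wgts[key]
```

With this, the error message would list the first few real keys (e.g. the `fwd_lm.0.*` variants below), pointing straight at the naming mismatch.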

If run with fine_tune turned off, there is a different error:
Error(s) in loading state_dict for SequentialRNN:
Missing key(s) in state_dict: "0.fwd_lm.encoder.weight", "0.fwd_lm.encoder_dp.emb.weight", ...
Unexpected key(s) in state_dict: "fwd_lm.0.encoder.weight", "fwd_lm.0.encoder_dp.emb.weight", ...
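The missing/unexpected pairs differ only in the order of the first two dotted components ("fwd_lm.0.encoder.weight" in the checkpoint vs "0.fwd_lm.encoder.weight" in the model). Assuming the mismatch really is just that reordering, a minimal sketch of a workaround is to remap the keys before loading:

```python
def swap_key_prefix(state_dict):
    """Return a new dict with the first two dotted key components swapped.

    Hypothetical workaround for a checkpoint saved with keys like
    "fwd_lm.0.encoder.weight" when the model expects "0.fwd_lm.encoder.weight".
    """
    remapped = {}
    for key, value in state_dict.items():
        parts = key.split(".")
        if len(parts) >= 2:
            parts[0], parts[1] = parts[1], parts[0]
        remapped[".".join(parts)] = value
    return remapped
```

The remapped dict could then be passed to the model's load_state_dict; this is only a sketch, and the proper fix (matching the current fastai naming) is what the linked PR addresses.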

@PiotrCzapla (Member)

I've noticed that it is due to recent changes in fastai. I will push a fix soon; stay tuned.

@abedkhooli (Author)

Great. I think the fix did it. I managed to test pretrain_lm on Arabic wiki tokens. The model (only 1 cycle) was read successfully by fastai's language_model_learner (I implemented the IMDB sample lesson, using machine translation to convert the texts to Arabic). I will work on train_clas next.
As for drop_mult, maybe it could be passed as a hyperparameter (the 0.1 default fits large token counts).
