__init__() missing 2 required positional arguments: 'configs' and 'tokenizer' #86
Comments
@hasangchun Can you check?
Thanks for letting me know. I'll fix it as soon as I have time.
Thanks. The tokenizer apparently can't be pickled: when I tried adding self.save_hyperparameters() in OpenspeechModel.__init__(), it reported an error.
I think you can override on_save_checkpoint or on_load_checkpoint, as I did, to work around this bug. It's not very elegant, though.
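A rough sketch of that workaround: on_save_checkpoint and on_load_checkpoint are real PyTorch Lightning hook names, but the class below is a plain-Python mock (an assumption, so the flow runs without Lightning installed) — the idea is to keep the unpicklable tokenizer out of the checkpoint and re-attach it after loading:

```python
import pickle

class ModelWithHooks:
    # Plain-Python mock of a LightningModule; only the two hook names
    # come from Lightning, the rest is invented for illustration.
    def __init__(self, configs=None, tokenizer=None):
        self.configs = configs
        self.tokenizer = tokenizer

    def on_save_checkpoint(self, checkpoint):
        # Store only picklable state; leave the tokenizer out entirely.
        checkpoint["configs"] = self.configs

    def on_load_checkpoint(self, checkpoint):
        # Restore configs from the checkpoint; the caller re-attaches
        # the tokenizer separately after loading.
        self.configs = checkpoint["configs"]

# Simulate the save/load cycle with a plain dict checkpoint.
model = ModelWithHooks(configs={"lr": 1e-3}, tokenizer=object())
checkpoint = {}
model.on_save_checkpoint(checkpoint)
blob = pickle.dumps(checkpoint)  # succeeds: the tokenizer was excluded

restored = ModelWithHooks()
restored.on_load_checkpoint(pickle.loads(blob))
restored.tokenizer = model.tokenizer  # re-attached outside the checkpoint
```

It is not elegant precisely because the tokenizer now has to be supplied again by whoever loads the checkpoint.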
I have the same problem, and there are many places that need modifying when I use (configs=configs, ......, tokenizer=None).
Hi, I'm trying to load an already-trained model doing something like this:

model = MODEL_REGISTRY[configs.eval.model_name]
model = model.load_from_checkpoint(configs.eval.checkpoint_path, configs=configs, tokenizer=tokenizer)

However, I get this error (for all the layers):

RuntimeError: Error(s) in loading state_dict for ConformerLSTMModel:
Unexpected key(s) in state_dict: "encoder.conv_subsample.conv.sequential.0.weight ...

Have you managed to load a pre-trained model, @wuxiuzhi738 @yinruiqing? Am I doing something wrong? Thanks a lot.
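For what it's worth, the "Unexpected key(s)" error can be silenced by filtering the checkpoint's state_dict down to the keys the freshly built model actually has (or by passing strict=False to load_state_dict) — but that usually only hides a mismatch between the model config used for training and the one used at load time. A plain-dict sketch; all key values below are invented for illustration:

```python
# Checkpoint entries (values faked): the trained model had an encoder.
checkpoint_state = {
    "encoder.conv_subsample.conv.sequential.0.weight": [0.1, 0.2],
    "decoder.embedding.weight": [0.3],
}
# Pretend the freshly built model was configured without those encoder
# layers, so it only expects the decoder parameters.
model_keys = {"decoder.embedding.weight"}

# Keep only entries the current model expects; note what was dropped.
filtered_state = {k: v for k, v in checkpoint_state.items() if k in model_keys}
dropped = sorted(set(checkpoint_state) - model_keys)
# model.load_state_dict(filtered_state)  # would now load without the error
```

Dropping weights this way means those layers are silently discarded, so it is a diagnostic step, not a fix: the real fix is building the model with the same configuration it was trained with.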
Just realized that the model didn't have those layers because the
Hi, I made fixes based on the tips above and evaluation works, but I'm getting much worse results on the validation set than during training on that same validation set. Does validation during training somehow use the ground truth, which would explain the better results there?
When running openspeech_cli.hydra_eval.py, the load_from_checkpoint method reports this error.
I think it's caused by the checkpoint not pickling the 'configs' and 'tokenizer' params of the model class.