
__init__() missing 2 required positional arguments: 'configs' and 'tokenizer' #86

Closed
wl-junlin opened this issue Aug 31, 2021 · 8 comments · Fixed by #145

wl-junlin commented Aug 31, 2021

When running openspeech_cli/hydra_eval.py, the load_from_checkpoint method reports this error:

    __init__() missing 2 required positional arguments: 'configs' and 'tokenizer'

I think it is caused by the checkpoint not pickling the 'configs' and 'tokenizer' params in the model class.
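
For context, Lightning's load_from_checkpoint re-creates the module from hyperparameters stored in the checkpoint; when those were never saved, the missing arguments can be forwarded as extra keyword arguments instead. A minimal sketch (the import path and config layout are assumed, not taken from the repo):

    from openspeech.models import MODEL_REGISTRY  # import path assumed

    model_cls = MODEL_REGISTRY[configs.eval.model_name]

    # Fails when 'configs' and 'tokenizer' were never pickled into the checkpoint:
    #   model_cls.load_from_checkpoint(configs.eval.checkpoint_path)

    # load_from_checkpoint forwards extra keyword arguments to __init__(),
    # so the missing arguments can be supplied at load time:
    model = model_cls.load_from_checkpoint(
        configs.eval.checkpoint_path,
        configs=configs,
        tokenizer=tokenizer,
    )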

sooftware added the BUG (Something isn't working) and QUESTION (Further information is requested) labels Aug 31, 2021
sooftware (Member) commented

@hasangchun Can you check?

upskyy (Member) commented Aug 31, 2021

Thanks for letting me know. I'll fix it as soon as I have time.

wl-junlin (Author) commented

> Thanks for letting me know. I'll fix it as soon as I have time.

Thanks. The tokenizer seems like it can't be pickled: when I tried to add self.save_hyperparameters() in OpenspeechModel.__init__(), it reported an error.
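
One possible workaround, not mentioned in the thread: PyTorch Lightning's save_hyperparameters() accepts an ignore argument, so the picklable configs can be persisted while the unpicklable tokenizer is skipped. A minimal sketch, assuming this constructor signature:

    import pytorch_lightning as pl

    class OpenspeechModel(pl.LightningModule):
        def __init__(self, configs, tokenizer):
            super().__init__()
            # Persist only the picklable argument; the tokenizer is skipped
            # and must be re-attached (or rebuilt) after loading.
            self.save_hyperparameters(ignore=["tokenizer"])
            self.configs = configs
            self.tokenizer = tokenizer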

yinruiqing commented

I think you can override on_save_checkpoint or on_load_checkpoint, like I did, to solve this bug. But it is not very elegant.
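
The actual override isn't shown in the thread; a minimal sketch of the idea, assuming save_hyperparameters() was called (so checkpoint["hyper_parameters"] exists) and with build_tokenizer as a hypothetical helper:

    from openspeech.models import OpenspeechModel  # import path assumed

    class ConformerLSTMModel(OpenspeechModel):
        def on_save_checkpoint(self, checkpoint: dict) -> None:
            # Drop the unpicklable tokenizer before Lightning serializes it.
            checkpoint["hyper_parameters"].pop("tokenizer", None)

        def on_load_checkpoint(self, checkpoint: dict) -> None:
            # Rebuild the tokenizer from the restored configs instead of unpickling it.
            self.tokenizer = build_tokenizer(checkpoint["hyper_parameters"]["configs"])  # hypothetical helper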

wuxiuzhi738 commented

I have the same problem, and there are many places that need to be modified when I use (configs=configs, ......, tokenizer=None). I'm not sure that's right, but there are no errors.

OleguerCanal commented

Hi, I'm trying to load an already-trained model doing something like this:

    model = MODEL_REGISTRY[configs.eval.model_name]
    model = model.load_from_checkpoint(configs.eval.checkpoint_path, configs=configs, tokenizer=tokenizer)

However, I get this error (for all the layers):

RuntimeError: Error(s) in loading state_dict for ConformerLSTMModel:
	Unexpected key(s) in state_dict: "encoder.conv_subsample.conv.sequential.0.weight ...

Have you managed to load a pre-trained model, @wuxiuzhi738 @yinruiqing? Am I doing something wrong?

Thanks a lot!

OleguerCanal commented

Just realized that the model didn't have those layers because the build_model() method was not being called. I solved it by calling it in __init__(), and now it seems to work well.
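
A sketch of that fix; the class internals here are assumed rather than taken from the repo:

    from openspeech.models import OpenspeechModel  # import path assumed

    class ConformerLSTMModel(OpenspeechModel):
        def __init__(self, configs, tokenizer):
            super().__init__(configs, tokenizer)
            # Instantiate the encoder/decoder layers up front so that
            # load_state_dict() finds every key it expects in the checkpoint.
            self.build_model()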

panmareksadowski commented

Hi,

I made fixes based on the above tips and evaluation works, but I'm getting much worse results than during training, on the same validation set. Is validation during training somehow using the ground truth, which would explain the better results there?
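
One possible explanation, an assumption not confirmed in the thread: attention-based seq2seq models are often validated with teacher forcing during training, while standalone evaluation decodes from the model's own previous predictions, so errors compound. Illustratively:

    def decoder_input(step, targets, predictions, teacher_forcing: bool):
        # Teacher forcing feeds the ground-truth previous token, so
        # training-time validation scores look better than true inference,
        # where the model consumes its own (possibly wrong) predictions.
        return targets[step - 1] if teacher_forcing else predictions[step - 1]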

upskyy added a commit that referenced this issue Mar 1, 2022: Update evaluation codes (Fixes #86)