report error while test train code #116
Comments
'checkpoint' is a dict with keys 'epoch', 'global_step', 'checkpoint_callback_best' and so on, but it does not contain the keys 'hparams_type' and 'hparams'. That's why the training code raises a ValueError. But I don't know how to fix it.
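For illustration, here is a minimal sketch of the key check that fails. The checkpoint contents below are assumed for the example, not taken from the actual file, and `has_hparams` is a made-up helper, not pytorch-lightning code:

```python
# Assumed checkpoint contents for illustration; the real file has more keys.
checkpoint = {
    "epoch": 3,
    "global_step": 1200,
    "checkpoint_callback_best": 0.42,
    # no 'hparams' / 'hparams_type' keys here
}

def has_hparams(ckpt):
    """Return True if the checkpoint carries the hyperparameter keys
    that the loading code expects when restoring hparams."""
    return "hparams" in ckpt and "hparams_type" in ckpt

print(has_hparams(checkpoint))  # False: this is the situation that
                                # leads to the ValueError described above
```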
After each training epoch the checkpoint is saved, and pytorch-lightning checks whether the hparams_type of the checkpoint (taken from model.hparams) is 'dict'. But it is 'DictConfig', so the code raises a ValueError. Change pytorch_lightning/trainer/training_io.py line 348 to:
and build pytorch-lightning from source; problem solved.
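One way around the type mismatch without patching pytorch-lightning itself is to convert the OmegaConf DictConfig into a plain dict before assigning it to the model's hparams; OmegaConf ships a helper for this (`OmegaConf.to_container(cfg, resolve=True)`). The sketch below shows the idea with a stdlib-only recursive conversion (the function name `to_plain_dict` is mine, not part of either library):

```python
from collections.abc import Mapping

def to_plain_dict(cfg):
    """Recursively convert a Mapping-like config (e.g. an OmegaConf
    DictConfig) into plain dicts/lists, so that type(model.hparams)
    is dict and the checkpoint records hparams_type as 'dict'."""
    if isinstance(cfg, Mapping):
        return {key: to_plain_dict(val) for key, val in cfg.items()}
    if isinstance(cfg, (list, tuple)):
        return [to_plain_dict(val) for val in cfg]
    return cfg

# Example: a nested config as a Mapping becomes a plain nested dict.
hparams = to_plain_dict({"optimizer": {"lr": 1e-3}, "layers": [64, 32]})
print(type(hparams).__name__)  # 'dict'
```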
I know this shouldn't be the real solution; it would be appreciated if somebody could help me change the type of
It seems this problem in pytorch-lightning has not been solved yet: https://github.com/PyTorchLightning/pytorch-lightning/issues/2027
Hi, can you try #117 to see if that fixes it? |
Thanks for the reply, I'll try.
report error: