Error when loading trained model #1827
Comments
I think the problem is that it serializes a path specific to the machine it was trained on. I am seeing a similar error. What transformers version are you running? I think it might work with the current transformers.
Correction: I think the error persists even with the most current version.
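For reference, a minimal way to compare the installed versions on the training and inference machines (a sketch using standard package metadata; not part of the original report):

```python
from importlib.metadata import version  # Python 3.8+

# Compare these values between the machine the model was trained on
# and the machine where loading fails.
print("flair:", version("flair"))
print("transformers:", version("transformers"))
```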
…tion GH-1827: fix deserialization issues in transformer tokenizers
GH-1827: Pretrained TARS model and small fixes
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
When I load a model trained (on a different machine) with TransformerDocumentEmbeddings, the script fails with the following error:
Not found: "/root/.cache/torch/transformers/0c370616ddfc06067c0634160f749c2cf9d8da2c50e03a2617ce5841c8df3b1d.309f0c29486cffc28e1e40a2ab0ac8f500c203fe080b95f820aa9cb58e5b84ed": No such file or directory Error #2
Apparently, it's looking for a file in the transformers cache, even though I expected the saved model to contain everything needed for prediction.
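For context, the load that triggers the error is presumably something along these lines (the model type and path are assumptions, not taken from the report):

```python
from flair.models import TextClassifier

# Load a model that was trained on another machine with TransformerDocumentEmbeddings.
# The "No such file or directory" error above is raised here, because the serialized
# embeddings still reference the transformers cache path of the original machine.
classifier = TextClassifier.load('path/to/best-model.pt')
```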
The workaround is to call, before loading the model,
_ = TransformerDocumentEmbeddings('xlm-roberta-base')
but you need to know which pre-trained model was used during training, in this case 'xlm-roberta-base'. A full example of the workaround is sketched below.
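Put together, the workaround looks roughly like this (the model type and path are again assumptions; the key point is instantiating the same base model first so its files land in the local transformers cache):

```python
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier

# Instantiate the same pre-trained transformer that was used during training.
# This makes transformers download and cache 'xlm-roberta-base' on this machine.
_ = TransformerDocumentEmbeddings('xlm-roberta-base')

# With the cache populated, loading the trained model should now succeed.
classifier = TextClassifier.load('path/to/best-model.pt')
```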