can't load the model #353
Strange error. Can you try:
`import pytorch_pretrained_bert as ppb`
`assert 'bert-large-cased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP`
Do you have an open internet connection on the server that runs the script?
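For reference, that check can be wrapped up as a self-contained helper (the helper name and error message are mine, not from the thread):

```python
def check_model_in_archive(name="bert-large-cased"):
    """Verify the identifier is known to the library before any download.

    A failure here usually means a typo in the model name (e.g.
    'bert-based-uncased' vs 'bert-base-uncased'), not a network problem.
    """
    import pytorch_pretrained_bert as ppb

    archive = ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP
    assert name in archive, f"{name!r} not in {sorted(archive)}"
    return archive[name]  # the URL the library will try to fetch
```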
@thomwolf Is there a way to point to a model on disk? This question seems related enough to daisy-chain with this issue. :-)
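For anyone landing here: `from_pretrained` in `pytorch_pretrained_bert` also accepts a filesystem path, so a sketch of loading from disk might look like this (the helper and directory name are illustrative, assuming the folder holds `bert_config.json`, `pytorch_model.bin` and `vocab.txt`):

```python
def load_local_bert(model_dir):
    """Load a BERT checkpoint from a local directory instead of downloading.

    Assumes model_dir contains bert_config.json, pytorch_model.bin and
    vocab.txt (the file names pytorch_pretrained_bert expects).
    """
    from pytorch_pretrained_bert import BertModel, BertTokenizer

    model = BertModel.from_pretrained(model_dir)
    tokenizer = BertTokenizer.from_pretrained(model_dir)
    return model, tokenizer
```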
I noticed that this error happens when you exceed the disk space in the temporary directory while downloading BERT.
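A quick way to check for this failure mode before downloading, using only the standard library (the 2 GiB threshold and the `/tmp` location are assumptions; adjust to your actual cache path):

```python
import shutil

# from_pretrained downloads the archive (hundreds of MB) into a cache/temp
# directory first; if that filesystem fills up, the truncated download later
# surfaces as a confusing "can't load the model" error.
total, used, free = shutil.disk_usage("/tmp")
print(f"free space: {free / 2**30:.1f} GiB")
if free < 2 * 2**30:  # rough threshold: 2 GiB
    print("warning: low disk space; the model download may fail")
```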
I ran into the same problem. When I used the Chinese pre-trained model, it sometimes worked and sometimes failed.
@thomwolf I've been having the same error, and I receive an AssertionError when I try `assert 'bert-based-uncased' in bert.modeling.PRETRAINED_MODEL_ARCHIVE_MAP`. I've tried both conda install and pip install to get the package, but in both cases I am not able to load any models.
Hi @DuncanCam-Stein,
@thomwolf @countback
The network-connection check has been relaxed in the now-merged #500. These improvements will be included in the next PyPI release (probably next week). In the meantime you can install from
As @martiansideofthemoon said, I met this error because I didn't have enough space on disk. Check if you can download the file with:
@martiansideofthemoon What does it mean if we can download it via `wget` but not when we use `from_pretrained`? Is it a disk-space problem?
@Hannabrahman |
@colanim Thanks. I figured out it was a disk-space issue in the cache directory.
@Hannabrahman how did you solve this issue?
@raj5287
@colanim I have enough disk space, since I have downloaded the file using
@DuncanCam-Stein I have downloaded and placed `pytorch_model.bin` and `bert_config.json` in the `bert_tagger` folder, but when I am doing
Try deleting the cache file and rerun the command.
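A sketch of locating the cache before deleting anything (the candidate paths are assumptions covering several library versions; the actual location can be overridden via environment variables such as `PYTORCH_PRETRAINED_BERT_CACHE`):

```python
import os

# Cache directories used by different versions of the library (assumed
# defaults; check your version's documentation if none of these exist).
candidates = [
    os.path.expanduser("~/.pytorch_pretrained_bert"),
    os.path.expanduser("~/.cache/torch/transformers"),
    os.path.expanduser("~/.cache/huggingface"),
]
for path in candidates:
    if os.path.isdir(path):
        print("cache directory found:", path)
        # import shutil; shutil.rmtree(path)  # uncomment to force a clean re-download
```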
I noticed that the error appears when I execute my script in debug mode (in Visual Studio Code). I fixed it by executing the script in the terminal. By the way, I got the same problem with the tokenizer, and this fixed it as well.
Hello, I meet this problem when running the PyTorch BERT code: OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
@DTW1004 Check your network connection. This happens when I'm behind a proxy and SSL/proxy isn't configured appropriately.
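If a proxy is the culprit, the download honours the standard proxy environment variables, so a minimal sketch (the proxy address is purely illustrative) would be:

```python
import os

# from_pretrained fetches the weights over HTTPS; behind a corporate proxy
# the standard proxy environment variables must be set before the download
# is attempted. Replace the address with your actual proxy.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"
```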
I've been having the same error, and then I tried to debug the specific code.
I met this issue and found the reason was that my server connection was offline.
Running into the same issue on AWS Lambda. Neither relative nor absolute paths will allow the model to load from pre-trained.
Here's what I am doing:

```
!wget -q https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz
!tar xf bert-base-multilingual-cased.tar.gz
```

Now, if I do:

```python
encoder = TFBertModel.from_pretrained("bert-base-multilingual-cased")
```

I still get:

OSError: Can't load config for 'bert-base-multilingual-cased'. Make sure that:
- 'bert-base-multilingual-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-multilingual-cased' is the correct path to a directory containing a config.json file
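A possible fix, assuming the archive was extracted into a local directory: pass that path, not the hub identifier, to `from_pretrained` (a string identifier always triggers a hub lookup). The directory name and `from_pt=True` here are assumptions; the old S3 archives ship PyTorch weights plus a `bert_config.json` that may need renaming to `config.json` for current transformers versions:

```python
def load_extracted(model_dir):
    """Load the untarred archive by filesystem path instead of hub name.

    Assumes model_dir contains config.json (possibly renamed from
    bert_config.json) and pytorch_model.bin; from_pt=True converts the
    PyTorch weights for the TF model class.
    """
    from transformers import TFBertModel

    return TFBertModel.from_pretrained(model_dir, from_pt=True)
```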
Here's what I am doing:

```python
from transformers import pipeline

def corret_sentence(sentence, unmasker):
    ...

if __name__ == '__main__':
    ...
```

I get:

ValueError: Could not load model uer/chinese_roberta_L-2_H-512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForMaskedLM'>, <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>).
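For comparison, here is a minimal fill-mask sketch of what the truncated snippet was presumably attempting; everything except the model id (taken from the error message) and the `pipeline` API is my assumption:

```python
def correct_sentence(sentence):
    """Run a masked-LM pipeline over a sentence containing a [MASK] token.

    The model id comes from the error message above; the function name and
    structure are illustrative, not the original poster's code.
    """
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-2_H-512")
    return unmasker(sentence)  # list of candidate fills, best first
```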
How could I solve this? Did you solve this problem? I am also having the same issue.
How can I free some disk space?
Can you please specify which model exactly you downloaded and how you ran the function? Thanks