Hi again!

I'm trying to train some models from scratch using different BERT embeddings. I've taken all of the model identifiers from https://huggingface.co/models, but when I run `bash scripts/train.sh my_config`, I get the following error:
```
OSError: Can't load config for 'alvaroalon2/biobert_chemical_ner'. Make sure that:
- 'alvaroalon2/biobert_chemical_ner' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'alvaroalon2/biobert_chemical_ner' is the correct path to a directory containing a config.json file
```
This happens for all the models that I chose, including models from different users.
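(For context, resolving that identifier outside of AllenNLP reduces to a plain `transformers` config lookup; a minimal sketch of that check, using the standard `AutoConfig` API, is below. If this also fails, the problem is in the environment or network rather than in the model identifier.)

```python
from transformers import AutoConfig

# Sanity check: resolve the same identifier directly through transformers,
# bypassing AllenNLP entirely.
config = AutoConfig.from_pretrained("alvaroalon2/biobert_chemical_ner")
print(config.model_type)  # expected to be "bert" for a BioBERT-based model
```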
Have you run into this, or do you have any idea what the issue might be? I took a look at the documentation for `allennlp train`, but I didn't see anything immediately informative.
EDIT: I also tried instantiating a tokenizer using these models with the huggingface API, and that works fine.
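(Concretely, the tokenizer check was just the standard `AutoTokenizer` call, roughly as sketched here; the example sentence is made up:)

```python
from transformers import AutoTokenizer

# Instantiating the tokenizer through the Hugging Face API directly;
# this works fine, which makes the config error above surprising.
tokenizer = AutoTokenizer.from_pretrained("alvaroalon2/biobert_chemical_ner")
print(tokenizer.tokenize("Aspirin inhibits cyclooxygenase."))
```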
Thanks!
Serena
Closing this because I believe this was due to torch not being compiled with CUDA capabilities; I ran into this same error when using Huggingface directly and it was solved once I ran
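(For anyone finding this later, a quick way to check whether the installed torch build has CUDA support is sketched below; the exact reinstall command that fixed it isn't reproduced here.)

```python
import torch

# A CPU-only build reports None for the CUDA version and False for availability;
# if so, torch needs to be reinstalled with a CUDA-enabled build.
print(torch.version.cuda)
print(torch.cuda.is_available())
```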