The following code, which would be used to fine-tune the Google BERT model, gives an error:

from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'
This is caused by a transformers dependency, chardet, not being installed. Also, the installed transformers version is an old 2.x release instead of 4.x; likely another dependency of nnunet is forcing the version down. Time to split the NLP environment off from nnunet!
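Before rebuilding the environment, both suspects can be checked from the affected interpreter. This is a minimal diagnostic sketch (not part of the project's scripts); it only reads installed-package metadata:

```python
import importlib.metadata
import importlib.util

# Suspect 1: is the missing dependency chardet installed at all?
has_chardet = importlib.util.find_spec("chardet") is not None
print(f"chardet installed: {has_chardet}")

# Suspect 2: which transformers generation is installed?
try:
    version = importlib.metadata.version("transformers")
except importlib.metadata.PackageNotFoundError:
    version = None
print(f"transformers version: {version}")

# The DistilBert* classes imported above need the 4.x API.
if version is not None and int(version.split(".")[0]) < 4:
    print("transformers is from the old 2.x era; an upgrade is needed.")
```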
Additionally, I'm going to switch to yml files to control dependencies, since I do not need to maintain compatibility with requirements.txt files or procedurally generate the ymls, unlike in previous projects.
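A per-project conda env file could look like the following. This is a hypothetical sketch; the file name, env name, and pinned versions are illustrative, not the project's actual files:

```
# environment_nlp.yml -- illustrative example, not the actual project file
name: nlp
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - transformers>=4.0
      - chardet
```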
Dependency issues resolved. Shared dependencies (like the huge PyTorch package) are now installed into the base environment via a requirements.txt. Separate conda envs are created from yml files; if conda cannot install some of the dependencies, the run_setup_all.py script falls back to pip.
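The conda-then-pip fallback can be sketched as below. Function names and the split logic are illustrative assumptions, not the actual run_setup_all.py API:

```python
import subprocess
import sys

def plan_installs(requested, conda_resolvable):
    """Split a dependency list into packages conda can handle and
    packages that must fall back to pip.

    Hypothetical helper: `conda_resolvable` stands in for whatever
    conda reports as installable from the yml file.
    """
    conda_pkgs = [p for p in requested if p in conda_resolvable]
    pip_pkgs = [p for p in requested if p not in conda_resolvable]
    return conda_pkgs, pip_pkgs

def pip_install(packages):
    """Install the fallback packages into the active environment
    with pip, raising if any install fails."""
    for pkg in packages:
        subprocess.run(
            [sys.executable, "-m", "pip", "install", pkg],
            check=True,
        )
```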