How to use the model with sentence-transformer for inference? #23
If you want to encode a sentence by averaging the embeddings of the last two layers (last2avg) or of the first and the last layers (firstlastavg), you can use the following function:

```python
import logging

from sentence_transformers import SentenceTransformer


def load_model(model_path: str, last2avg: bool = False, firstlastavg: bool = False):
    model = SentenceTransformer(model_path)
    if last2avg:
        # model[0] is the Transformer module, model[1] the Pooling module
        model[1].pooling_mode_mean_tokens = False
        model[1].pooling_mode_mean_last_2_tokens = True
        model[0].auto_model.config.output_hidden_states = True
    if firstlastavg:
        model[1].pooling_mode_mean_tokens = False
        model[1].pooling_mode_mean_first_last_tokens = True
        model[0].auto_model.config.output_hidden_states = True
    logging.info("Model successfully loaded")
    return model
```
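For completeness, a minimal usage sketch (the checkpoint path is a placeholder, and the `last2avg` / `firstlastavg` flags assume the modified sentence-transformers Pooling module that ships with this repository):

```python
# Hypothetical local path; replace with your own checkpoint directory.
model = load_model("./unsup-consert-base-atec_ccks", last2avg=True)
embeddings = model.encode(["how are you", "hello there"])
print(embeddings.shape)  # (2, hidden_size)
```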
Thanks a lot!
Cannot load the model.

Code:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("../../models/consbert/unsup-consert-base-atec_ccks")  # the model path
```
Error message:

```
Traceback (most recent call last):
  File "/home/qhd/PythonProjects/GraduationProject/code/preprocess_unlabeled_second/sentence-bert.py", line 16, in <module>
    model = SentenceTransformer("../../models/cosbert/unsup-consert-base-atec_ccks")
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
    modules = self._load_sbert_model(model_path)
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 824, in _load_sbert_model
    module = module_class.load(os.path.join(model_path, module_config['path']))
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 123, in load
    return Transformer(model_name_or_path=input_path, **config)
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 30, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path, cache_dir=cache_dir, **tokenizer_args)
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 445, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1719, in from_pretrained
    return cls._from_pretrained(
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1791, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/transformers/models/bert/tokenization_bert_fast.py", line 177, in __init__
    super().__init__(
  File "/home/qhd/anaconda3/envs/qhdpython39/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 96, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: No such file or directory (os error 2)
```
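The last frame fails inside `TokenizerFast.from_file`, so a simple thing to check is whether the tokenizer files are actually present in the saved model directory. A minimal diagnostic sketch (the path is the one from the snippet above; exactly which tokenizer files are expected depends on how the checkpoint was saved):

```python
import os

model_dir = "../../models/consbert/unsup-consert-base-atec_ccks"  # path from the snippet above
for name in sorted(os.listdir(model_dir)):
    print(name)
# If tokenizer files (e.g. tokenizer.json or vocab.txt) are missing here,
# the fast tokenizer cannot be rebuilt and loading fails as in the traceback.
```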