
AttributeError: 'SequenceClassifierOutput' object has no attribute 'last_hidden_state' #28

Closed
Anooyman opened this issue Mar 14, 2024 · 5 comments

Comments

@Anooyman

from transformers import AutoModel, AutoTokenizer

# list of sentences
sentences = ['sentence_0', 'sentence_1']

# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')

device = 'cpu'  # set to 'cuda' to use a GPU
model.to(device)

# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}

# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0]  # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True)  # normalize

I downloaded the model from Hugging Face to a local directory; when computing embeddings I hit the following error:

AttributeError: 'SequenceClassifierOutput' object has no attribute 'last_hidden_state'

How should I resolve this? Thanks!

I also see the following log messages. Should I be concerned about them?

Some weights of XLMRobertaForSequenceClassification were not initialized from the model checkpoint at bce-embedding-base_v1 and are newly initialized: ['classifier.dense.bias', 'classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.out_proj.weight']

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
@shenlei1020
Collaborator

Which version of transformers are you using?

@Anooyman
Author

hi @shenlei1020 , the transformers version is 4.38.2

@shenlei1020
Collaborator

Use a version no higher than 4.37; 4.36 is recommended.

@Anooyman
Author

Anooyman commented Mar 15, 2024

Some weights of XLMRobertaForSequenceClassification were not initialized from the model checkpoint at bce-embedding-base_v1 and are newly initialized: ['classifier.out_proj.bias', 'classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[8], line 28
     26 # get embeddings
     27 outputs = embedding_model(**inputs_on_device, return_dict=True)
---> 28 embeddings = outputs.last_hidden_state[:, 0]  # cls pooler
     29 embeddings = embeddings / embeddings.norm(dim=1, keepdim=True)  # normalize

AttributeError: 'SequenceClassifierOutput' object has no attribute 'last_hidden_state'

Still the same error.

@Anooyman
Author

hi @shenlei1020 , could you take a look? I have switched the version but it still throws the same error.
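Until the root cause is resolved, a defensive workaround (my own sketch, not the maintainers' recommended fix) is to call the model with `output_hidden_states=True` and pool from `hidden_states[-1]` whenever the returned output class lacks `last_hidden_state`. For a classification-head output, the final element of `hidden_states` is the last layer's hidden states, which is what the CLS pooler in the snippet above needs:

```python
def final_hidden_state(outputs):
    """Return the last layer's hidden states from a transformers model output.

    SequenceClassifierOutput has no `last_hidden_state` attribute, but when
    the model is called with output_hidden_states=True it carries
    `hidden_states`, a tuple whose final element is the last layer's output.
    """
    if getattr(outputs, "last_hidden_state", None) is not None:
        return outputs.last_hidden_state
    return outputs.hidden_states[-1]

# Usage with the snippet from this issue (note output_hidden_states=True):
# outputs = model(**inputs_on_device, return_dict=True, output_hidden_states=True)
# embeddings = final_hidden_state(outputs)[:, 0]  # cls pooler
# embeddings = embeddings / embeddings.norm(dim=1, keepdim=True)  # normalize
```

This only sidesteps the attribute error; pinning transformers as advised above remains the cleaner fix, since it makes `AutoModel` return the base encoder directly.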
