File "python3.11/site-packages/transformers/utils/hub.py", line 517, in cached_files
raise EnvironmentError(
OSError: EuroBERT/EuroBERT-610m does not appear to have a file named ..processing_utils.py. Checkout 'https://huggingface.co/EuroBERT/EuroBERT-610m/tree/main' for available files.
Who can help?
No response
Information
The official example scripts
My own modified scripts
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)
Reproduction
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = "EuroBERT/EuroBERT-210m"
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, local_files_only=True)
text = "The capital of France is <|mask|>."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
Hi @panwpalo, the code for EuroBERT is contained in the EuroBERT repo, not in transformers! I checked and they merged a fix about 4 hours ago that should hopefully resolve this problem.
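Since the modeling code lives in the EuroBERT repo rather than inside transformers, loading goes through the remote-code path. A minimal loading sketch, assuming `trust_remote_code=True` as shown on EuroBERT's model card (the helper function name is illustrative, not part of any library):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

def load_eurobert(model_id: str = "EuroBERT/EuroBERT-210m"):
    # trust_remote_code=True lets transformers download and run the custom
    # modeling code hosted in the EuroBERT Hub repo; that code is not
    # bundled with the transformers library itself.
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
    return tokenizer, model
```

Calling `load_eurobert()` triggers a download of both the weights and the repo's custom code files, so the fix mentioned above only takes effect once the repo-side files are fetched fresh (or the local cache is cleared).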
System Info
It looks like the recent update to the Hugging Face repo introduced a new bug. (The update was pushed about 3 hours prior to this post.)
Could not locate the configuration_eurobert.py inside
To get a prediction for the masked token:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
Predicted token: Paris
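Beyond the single argmax, `torch.topk` ranks alternative fillers for the mask. A minimal sketch using a made-up logits vector in place of `outputs.logits[0, masked_index]` (the real vector has one score per vocabulary entry and is far larger than the 5 dummy values here):

```python
import torch

# Dummy stand-in for the vocabulary-sized logits at the mask position,
# i.e. outputs.logits[0, masked_index] in the script above.
mask_logits = torch.tensor([0.1, 2.5, 0.3, 1.7, -0.4])

# Rank the top-3 candidate token ids, highest score first.
top = torch.topk(mask_logits, k=3)
top_ids = top.indices.tolist()
print("Top-3 token ids:", top_ids)
```

With real model output, each id in `top_ids` can be passed to `tokenizer.decode` to inspect the candidate tokens instead of only the single best one.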
Expected behavior
The model loads without raising an error.