Could not load model deepset/minilm-uncased-squad2 #16849
I suspect the cause of this is that the …
Exactly that. And looking at the error, I can tell you that for some reason your environment could not see …
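When the environment can't see the Hub, a quick stdlib-only check can rule out DNS/proxy problems before blaming the library. This is an illustrative sketch (the host name `huggingface.co` is the default Hub endpoint; the helper name is made up):

```python
import socket

def can_resolve(host="huggingface.co"):
    # Returns True if DNS resolution for the host succeeds in this
    # environment; False usually points at a proxy/firewall problem
    # rather than anything in transformers itself.
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        return False
```

If this returns False, the fix is in the network configuration (proxy variables, VPN, firewall), not in the loading code.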
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I am still having this issue with the model. I copied the sample code from the example. ERROR: …
What hardware do you have? Loading … You need to do some sharding, either using …
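For context, a back-of-envelope estimate (parameter count × bytes per parameter) shows why a ~7B-parameter model won't fit in 16 GB of RAM at float32, and why sharding or a smaller dtype is needed. The helper below is a sketch, not a transformers API:

```python
def model_memory_gb(n_params, bytes_per_param=4):
    """Approximate RAM needed for the weights alone (no activations or
    framework overhead): parameters x bytes per parameter, in GiB."""
    return n_params * bytes_per_param / 1024**3

# falcon-7b has roughly 7e9 parameters
fp32_gb = model_memory_gb(7e9, 4)  # ~26 GB at float32, well over 16 GB of RAM
bf16_gb = model_memory_gb(7e9, 2)  # ~13 GB at bfloat16, still tight on 16 GB
```

This is why `device_map="auto"` (which lets accelerate spread layers across GPU, CPU and disk) or tensor-parallel sharding comes up in the answers below.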
I am on a MacBook Air (Apple M2, 16 GB RAM, 500+ GB disk available). Do you have code samples for accelerate or TP sharding?
This should be enough for accelerate.
Thanks @Narsil. Still getting an error
Here is my code:
Could this be internet bandwidth?
That's the issue, but I'm not sure what's happening. Can you try:

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.save_pretrained("./model/")

# The checkpoint name is the first positional argument of from_pretrained,
# not a `model=` keyword.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float32,
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
```

Try removing the …
System Info
Who can help?
@Rocketknight1, @LysandreJik, @Narsil
Information
Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
```python
from transformers import pipeline

model_checkpoint = "deepset/minilm-uncased-squad2"
device = -1  # -1 runs the pipeline on CPU

qa_pipeline = pipeline(
    "question-answering",
    model=model_checkpoint,
    tokenizer=model_checkpoint,
    device=device,
)
```
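If flaky connectivity or bandwidth is the suspect, one generic workaround (a sketch, not a transformers API; the helper name is made up) is to wrap the download-triggering call in a retry with exponential backoff:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    # Retries fn() on OSError (the usual base class for network failures),
    # doubling the wait between attempts; re-raises after the last attempt.
    for i in range(attempts):
        try:
            return fn()
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

# e.g. qa = with_retries(lambda: pipeline("question-answering",
#                                         model=model_checkpoint))
```

Once the weights are cached locally, subsequent loads no longer hit the network at all.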
Expected behavior