error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2' #4457
Comments
Same here, but with 0.1.38. Same configuration besides the CPU; mine is AMD.
Getting the same issue. OS: Windows 11 Pro
Is there a solution being worked on for this problem, or an easy way to work around it?
@xdfnet can you share your server log?
I assume my server logs are similar, if you want to take a look at mine until you get a response from them. The model gets created just fine. I am also on Windows, Ollama 0.1.38.
Same issue here when trying to run the new Jina embedding model (jina/jina-embeddings-v2-base-de:latest) on the latest Ollama on Windows 11 (updated today).
The two logs shared seem to be unsupported model architectures. @xdfnet please let us know if your logs are different and I'll adjust the issue accordingly. I believe jina-bert-v2 will be covered by #3747. @ahuguenard-logility I can't tell what model you were trying to load from the log. If it's not already covered by a model request issue, go ahead and file a new issue so we can track it.
Same here. Ollama 0.1.38. Windows 11.
phi3:mini (4K context) runs fine. Someone on Discord mentioned the 128K version may use "LongRoPE", which is not supported by Ollama yet. server.log does not contain any relevant info:
The 128K model does not work on Oracle Linux 9 either, but with a different error (again, the 4K model works fine):
Linux specs:
Getting this error on all Phi3 128K models that I've tried, both mini and medium. Pull is fine; run generates the error. Let me know if you want the log. Oh, and Windows 10; latest Ollama.
Facing the same error here when running phi3:14b-medium-128k-instruct-q4_0.
I get the same error with phi3:14b-medium-128k-instruct-q4_1 on Windows 11.
So Ollama can't run the 128K models right now. Is there any update coming soon?
Is there another issue under which we should be reporting this? No replies here, and the title has changed to something else.
Same problem with ollama run phi3:14b-medium-128k-instruct-q2_K on Windows 11.
The new version (0.1.39) fixed the issue, so thanks to the team! |
I can also confirm that this has been fixed for me in 0.1.39, on both Windows 11 and Linux, in the configs mentioned in my post above. The 128K Phi3 model works for me now.
What is the issue?
OS
Windows
GPU
Nvidia
CPU
Intel
Ollama version
0.1.37