error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2' #4457

Closed
xdfnet opened this issue May 15, 2024 · 17 comments
Labels
model request (Model requests)

Comments

xdfnet commented May 15, 2024

What is the issue?

D:\llama.cpp>ollama create eduaigc -f modelfile
transferring model data
using existing layer sha256:28ce318a0cda9dac3b5561c944c16c7e966b07890bed5bb12e122646bc8d71c4
creating new layer sha256:58353639a7c4b7529da8c5c8a63e81c426f206bab10cf82e4b9e427f15a466f8
creating new layer sha256:1da117d6723df114af0d948b614cae0aa684875e2775ca9607d23e2e0769651d
creating new layer sha256:9297f08dd6c6435240b5cddc93261e8a159aa0fecf010de4568ec2df2417bdb2
creating new layer sha256:14d7a26fe5b8e2168e038646c5fb6b0048e27c33628abda8d92ebfed0f369b9f
writing manifest
success

D:\llama.cpp>ollama run eduaigc
Error: llama runner process has terminated: exit status 0xc0000409
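The Modelfile passed to ollama create above is not included in the report. For context, a minimal sketch of what a Modelfile for importing a locally converted Qwen2 GGUF typically looks like; the filename, template, and stop tokens here are illustrative assumptions, not the reporter's actual file:

FROM ./qwen2-custom.q4_0.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

The FROM line pulls the converted GGUF into the new model, and the tokenizer metadata baked into that GGUF is what turns out to trip the runner later in this thread.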

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

e.g., 0.1.37

xdfnet added the "bug (Something isn't working)" label on May 15, 2024
@devilteo911

Same here, but with 0.1.38. Same configuration except for the CPU; mine is AMD.

NAME0x0 commented May 16, 2024

Getting the same issue.

OS: Windows 11 pro
CPU: Intel
GPU: AMD
Ollama: 0.1.38

@ahuguenard-logility

Is a fix being worked on for this problem, or is there an easy way to work around it?

dhiltgen self-assigned this on May 21, 2024
@dhiltgen (Collaborator)

@xdfnet can you share your server log?

ahuguenard-logility commented May 21, 2024

I assume my server logs are similar, so feel free to take a look at mine until you get a response from them.

The model gets created just fine. I am also on Windows, Ollama 0.1.38.

time=2024-05-21T12:44:25.732-04:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=14 memory.available="7.0 GiB" memory.required.full="14.8 GiB" memory.required.partial="6.9 GiB" memory.required.kv="256.0 MiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-21T12:44:25.748-04:00 level=INFO source=server.go:320 msg="starting llama server" cmd="C:\\Users\\Austin.Huguenard\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\Austin.Huguenard\\.ollama\\models\\blobs\\sha256-ee8e591cb924fe148385fd8acd41c2d9ae8fadb5107f9050c460fae1e5269420 --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 57701"
time=2024-05-21T12:44:25.815-04:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-05-21T12:44:25.815-04:00 level=INFO source=server.go:504 msg="waiting for llama runner to start responding"
time=2024-05-21T12:44:25.816-04:00 level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=2770 commit="952d03d" tid="33844" timestamp=1716309865
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="33844" timestamp=1716309865 total_threads=8
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="57701" tid="33844" timestamp=1716309865
llama_model_load: error loading model: tensor 'blk.23.ffn_down.weight' data is not within the file bounds, model is corrupted or incomplete
llama_load_model_from_file: exception loading model
time=2024-05-21T12:44:26.287-04:00 level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server not responding"
time=2024-05-21T12:44:26.541-04:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
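The "data is not within the file bounds, model is corrupted or incomplete" message can also be caused by a truncated or damaged blob, which is easy to rule out: Ollama names each blob under .ollama\models\blobs after its own SHA-256 digest, so the file can be verified without the runner. A minimal sketch in Python, assuming only that filename convention (the path argument is whichever sha256-... blob the log points at):

import hashlib
import sys
from pathlib import Path

def verify_blob(path: str) -> bool:
    """Compare a blob's SHA-256 digest against the digest embedded in its filename."""
    blob = Path(path)
    expected = blob.name.removeprefix("sha256-")          # Python 3.9+; e.g. sha256-ee8e59... -> ee8e59...
    digest = hashlib.sha256()
    with blob.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks to keep memory flat
            digest.update(chunk)
    actual = digest.hexdigest()
    print(f"expected {expected}")
    print(f"actual   {actual}")
    return actual == expected

if __name__ == "__main__":
    sys.exit(0 if verify_blob(sys.argv[1]) else 1)

If the digests disagree, re-pulling or re-creating the model is the next step; if they agree, the blob is intact and the error more likely reflects the conversion or an unsupported architecture, as discussed below.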

SD-fan commented May 22, 2024

Same issue here when trying to run new jina embedding (jina/jina-embeddings-v2-base-de:latest) on latest Ollama Win11 (updated today).

llama_model_loader: - kv   0:                       general.architecture str              = jina-bert-v2
llama_model_loader: - kv   1:                               general.name str              = jina-embeddings-v2-base-de
llama_model_loader: - kv   2:                   jina-bert-v2.block_count u32              = 12
llama_model_loader: - kv   3:                jina-bert-v2.context_length u32              = 8192
llama_model_loader: - kv   4:              jina-bert-v2.embedding_length u32              = 768
llama_model_loader: - kv   5:           jina-bert-v2.feed_forward_length u32              = 3072
llama_model_loader: - kv   6:          jina-bert-v2.attention.head_count u32              = 12
llama_model_loader: - kv   7:  jina-bert-v2.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv   8:                          general.file_type u32              = 1
llama_model_loader: - kv   9:              jina-bert-v2.attention.causal bool             = false
llama_model_loader: - kv  10:                  jina-bert-v2.pooling_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = jina-v2-de
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,61056]   = ["<s>", "<pad>", "</s>", "<unk>", "<m...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,61056]   = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,60795]   = ["e r", "e n", "i n", "Ġ a", "c h", ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  19:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  21:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.mask_token_id u32              = 4
llama_model_loader: - kv  23:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  111 tensors
llama_model_loader: - type  f16:   85 tensors
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'jina-bert-v2'
llama_load_model_from_file: exception loading model
time=2024-05-22T17:50:34.260+02:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "

dhiltgen added the "model request (Model requests)" label and removed the "bug (Something isn't working)" and "windows" labels on May 23, 2024
dhiltgen removed their assignment on May 23, 2024
@dhiltgen (Collaborator)

The two logs shared seem to be unsupported model architectures. @xdfnet please let us know if your logs are different and I'll adjust the issue accordingly.

I believe jina-bert-v2 will be covered by #3747. @ahuguenard-logility I can't tell what model you were trying to load from the log. If it's not already covered by a model request issue, go ahead and file a new issue so we can track it.

xdfnet (Author) commented May 24, 2024

> The two logs shared seem to be unsupported model architectures. @xdfnet please let us know if your logs are different and I'll adjust the issue accordingly.
>
> I believe jina-bert-v2 will be covered by #3747. @ahuguenard-logility I can't tell what model you were trying to load from the log. If it's not already covered by a model request issue, go ahead and file a new issue so we can track it.

[GIN] 2024/05/24 - 21:26:53 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/24 - 21:26:53 | 200 |       571.9µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/05/24 - 21:26:53 | 200 |      1.0596ms |       127.0.0.1 | POST     "/api/show"
time=2024-05-24T21:26:55.475+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="14.9 GiB" memory.required.full="3.5 GiB" memory.required.partial="3.5 GiB" memory.required.kv="800.0 MiB" memory.weights.total="2.0 GiB" memory.weights.repeating="1.7 GiB" memory.weights.nonrepeating="304.3 MiB" memory.graph.full="301.8 MiB" memory.graph.partial="606.0 MiB"
time=2024-05-24T21:26:55.476+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="14.9 GiB" memory.required.full="3.5 GiB" memory.required.partial="3.5 GiB" memory.required.kv="800.0 MiB" memory.weights.total="2.0 GiB" memory.weights.repeating="1.7 GiB" memory.weights.nonrepeating="304.3 MiB" memory.graph.full="301.8 MiB" memory.graph.partial="606.0 MiB"
time=2024-05-24T21:26:55.486+08:00 level=INFO source=server.go:320 msg="starting llama server" cmd="C:\\Users\\xdfnet\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\xdfnet\\.ollama\\models\\blobs\\sha256-a7ba5e53faabca8196fbd2b75d07f7fd968d093be73618cc38ac728bf826ebe8 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --parallel 1 --port 51906"
time=2024-05-24T21:26:55.491+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-05-24T21:26:55.491+08:00 level=INFO source=server.go:504 msg="waiting for llama runner to start responding"
time=2024-05-24T21:26:55.492+08:00 level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=2770 commit="952d03d" tid="6324" timestamp=1716557215
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="6324" timestamp=1716557215 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="51906" tid="6324" timestamp=1716557215
llama_model_loader: loaded meta data with 21 key-value pairs and 483 tensors from C:\Users\xdfnet\.ollama\models\blobs\sha256-a7ba5e53faabca8196fbd2b75d07f7fd968d093be73618cc38ac728bf826ebe8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = xdfnet
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 40
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 2560
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 6912
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 20
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 20
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  201 tensors
llama_model_loader: - type q4_0:  281 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
llama_load_model_from_file: exception loading model
time=2024-05-24T21:26:56.055+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
[GIN] 2024/05/24 - 21:26:56 | 500 |     2.433771s |       127.0.0.1 | POST     "/api/chat"
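The failing key is tokenizer.ggml.pre = qwen2 in the metadata above: a pre-tokenizer name that the llama runner bundled with this Ollama build does not recognize yet. The metadata can be inspected without loading the model at all. A minimal sketch using only the Python standard library, assuming the GGUF v2/v3 header layout reported in the log (pass a blob or .gguf path as the argument):

import struct
import sys

# GGUF v2/v3 scalar metadata types: id -> (struct format, size in bytes)
SCALARS = {
    0: ("<B", 1), 1: ("<b", 1), 2: ("<H", 2), 3: ("<h", 2),
    4: ("<I", 4), 5: ("<i", 4), 6: ("<f", 4), 7: ("<?", 1),
    10: ("<Q", 8), 11: ("<q", 8), 12: ("<d", 8),
}
STRING, ARRAY = 8, 9

def read_string(f):
    (n,) = struct.unpack("<Q", f.read(8))                # uint64 length, then UTF-8 bytes
    return f.read(n).decode("utf-8", errors="replace")

def read_value(f, vtype):
    if vtype == STRING:
        return read_string(f)
    if vtype == ARRAY:
        elem_type, count = struct.unpack("<IQ", f.read(12))
        if elem_type == STRING:
            for _ in range(count):                       # strings must be walked one by one
                read_string(f)
        else:
            f.seek(SCALARS[elem_type][1] * count, 1)     # skip fixed-size elements in one hop
        return f"<array, {count} elements>"
    fmt, size = SCALARS[vtype]
    return struct.unpack(fmt, f.read(size))[0]

def dump_metadata(path):
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))
        if version < 2:
            raise ValueError("GGUF v1 uses 32-bit lengths; this sketch assumes v2/v3")
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
        print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} metadata keys")
        for _ in range(n_kv):
            key = read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            print(f"{key} = {read_value(f, vtype)}")

if __name__ == "__main__":
    dump_metadata(sys.argv[1])

Run against the blob from the log above, this should print general.architecture = qwen2 and tokenizer.ggml.pre = qwen2, which suggests the file itself is fine and the bundled runner simply predates support for that pre-tokenizer.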

dhiltgen changed the title from "Error: llama runner process has terminated: exit status 0xc0000409" to "error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'" on May 25, 2024
kozuch commented May 25, 2024

Same here. Ollama 0.1.38. Windows 11.

ollama run phi3:3.8-mini-128k-instruct-q4_0
Error: llama runner process has terminated: exit status 0xc0000409

phi3:mini (4K context) runs fine. Someone on Discord mentioned the 128K version may use "LongRoPE", which is not supported by Ollama yet.

server.log does not contain any relevant info:

time=2024-05-25T15:23:29.666+02:00 level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA RTX A2000 Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.28 MiB
llm_load_tensors: offloading 19 repeating layers to GPU
llm_load_tensors: offloaded 19/41 layers to GPU
llm_load_tensors: CPU buffer size = 4904.04 MiB
llm_load_tensors: CUDA0 buffer size = 2244.00 MiB
...........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 210.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 190.00 MiB
llama_new_context_with_model: KV self size = 400.00 MiB, K (f16): 200.00 MiB, V (f16): 200.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 233.36 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 14.01 MiB
llama_new_context_with_model: graph nodes = 1606
llama_new_context_with_model: graph splits = 172
INFO [wmain] model loaded | tid="35136" timestamp=1716643413
time=2024-05-25T15:23:33.765+02:00 level=INFO source=server.go:545 msg="llama runner started in 4.36 seconds"
[GIN] 2024/05/25 - 15:23:33 | 200 | 6.1985918s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/25 - 15:24:10 | 200 | 17.1949464s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/25 - 15:25:03 | 200 | 11.5366574s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/25 - 15:25:50 | 200 | 34.4867755s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/25 - 15:34:55 | 200 | 0s | 127.0.0.1 | HEAD

The 128K model does not work on Oracle Linux 9 either, but with a different error (again, the 4K model works fine):

ollama run phi3:3.8-mini-128k-instruct-q4_0
Error: llama runner process has terminated: signal: aborted (core dumped)

ollama run phi3:14b-medium-128k-instruct-q4_1
Error: llama runner process has terminated: signal: aborted (core dumped)

Linux specs:
AMD EPYC 7J13 64-core processor × 8
Oracle Linux Server 9.4 64-bit
No GPU, only CPU

barclaybrown commented May 25, 2024

Getting this error on all Phi3 128K models that I've tried, mini and medium. The pull is fine; the run generates the error. Let me know if you want a log. Windows 10; latest Ollama.

@rumourscape

Facing the same error here when running phi3:14b-medium-128k-instruct-q4_0

@ChSamaras

I get the same error with phi3:14b-medium-128k-instruct-q4_1 on Windows 11.

@zalastone

So Ollama can't run 128K models right now. Is there any update coming soon?

@barclaybrown

Is there another issue under which we should be reporting this? No replies here, and the title has changed to something else.

@karasek2510

Same problem with ollama run phi3:14b-medium-128k-instruct-q2_K on Windows 11.

@ChSamaras

The new version (0.1.39) fixed the issue, so thanks to the team!

kozuch commented May 30, 2024

I can also confirm that this has been fixed for me in 0.1.39, on both Windows 11 and Linux, in the configurations mentioned in my post above. The 128K phi3 model works for me now.

dhiltgen closed this as completed on Jun 2, 2024