Ollama fails to create models when using IQ quantized GGUFs - Error: invalid file magic #3622

Open
sammcj opened this issue Apr 13, 2024 · 28 comments · May be fixed by #4322
Labels: bug (Something isn't working)

@sammcj
Contributor

sammcj commented Apr 13, 2024

What is the issue?

Creating an Ollama model from a standard IQ-quantized GGUF fails with "Error: invalid file magic".

ollama create sammcj/zephyr-orpo-141b-A35b-v0.1:IQ3_XS -f Modelfile-IQ3_XS
transferring model data
creating model layer
Error: invalid file magic

I've tried both pre-built Ollama packages and Ollama compiled from source.

The output shown here is from the latest Ollama built from main.

llama.cpp and LM Studio

  • Running the same GGUF directly with llama.cpp works without issue:
main -m zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf -ngl 99 -p 'tell me a joke'

....(truncated)
Why did the tomato turn red? Because it saw the salad dressing!
  • And it works in LM Studio 0.2.19 without issue.

Model

This seems to happen with all IQ3-based models I've tried.

For example, here I've used zephyr-orpo-141b-A35b-v0.1 at IQ3_XS.

Modelfile

# IQ3_X_S

FROM ./zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf

TEMPLATE """
<|system|>
{{ .System }}<|endoftext|>
<|user|>
{{ .Prompt }}<|endoftext|>
<|assistant|>
"""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"

What did you expect to see?

The model to be imported successfully, just like any non-IQ-quant GGUF.

Steps to reproduce

As per above

  1. Download zephyr-orpo-141b-A35b-v0.1.IQ3_XS.
  2. Join the split GGUFs using gguf-split --merge <first gguf file> <output file>, since it seems Ollama doesn't support multi-file models (see log below).
  3. Create a basic Modelfile.
  4. Run ollama create with the GGUF and Modelfile (the commands are consolidated in the sketch below).
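For reference, the reproduction boils down to the two commands below; the file names follow the zephyr-orpo example used in this issue, so adjust them for other models:

# merge the split GGUF parts into a single file (gguf-split ships with llama.cpp)
gguf-split --merge zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf

# import the merged GGUF via a Modelfile whose FROM line points at it
ollama create sammcj/zephyr-orpo-141b-A35b-v0.1:IQ3_XS -f Modelfile-IQ3_XS
# fails with: Error: invalid file magic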

Are there any recent changes that introduced the issue?

I think it's always been a problem, at least whenever I've tried it.

OS

macOS

Architecture

arm64

Platform

No response

Ollama version

main, v0.1.31

GPU

Apple

GPU info

96GB M2 Max

CPU

Apple

Other software

Merge multi-part GGUF using gguf-split

samm-mbp ~/.cache/lm-studio/models/MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-GGUF [1] $ gguf-split --merge zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf

gguf_merge: zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf -> zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf ...ggml_opencl: selecting platform: 'Apple'
ggml_opencl: selecting device: 'Apple M2 Max'
done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00002-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00003-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00004-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00005-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00002-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00003-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00004-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00005-of-00005.gguf done

llama.cpp load logs (without Ollama)

main -m zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf -ngl 99 -p 'tell me a joke'
Log start
main: build = 1266 (ab9a3240)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
main: seed  = 1712985554
ggml_opencl: selecting platform: 'Apple'
ggml_opencl: selecting device: 'Apple M2 Max'
llama_model_loader: loaded meta data with 30 key-value pairs and 563 tensors from zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models--HuggingFaceH4--zephyr-orpo-14...
llama_model_loader: - kv   2:                          llama.block_count u32              = 56
llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 16384
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 48
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  12:                          general.file_type u32              = 22
llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32000
llama_model_loader: - kv  14:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  24:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                                   split.no u16              = 0
llama_model_loader: - kv  28:                                split.count u16              = 0
llama_model_loader: - kv  29:                        split.tensors.count i32              = 563
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   56 tensors
llama_model_loader: - type q8_0:  112 tensors
llama_model_loader: - type q5_K:   56 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq3_xxs:  140 tensors
llama_model_loader: - type iq3_s:   85 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 65536
llm_load_print_meta: n_embd           = 6144
llm_load_print_meta: n_head           = 48
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 56
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 16384
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 65536
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x22B
llm_load_print_meta: model ftype      = IQ3_XS - 3.3 bpw
llm_load_print_meta: model params     = 140.62 B
llm_load_print_meta: model size       = 54.23 GiB (3.31 BPW)
llm_load_print_meta: general.name     = models--HuggingFaceH4--zephyr-orpo-141b-A35b-v0.1
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.77 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 55296.00 MiB, offs =            0
ggml_backend_metal_buffer_from_ptr: allocated buffer, size =   483.48 MiB, offs =  57636012032, (55779.86 / 73728.00)
llm_load_tensors: offloading 56 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 57/57 layers to GPU
llm_load_tensors:        CPU buffer size =    80.57 MiB
llm_load_tensors:      Metal buffer size = 55449.46 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 77309.41 MB
llama_kv_cache_init:        CPU KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  112.00 MiB, K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   134.52 MiB, (55915.88 / 73728.00)
llama_new_context_with_model:      Metal compute buffer size =   134.50 MiB
llama_new_context_with_model:        CPU compute buffer size =    13.01 MiB
llama_new_context_with_model: graph nodes  = 2862
llama_new_context_with_model: graph splits = 114

system_info: n_threads = 8 / 12 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 1

ollama serve logs

ollama serve
time=2024-04-13T15:27:47.492+10:00 level=INFO source=images.go:812 msg="total blobs: 135"
time=2024-04-13T15:27:47.716+10:00 level=INFO source=images.go:819 msg="total unused blobs removed: 2"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.PullModelHandler (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.GenerateHandler (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.ChatHandler (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.EmbeddingsHandler (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.CreateModelHandler (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.PushModelHandler (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.CopyModelHandler (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.DeleteModelHandler (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.ShowModelHandler (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.CreateBlobHandler (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.HeadBlobHandler (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.ChatHandler (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-04-13T15:27:47.718+10:00 level=INFO source=routes.go:1139 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-04-13T15:27:47.742+10:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/var/folders/b2/wnpx7gg566l7dq63x0h27r9r0000gn/T/ollama3558789160/runners
time=2024-04-13T15:27:47.759+10:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [metal]"
[GIN] 2024/04/13 - 15:27:50 | 200 |      41.709µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/13 - 15:29:01 | 201 | 44.933516875s |       127.0.0.1 | POST     "/api/blobs/sha256:6f4db7fb502f25cae8604a40e18a35adcb941ca2c7783c9e7f8a7aca41711fff"
[GIN] 2024/04/13 - 15:29:20 | 200 | 19.285965125s |       127.0.0.1 | POST     "/api/create"
@sammcj added the bug (Something isn't working) and needs-triage labels on Apr 13, 2024
@mann1x
Contributor

mann1x commented Apr 13, 2024

@sammcj IQ3_XS is not supported.

This is the list of quantizations currently supported in the main release:

const (
	fileTypeF32 uint32 = iota
	fileTypeF16
	fileTypeQ4_0
	fileTypeQ4_1
	fileTypeQ4_1_F16
	fileTypeQ8_0 uint32 = iota + 2 // = 7; skips the slots of the removed Q4_2/Q4_3 types
	fileTypeQ5_0
	fileTypeQ5_1
	fileTypeQ2_K
	fileTypeQ3_K_S
	fileTypeQ3_K_M
	fileTypeQ3_K_L
	fileTypeQ4_K_S
	fileTypeQ4_K_M
	fileTypeQ5_K_S
	fileTypeQ5_K_M
	fileTypeQ6_K
	fileTypeIQ2_XXS
	fileTypeIQ2_XS
	fileTypeQ2_K_S
	fileTypeQ3_K_XS
	fileTypeIQ3_XXS
)
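As a side note, one way to check which file type a GGUF declares before trying to import it is to dump its metadata with llama.cpp's gguf-dump script (the path below assumes a llama.cpp checkout and may differ in your setup):

# prints general.file_type among the key/value pairs;
# the llama.cpp log above reports 22 for this model, which llama.cpp names IQ3_XS
python3 llama.cpp/gguf-py/scripts/gguf-dump.py zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf | grep -i file_type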

@sammcj
Contributor Author

sammcj commented Apr 13, 2024

Thanks @mann1x, that's interesting; any idea why that might be?

IQ3_XS seems like a bit of a sweet spot: I think it's usually pretty much as good as IQ4 but still much smaller, whereas IQ3_XXS is a noticeable drop.

@mann1x
Contributor

mann1x commented Apr 13, 2024

They will be supported in the future, not sure when.
There's not a huge interest because the i-matrix quants are noticeably slower during inference.
And it takes a lot of time to quantise them properly, so they are not generally available.
Let's hope.

@sammcj
Contributor Author

sammcj commented Apr 14, 2024

Ah, I haven't actually noticed they're that much slower than the K quants; maybe I should try running Q3_K_M instead of IQ3_XS on my MacBook 🤔

@mann1x
Contributor

mann1x commented Apr 14, 2024

To be honest anything below Q4 is poor quality, better to pick a smaller model.
There are other formats better suited for 2/3-bit than GGUF, with 3-bit very close to 4-bit.
Very soon they will be the "standard" for small sizes.

@oldmanjk

They will be supported in the future, not sure when. There's not a huge interest because the i-matrix quants are noticeably slower during inference. And it takes a lot of time to quantise them properly, so they are not generally available. Let's hope.

  • As far as I know, IQ quants are not the same thing as i-matrix quants, which can apply to any of the other quants, like K quants.
  • I think there actually is a huge interest in both i-matrix quants and IQ quants, especially combined.
  • In my testing, IQ quants are no slower than K quants.
  • I don't know what you consider "a lot of time to quantize them properly," but I can quantize them in a few minutes.

@oldmanjk

Ah, I haven't actually noticed they're that much slower than the K quants; maybe I should try running Q3_K_M instead of IQ3_XS on my MacBook 🤔

They're not

@oldmanjk

To be honest anything below Q4 is poor quality, better to pick a smaller model. There are other formats better suited for 2/3-bit than GGUF, with 3-bit very close to 4-bit. Very soon they will be the "standard" for small sizes.

Do you have any data to support the claim that a smaller model with a higher quant will outperform a larger model with a smaller quant? As long as Ollama only supports GGUF, I don't know how "other formats better suited for 2/3-bit" is relevant to this discussion.

@oldmanjk

oldmanjk commented Apr 14, 2024

+1 to requesting support for the rest of the IQ quants. I'm especially interested in IQ4_NL, personally. An IQ4_NL quant of Command-R with 2K context fits and works on a 24 GiB card. A Q4_K quant of the same goes OOM after about 200 tokens of context.

@mann1x
Contributor

mann1x commented Apr 14, 2024

  • As far as I know, IQ quants are not the same thing as i-matrix quants, which can apply to any of the other quants, like K quants.

I don't know enough to say for sure; do you have any reference?

https://huggingface.co/Lewdiculous/Eris_7B-GGUF-IQ-Imatrix

From what I understood, the IQ quants are just another format: you can quantize a model with them directly, but it will be very inefficient and you lose the size-reduction advantage.
Or you can create an i-matrix; it's not a quantization itself, but a map that guides the quantization.
I gave up creating one because it was taking ages on my system...

  • I think there actually is a huge interest in both i-matrix quants and IQ quants, especially combined.

Not right now; there are still problems with the K-quants and more pressing items, so it's not much of a priority for llama.cpp or Ollama.
I'm very interested personally!

  • In my testing, IQ quants are no slower than K quants.

I didn't test them myself, but I've seen benchmarks (not very recent) where the t/s went down from 20-25 to 15-20.
I consider that a lot.
Are you sure they were quantised with the i-matrix? Otherwise there's not much of a speed drop.

  • I don't know what you consider "a lot of time to quantize them properly," but I can quantize them in a few minutes.

I meant the time to create the i-matrix.

Do you have any data to support the claim that a smaller model with a higher quant will outperform a larger model with a smaller quant? As long as Ollama only supports GGUF, I don't know how "other formats better suited for 2/3-bit" is relevant to this discussion.

Ollama uses llama.cpp as its backend, so anything about llama.cpp is relevant.

ggerganov/llama.cpp#545

I never claimed that "a smaller model with a higher quant will outperform a larger model with a smaller quant"
Not sure how you got to this conclusion. Outperform on which metrics?
It's a recommendation, given by everyone. For obvious reasons.

@oldmanjk

oldmanjk commented Apr 15, 2024

From what I understood, the IQ quants are just another format: you can quantize a model with them directly, but it will be very inefficient and you lose the size-reduction advantage. Or you can create an i-matrix; it's not a quantization itself, but a map that guides the quantization. I gave up creating one because it was taking ages on my system...

An IQ quant is a new quantization format for GGUF files. ggerganov/llama.cpp#4773
The i-matrix (importance matrix) is described in the same link. IQ quants don't need i-matrices and i-matrices can be used without IQ quants (on, for example, K-quants). Try using a chunk size of 100 to speed things up.
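For anyone who wants to try this, here is a rough sketch of the llama.cpp workflow being described; the binary names, flags, and calibration file are illustrative and vary between llama.cpp builds, so check the --help output of your version:

# 1. build an importance matrix from some calibration text
#    (-c sets the chunk size; a small value such as 100 speeds this up, as suggested above)
./imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat -c 100 -ngl 99

# 2. quantize using the importance matrix; this works for IQ quants and K-quants alike
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_XS.gguf IQ3_XS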

  • I think there actually is a huge interest in both i-matrix quants and IQ quants, especially combined.

Not right now; there are still problems with the K-quants and more pressing items, so it's not much of a priority for llama.cpp or Ollama. I'm very interested personally!

There's no point trying to disprove an opinion. All of us are personally interested.

I didn't test them myself, but I've seen benchmarks (not very recent) where the t/s went down from 20-25 to 15-20. I consider that a lot. Are you sure they were quantised with the i-matrix? Otherwise there's not much of a speed drop.

Positive. I created the i-matrices and quantized the models myself. I've since read there might be slowdown while offloading, which I'm not doing. On my GPU, performance is the same.

I meant the time to create the i-matrix.

I mean the whole process. It depends on the model, of course. A 32B takes a few minutes. A 72B takes a couple hours, but I don't think I can realistically run a model that big. Smaller models would probably be seconds.

Ollama uses llama.cpp as its backend, so anything about llama.cpp is relevant.

Last I checked, llama.cpp only uses GGUF too, so my point stands. I see you've linked to a thread about a conversion script. That converts to GGUF. So we're back to GGUF again. Starting to smell like bad faith around here.

I never claimed that "a smaller model with a higher quant will outperform a larger model with a smaller quant"

"To be honest anything below Q4 is poor quality, better to pick a smaller model." - You.

Not sure how you got to this conclusion.

See above.

Outperform on which metrics?

You tell me. That's my point.

It's a recommendation, given by everyone. For obvious reasons.

So you say X, I ask for evidence of X, you claim not to have said X, then say X again, claim everyone says X and that it's obvious why they say X, again, without evidence. From what I've read, most people actually say Y, also without evidence. That's why I asked for evidence. Because I'd like to know. Actual benchmarks would be nice. Much better than empty claims.

I'm done arguing with you, "for obvious reasons."

@sammcj I was trying to defend your point. Maybe you missed that. Oh well

@mann1x
Contributor

mann1x commented Apr 15, 2024

I'm done arguing with you, "for obvious reasons."

I'm done arguing too; there's really no obvious reason why you should attack me or defend @sammcj...
Weird!

But thanks for all the useful information and the tip about the chunk size, I'll try that!

@mann1x
Contributor

mann1x commented Apr 15, 2024

Made a PR to support the latest IQ formats: #3657

IQ4_NL is now fixed.

They work pretty nicely for me, but only on the GPU.
Definitely not recommended for running on CPU with a Ryzen.

With the latest llama.cpp I can create the imatrix.dat for Starling-LM-7B-beta in less than 2 minutes, and the quantization is barely slower than the normal one.

I made a quick benchmark with a Ryzen 5950X and an RTX 3090.

Be careful with IQ3_XXS, it's a CPU killer :)

Q4_0 GPU
total duration: 8.7246425s
load duration: 3.2494ms
prompt eval count: 31 token(s)
prompt eval duration: 226.885ms
prompt eval rate: 136.63 tokens/s
eval count: 842 token(s)
eval duration: 8.486696s
eval rate: 99.21 tokens/s

Q4_0 CPU [66°C]
total duration: 1m29.3337892s
load duration: 1.8636889s
prompt eval count: 31 token(s)
prompt eval duration: 1.382345s
prompt eval rate: 22.43 tokens/s
eval count: 852 token(s)
eval duration: 1m26.071812s
eval rate: 9.90 tokens/s

IQ4_XS GPU
total duration: 10.3567447s
load duration: 17.8231ms
prompt eval count: 31 token(s)
prompt eval duration: 294.5ms
prompt eval rate: 105.26 tokens/s
eval count: 826 token(s)
eval duration: 10.035686s
eval rate: 82.31 tokens/s

IQ4_XS CPU [70°C]
total duration: 11m42.2906152s
load duration: 2.1723736s
prompt eval count: 31 token(s)
prompt eval duration: 21.312776s
prompt eval rate: 1.45 tokens/s
eval count: 911 token(s)
eval duration: 11m18.790198s
eval rate: 1.34 tokens/s

IQ3_XXS GPU
total duration: 9.0115311s
load duration: 3.2502ms
prompt eval count: 23 token(s)
prompt eval duration: 266.301ms
prompt eval rate: 86.37 tokens/s
eval count: 791 token(s)
eval duration: 8.735132s
eval rate: 90.55 tokens/s

IQ3_XXS CPU [80°C]
total duration: 6m20.6749411s
load duration: 2.2706954s
prompt eval count: 852 token(s)
prompt eval duration: 1.070351s
prompt eval rate: 796.00 tokens/s
eval count: 806 token(s)
eval duration: 6m17.320994s
eval rate: 2.14 tokens/s

IQ3_S GPU
total duration: 7.2284989s
load duration: 2.4185ms
prompt eval count: 30 token(s)
prompt eval duration: 258.932ms
prompt eval rate: 115.86 tokens/s
eval count: 636 token(s)
eval duration: 6.959609s
eval rate: 91.38 tokens/s

IQ2_XXS GPU
total duration: 7.5380617s
load duration: 3.1441ms
prompt eval count: 30 token(s)
prompt eval duration: 350.5ms
prompt eval rate: 85.59 tokens/s
eval count: 588 token(s)
eval duration: 7.177771s
eval rate: 81.92 tokens/s

IQ2_XS GPU
total duration: 7.5052537s
load duration: 1.5911ms
prompt eval count: 30 token(s)
prompt eval duration: 61.26ms
prompt eval rate: 489.72 tokens/s
eval count: 724 token(s)
eval duration: 7.427141s
eval rate: 97.48 tokens/s

IQ2_S GPU
total duration: 8.4011733s
load duration: 2.0952ms
prompt eval count: 30 token(s)
prompt eval duration: 229.61ms
prompt eval rate: 130.66 tokens/s
eval count: 789 token(s)
eval duration: 8.160233s
eval rate: 96.69 tokens/s

IQ1_S GPU
total duration: 6.5367633s
load duration: 2.6285ms
prompt eval count: 30 token(s)
prompt eval duration: 384.96ms
prompt eval rate: 77.93 tokens/s
eval count: 638 token(s)
eval duration: 6.14229s
eval rate: 103.87 tokens/s

IQ4_NL GPU
total duration: 12.0501547s
load duration: 2.5946ms
prompt eval count: 30 token(s)
prompt eval duration: 339.041ms
prompt eval rate: 88.48 tokens/s
eval count: 1006 token(s)
eval duration: 11.702335s
eval rate: 85.97 tokens/s
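For reference, these figures have the shape of ollama run --verbose output, so something along these lines should reproduce them (the model tag and prompt are just placeholders):

ollama run starling-lm-7b-beta:IQ4_XS --verbose "tell me a story"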

Size of the files:

[image: file sizes of the quantized GGUFs]

@jukofyork

We definitely need IQ4_XS:

https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

But I'm a bit afraid of using this PR in case it buggers up all the imported models if/when the enum order changes... ☹️

@mann1x
Contributor

mann1x commented Apr 15, 2024

The enum order doesn't matter; the type is checked against the tensor's t.Kind.
And it didn't mess up my massive library, so don't worry :P

func (t Tensor) typeSize() uint64 {
	blockSize := t.blockSize()

	switch t.Kind {
	case 0: // FP32
		return 4
	case 1: // FP16
		return 2
	case 2: // Q4_0
		return 2 + blockSize/2
	case 3: // Q4_1
		return 2 + 2 + blockSize/2
	case 6: // Q5_0
		return 2 + 4 + blockSize/2
	case 7: // Q5_1
		return 2 + 2 + 4 + blockSize/2
	case 8: // Q8_0
		return 2 + blockSize
	case 9: // Q8_1
		return 4 + 4 + blockSize
	case 10: // Q2_K
		return blockSize/16 + blockSize/4 + 2 + 2
	case 11: // Q3_K
		return blockSize/8 + blockSize/4 + 12 + 2
	case 12: // Q4_K
		return 2 + 2 + 12 + blockSize/2
	case 13: // Q5_K
		return 2 + 2 + 12 + blockSize/8 + blockSize/2
	case 14: // Q6_K
		return blockSize/2 + blockSize/4 + blockSize/16 + 2
	case 15: // Q8_K
		return 2 + blockSize + 2*blockSize/16
	case 16: // IQ2_XXS
		return 2 + 2*blockSize/8
	case 17: // IQ2_XS
		return 2 + 2*blockSize/8 + blockSize/32
	case 18: // IQ3_XXS
		return 2 + 3*blockSize/8
	default: // unknown/unsupported tensor type
		return 0
	}
}

@jukofyork

The enum order doesn't matter; the type is checked against the tensor's t.Kind. And it didn't mess up my massive library, so don't worry :P

So it's definitely not stored anywhere in Ollama's metadata files (that was my main worry)?

@mann1x
Contributor

mann1x commented Apr 15, 2024

So it's definitely not stored anywhere in Ollama's metadata files (that was my main worry)?

Definitely not, the file is parsed every time it's loaded.

@jukofyork

So it's definitely not stored anywhere in Ollama's metadata files (that was my main worry)?

Definitely not, the file is parsed every time it's loaded.

Thanks! I'll give it a try later and report back. Hopefully it gets accepted soon.

@oldmanjk

oldmanjk commented Apr 16, 2024

@mann1x
I never "attacked" you, nor was I defending @sammcj.
Like I said, I defended his point. Thanks for the PR. Are you giving up on IQ4_NL? Should someone else look into it?

@WiSaGaN

WiSaGaN commented Apr 16, 2024

According to this table: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

The 8x22B model (which has roughly 141B parameters, be it WizardLM or not) would have IQ3_XS at 58GB, which may be just the sweet spot for people with 64GB memory (Mac or PC).

@oldmanjk

According to this table: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

The 8x22B model (which has roughly 141B parameters, be it WizardLM or not) would have IQ3_XS at 58GB, which may be just the sweet spot for people with 64GB memory (Mac or PC).

If you get that going, would you mind posting performance numbers?

@mann1x
Contributor

mann1x commented Apr 16, 2024

Like I said, I defended his point. Thanks for the PR. Are you giving up on IQ4_NL? Should someone else look into it?

Let it go, I don't mind :) It's just a misunderstanding.

I'm not giving up, of course! But I'd like to have some help, another pair of eyes.
Just looking at the llama.cpp code, I don't see anything obvious.
But I was tired yesterday; maybe today is a better day.

@sammcj
Contributor Author

sammcj commented Apr 16, 2024

The 8x22B model (which has roughly 141B parameters, be it WizardLM or not) would have IQ3_XS at 58GB, which may be just the sweet spot for people with 64GB memory (Mac or PC).

Bingo, exactly my use case.

Obviously if it's a lot slower than say Q3_something it may not be worth it, but if there's not much in it, it's definitely a win.

@WiSaGaN

WiSaGaN commented Apr 16, 2024

According to this table: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
The 8x22B model (which has roughly 141B parameters, be it WizardLM or not) would have IQ3_XS at 58GB, which may be just the sweet spot for people with 64GB memory (Mac or PC).

If you get that going, would you mind posting performance numbers?

No, I haven't got it running yet. I would expect it to be pretty slow on a PC using the CPU, but a Mac with greater memory bandwidth should be pretty usable.

@oldmanjk

oldmanjk commented Apr 16, 2024

According to this table: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
The 8x22B model (which has roughly 141B parameters, be it WizardLM or not) would have IQ3_XS at 58GB, which may be just the sweet spot for people with 64GB memory (Mac or PC).

If you get that going, would you mind posting performance numbers?

No, I haven't got it running yet. I would expect it to be pretty slow on a PC using the CPU, but a Mac with greater memory bandwidth should be pretty usable.

"If you get that going, would you mind posting performance numbers?"

@mann1x
Contributor

mann1x commented Apr 17, 2024

I have updated the PR to fix IQ4_NL support; I will add the benchmark to the table above.

@zedmango

I have updated the PR to fix IQ4_NL support; I will add the benchmark to the table above.

Any chance of getting IQ2_M, IQ3_XS, IQ3_M, IQ4_XS, IQ4 added? I'd really like those.

@oldmanjk

I have updated the PR to fix IQ4_NL support; I will add the benchmark to the table above.

Thank you
