ollama not utilizing AMD GPU through METAL #5071

Closed · dbl001 opened this issue Jun 15, 2024 · 1 comment
Labels: bug (Something isn't working)

dbl001 commented Jun 15, 2024

What is the issue?

Here's my build command:

% OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on -DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=on -DGGML_USE_METAL=on -DLLAMA_METAL_COMPILE_SERIALIZED=1" go generate -v ./...

The go generate script, however, overrides this and sets -DLLAMA_METAL=off:

+ cmake -S ../llama.cpp -B ../build/darwin/x86_64/cpu_avx2 -DCMAKE_OSX_DEPLOYMENT_TARGET=11.3 -DLLAMA_METAL_MACOSX_VERSION_MIN=11.3 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_METAL_EMBED_LIBRARY=on -DCMAKE_SYSTEM_PROCESSOR=x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64 -DLLAMA_METAL=off -DLLAMA_NATIVE=off -DLLAMA_ACCELERATE=on -DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_AVX512=off -DLLAMA_FMA=on -DLLAMA_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off
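
For context, this override comes from ollama's darwin generate script, which builds the x86_64 runners as CPU-only variants with Metal hard-coded off, so OLLAMA_CUSTOM_CPU_DEFS cannot re-enable it. A minimal sketch of the relevant branch (the file path, variable names, and exact flags are assumptions based on the llm/generate scripts in the repository at the time):

# llm/generate/gen_darwin.sh (sketch, not verbatim)
case "${GOARCH}" in
"amd64")
    # Intel Macs: CPU-only runners, Metal explicitly disabled
    CPU_DEFS="${COMMON_DARWIN_DEFS} -DLLAMA_METAL=off -DLLAMA_NATIVE=off -DLLAMA_ACCELERATE=on"
    ;;
"arm64")
    # Apple Silicon: a single Metal-enabled runner
    GPU_DEFS="${COMMON_DARWIN_DEFS} -DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=on"
    ;;
esac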

As a result, the server runs without utilizing the GPU:

 % ollama serve
2024/06/15 10:36:43 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/Users/davidlaxer/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-06-15T10:36:43.742-07:00 level=INFO source=images.go:725 msg="total blobs: 28"
time=2024-06-15T10:36:43.743-07:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024-06-15T10:36:43.744-07:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.44)"
time=2024-06-15T10:36:43.744-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/3n/56fpv14n4wj0c1l1sb106pzw0000gn/T/ollama2746628305/runners
time=2024-06-15T10:36:43.770-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]"
time=2024-06-15T10:36:43.770-07:00 level=INFO source=types.go:71 msg="inference compute" id="" library=cpu compute="" driver=0.0 name="" total="128.0 GiB" available="0 B"
time=2024-06-15T10:41:36.771-07:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=0 memory.available="0 B" memory.required.full="4.6 GiB" memory.required.partial="794.5 MiB" memory.required.kv="256.0 MiB" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-06-15T10:41:36.772-07:00 level=INFO source=server.go:341 msg="starting llama server" cmd="/var/folders/3n/56fpv14n4wj0c1l1sb106pzw0000gn/T/ollama2746628305/runners/cpu_avx2/ollama_llama_server --model /Users/davidlaxer/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 63042"
time=2024-06-15T10:41:36.780-07:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-15T10:41:36.780-07:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-15T10:41:36.780-07:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3051 commit="5921b8f0" tid="0x7ff85e144fc0" timestamp=1718473296
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x7ff85e144fc0" timestamp=1718473296 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="63042" tid="0x7ff85e144fc0" timestamp=1718473296
time=2024-06-15T10:41:37.032-07:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /Users/davidlaxer/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 1.5928 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  4437.80 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="0x7ff85e144fc0" timestamp=1718473304
time=2024-06-15T10:41:44.296-07:00 level=INFO source=server.go:572 msg="llama runner started in 7.52 seconds"
[GIN] 2024/06/15 - 10:41:44 | 200 |  9.093560734s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:41:54 | 200 |  1.154057317s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:35 | 200 | 40.688860055s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:41 | 200 |  6.229453908s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:43 | 200 |  1.270069572s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:43:23 | 200 | 40.445274886s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:43:29 | 200 |   5.92720864s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:43:31 | 200 |  1.186419337s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:44:11 | 200 | 40.475555077s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:44:17 | 200 |  6.143890785s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:44:19 | 200 |  1.327419018s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:44:59 | 200 | 40.358735272s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:45:05 | 200 |  5.842486079s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:45:06 | 200 |  1.151830787s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:45:45 | 200 | 38.130374809s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:45:50 | 200 |  5.863281373s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:45:52 | 200 |  763.567512ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:16 | 200 | 24.464886509s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:17 | 200 |  844.612204ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:25 | 200 |  7.366777251s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:27 | 200 |  1.314771295s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:45 | 200 | 18.025285278s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:46:47 | 200 |  1.448278338s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:47:26 | 200 | 38.918308755s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:47:45 | 200 | 18.653427075s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:47:47 | 200 |  1.097321882s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:48:29 | 200 |  41.37452429s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:48:32 | 200 |  1.331141018s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:49:11 | 200 | 39.111446616s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:49:31 | 200 | 20.771630418s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:49:33 | 200 |  1.171729854s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:50:14 | 200 | 40.365819016s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:50:17 | 200 |  1.213320125s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:50:35 | 200 | 18.183581597s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:50:38 | 200 |  1.575906212s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:51:17 | 200 | 39.023216091s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:51:37 | 200 | 19.720073759s |       127.0.0.1 | POST     "/api/embeddings"
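
The 18-41 second requests above are embedding calls running entirely on the CPU runner. Each [GIN] line corresponds to a POST against ollama's documented embeddings endpoint, reproducible with something like the following (the model name and prompt here are illustrative):

% curl http://127.0.0.1:11434/api/embeddings -d '{
    "model": "llama3",
    "prompt": "some text to embed"
  }'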

If I run

% ./main -m /Users/davidlaxer/llama.cpp/models/7B/ggml-model-q4_0.gguf -n 128 -ngl 1

the AMD GPU is detected:

ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon Pro 5700 XT
ggml_metal_init: picking default device: AMD Radeon Pro 5700 XT
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/davidlaxer/ollama/llm/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   AMD Radeon Pro 5700 XT
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = false
ggml_metal_init: hasUnifiedMemory              = false
ggml_metal_init: recommendedMaxWorkingSetSize  = 17163.09 MB
ggml_metal_init: skipping kernel_mul_mm_f32_f32                    (not supported)
ggml_metal_init: skipping kernel_mul_mm_f16_f32                    (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_0_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_1_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_0_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_1_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_q8_0_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_q2_K_f32                   (not supported)
...
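
So a llama.cpp binary built with Metal enabled does initialize the AMD GPU, even though the simdgroup matrix-multiplication kernels are skipped on this device. For comparison, a direct llama.cpp build along these lines produces the output above (a sketch; LLAMA_METAL was the cmake option llama.cpp exposed around this build, and the binary path differs for a Makefile build):

% cmake -S . -B build -DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=on -DCMAKE_BUILD_TYPE=Release
% cmake --build build -j
% ./build/bin/main -m models/7B/ggml-model-q4_0.gguf -n 128 -ngl 1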

OS: macOS
GPU: AMD
CPU: Intel
Ollama version: 0.2.1

dhiltgen (Collaborator) commented

Metal/GPU support for Intel Macs is being tracked via #1016, and community PRs are welcome.
