My RX 560 is actually supported in macOS (mine is a Hackintosh running macOS Ventura 13.4), but when I try to run llama.cpp it cannot use the Metal backend. I already compiled it with LLAMA_METAL=1 make, but when I run this command:
./main -m ./models/falcon-7b-Q4_0-GGUF.gguf -n 128 -ngl 1
it fails with:
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x7fcb478145f0 | th_max = 768 | th_width = 64
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x7fcb47814dd0 | th_max = 1024 | th_width = 64
ggml_metal_init: loaded kernel_mul_mm_f16_f32 0x0 | th_max = 0 | th_width = 0
ggml_metal_init: load pipeline error: Error Domain=CompilerError Code=2 "SC compilation failure
There is a call to an undefined label" UserInfo={NSLocalizedDescription=SC compilation failure
There is a call to an undefined label}
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model './models/falcon-7b-Q4_0-GGUF.gguf'
main: error: unable to load model
Please make it compatible, because I can run Stable Diffusion on macOS with this GPU (MPS works there).
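In case it helps with debugging, here is a minimal Swift sketch of my own (not part of llama.cpp) that prints which Metal GPU families the device reports. My assumption is that the failing kernel_mul_mm_* kernels depend on simdgroup matrix support, which is tied to the Apple7 GPU family and may not be advertised by AMD GPUs:

```swift
// Diagnostic sketch (assumption: the mul_mm kernels that fail to load need
// simdgroup matrix support, guaranteed only on the Apple7 GPU family).
// Build and run with: swiftc check_metal.swift -o check_metal && ./check_metal
import Foundation
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    print("No Metal device found")
    exit(1)
}

print("Device name: \(device.name)")
// mac2 is the baseline family that discrete GPUs like the RX 560 report.
print("Supports MTLGPUFamily.mac2:   \(device.supportsFamily(.mac2))")
// apple7 corresponds to Apple Silicon (M1-class) GPUs.
print("Supports MTLGPUFamily.apple7: \(device.supportsFamily(.apple7))")
```

If apple7 comes back false here, that would explain why only the mul_mm pipelines fail while the other kernels load fine.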