From 82ab0592332866fdfd388d9319b9f3a2f6450f9f Mon Sep 17 00:00:00 2001
From: Li He
Date: Sun, 2 Nov 2025 22:05:05 -0800
Subject: [PATCH 1/4] opencl: update docs

---
 docs/backend/OPENCL.md | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/docs/backend/OPENCL.md b/docs/backend/OPENCL.md
index 07146f7102f3d..c943b0033b76b 100644
--- a/docs/backend/OPENCL.md
+++ b/docs/backend/OPENCL.md
@@ -45,12 +45,14 @@ The llama.cpp OpenCL backend is designed to enable llama.cpp on **Qualcomm Adren
 |:----------------------:|:--------------------------:|
 | Q4_0 | Support |
 | Q6_K | Support, but not optimized |
+| Q8_0 | Support |
+| MXFP4 | Support |
 
 ## Model Preparation
 
-You can refer to the general [*Prepare and Quantize*](README.md#prepare-and-quantize) guide for model prepration.
+You can refer to the general [llama-quantize tool](tools/quantize/README.md) for steps to convert a model in Hugging Face safetensors format to GGUF with quantization.
 
-Currently we support `Q4_0` quantization and have optimize for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize`. For example,
+Currently we support `Q4_0` quantization and have optimized for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize`. For example,
 
 ```sh
 ./llama-quantize --pure ggml-model-qwen2.5-3b-f16.gguf ggml-model-qwen-3b-Q4_0.gguf Q4_0
@@ -58,6 +60,12 @@ Currently we support `Q4_0` quantization and have optimize for it. To achieve be
 Since `Q6_K` is also supported, `Q4_0` quantization without `--pure` will also work. However, the performance will be worse compared to pure `Q4_0` quantization.
 
+### MXFP4 Models
+
+OpenAI gpt-oss models are in MXFP4. The quantized model will be in MXFP4_MOE, a mixture of MXFP4 and Q8_0.
+For this quantization, there is no need to specify `--pure`.
+For gpt-oss-20b model, you can directly download a quantized GGUF file in MXFP4 from Hugging Face.
+
 ## CMake Options
 
 The OpenCL backend has the following CMake options that control the behavior of the backend.
@@ -146,10 +154,13 @@ A Snapdragon X Elite device with Windows 11 Arm64 is used. Make sure the followi
 * Ninja
 * Visual Studio 2022
 * Powershell 7
+* Python
 
 Visual Studio provides necessary headers and libraries although it is not directly used for building. Alternatively, Visual Studio Build Tools can be installed instead of the full Visual Studio.
 
+> Note that building with Visual Studio's `cl` compiler is not supported; Clang must be used. Clang depends on libraries provided by Visual Studio, so either Visual Studio or the Visual Studio Build Tools must be installed.
+
 Powershell 7 is used for the following commands. If an older version of Powershell is used, these commands may not work as they are.
 
@@ -201,7 +212,9 @@ ninja
 ## Known Issues
 
-- Currently OpenCL backend does not work on Adreno 6xx GPUs.
+- Flash attention does not always improve performance. Disable it for models above 3B.
+- Currently the OpenCL backend works on A6xx GPUs with recent drivers and compilers (usually found in IoT platforms).
+  However, it does not work on A6xx GPUs found in phones with old drivers and compilers.
 ## TODO

From a05b025be4c33620f2dceb2982a05363515dd152 Mon Sep 17 00:00:00 2001
From: Li He
Date: Mon, 3 Nov 2025 10:16:56 -0800
Subject: [PATCH 2/4] opencl: update docs

---
 docs/backend/OPENCL.md | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/docs/backend/OPENCL.md b/docs/backend/OPENCL.md
index c943b0033b76b..e7f9c5ebfa186 100644
--- a/docs/backend/OPENCL.md
+++ b/docs/backend/OPENCL.md
@@ -60,11 +60,16 @@ Currently we support `Q4_0` quantization and have optimized for it. To achieve b
 Since `Q6_K` is also supported, `Q4_0` quantization without `--pure` will also work. However, the performance will be worse compared to pure `Q4_0` quantization.
 
-### MXFP4 Models
+### `MXFP4` MoE Models
 
-OpenAI gpt-oss models are in MXFP4. The quantized model will be in MXFP4_MOE, a mixture of MXFP4 and Q8_0.
+OpenAI gpt-oss models are MoE models in `MXFP4`. The quantized model will be in `MXFP4_MOE`, a mixture of `MXFP4` and `Q8_0`.
 For this quantization, there is no need to specify `--pure`.
-For gpt-oss-20b model, you can directly download a quantized GGUF file in MXFP4 from Hugging Face.
+For the gpt-oss-20b model, you can directly [download](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF) the quantized GGUF file in `MXFP4_MOE` from Hugging Face.
+
+Although it is possible to quantize the gpt-oss-20b model in pure `Q4_0`, it is not recommended since `MXFP4` has been optimized for MoE while `Q4_0` has not.
+Hence, using the default `MXFP4_MOE` quantization will give better performance compared to pure `Q4_0` quantization for this model.
+
+However, note that the `Q4_0` model found [here](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) is a mixture of `Q4_0`, `Q8_0` and `MXFP4` and gives better performance than `MXFP4_MOE` quantization.
 
 ## CMake Options
 
From d90c17404b2254b9566e8b968f379f6587701ab1 Mon Sep 17 00:00:00 2001
From: Li He
Date: Mon, 3 Nov 2025 14:06:30 -0800
Subject: [PATCH 3/4] opencl: fix link

---
 docs/backend/OPENCL.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/backend/OPENCL.md b/docs/backend/OPENCL.md
index e7f9c5ebfa186..a53adc0de5135 100644
--- a/docs/backend/OPENCL.md
+++ b/docs/backend/OPENCL.md
@@ -50,7 +50,7 @@ ## Model Preparation
 
-You can refer to the general [llama-quantize tool](tools/quantize/README.md) for steps to convert a model in Hugging Face safetensors format to GGUF with quantization.
+You can refer to the general [llama-quantize tool](/tools/quantize/README.md) for steps to convert a model in Hugging Face safetensors format to GGUF with quantization.
 
 Currently we support `Q4_0` quantization and have optimized for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize`. For example,
 
From 40d20956eccc995413c2eff421d3ac36c9af1d86 Mon Sep 17 00:00:00 2001
From: Li He
Date: Tue, 4 Nov 2025 13:51:28 -0800
Subject: [PATCH 4/4] opencl: update doc

---
 docs/backend/OPENCL.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/docs/backend/OPENCL.md b/docs/backend/OPENCL.md
index a53adc0de5135..e52baffdffd31 100644
--- a/docs/backend/OPENCL.md
+++ b/docs/backend/OPENCL.md
@@ -39,6 +39,9 @@ The llama.cpp OpenCL backend is designed to enable llama.cpp on **Qualcomm Adren
 | Adreno 830 (Snapdragon 8 Elite) | Support |
 | Adreno X85 (Snapdragon X Elite) | Support |
 
+> A6xx GPUs with a recent driver and compiler are supported; they are usually found in IoT platforms.
+> However, A6xx GPUs in phones are likely not supported due to outdated drivers and compilers.
+
 ## DataType Supports
 
 | DataType | Status |
@@ -52,7 +55,7 @@ The llama.cpp OpenCL backend is designed to enable llama.cpp on **Qualcomm Adren
 
 ## Model Preparation
 
 You can refer to the general [llama-quantize tool](/tools/quantize/README.md) for steps to convert a model in Hugging Face safetensors format to GGUF with quantization.
 
-Currently we support `Q4_0` quantization and have optimized for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize`. For example,
+Currently we support `Q4_0` quantization and have optimized the backend for it. To achieve the best performance on Adreno GPUs, add `--pure` to `llama-quantize` (i.e., quantize all weights to `Q4_0`). For example,
 
 ```sh
 ./llama-quantize --pure ggml-model-qwen2.5-3b-f16.gguf ggml-model-qwen-3b-Q4_0.gguf Q4_0
@@ -66,10 +69,10 @@ OpenAI gpt-oss models are MoE models in `MXFP4`. The quantized model will be in
 For this quantization, there is no need to specify `--pure`.
 For the gpt-oss-20b model, you can directly [download](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF) the quantized GGUF file in `MXFP4_MOE` from Hugging Face.
 
-Although it is possible to quantize the gpt-oss-20b model in pure `Q4_0`, it is not recommended since `MXFP4` has been optimized for MoE while `Q4_0` has not.
-Hence, using the default `MXFP4_MOE` quantization will give better performance compared to pure `Q4_0` quantization for this model.
+Although it is possible to quantize the gpt-oss-20b model in pure `Q4_0` (all weights in `Q4_0`), it is not recommended since `MXFP4` has been optimized for MoE while `Q4_0` has not. In addition, accuracy is likely to degrade with such pure `Q4_0` quantization.
+Hence, using the default `MXFP4_MOE` quantization (see the link above) is recommended for this model.
 
-However, note that the `Q4_0` model found [here](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) is a mixture of `Q4_0`, `Q8_0` and `MXFP4` and gives better performance than `MXFP4_MOE` quantization.
+> Note that the `Q4_0` model found [here](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) is a mixture of `Q4_0`, `Q8_0` and `MXFP4`, and gives better performance than `MXFP4_MOE` quantization.
 
 ## CMake Options
 
@@ -217,7 +220,7 @@ ninja
 ## Known Issues
 
-- Flash attention does not always improve performance. Disable it for models above 3B.
+- Flash attention does not always improve performance.
 - Currently the OpenCL backend works on A6xx GPUs with recent drivers and compilers (usually found in IoT platforms).
   However, it does not work on A6xx GPUs found in phones with old drivers and compilers.
 
@@ -225,3 +228,4 @@
 
 - Optimization for Q6_K
 - Support and optimization for Q4_K
+- Improve flash attention
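For a quick end-to-end check of the flow these patches document, something like the sketch below can be used. It is only a sketch: the GGUF file names are placeholders taken from the examples above, and it assumes `llama-quantize` and `llama-bench` have already been built with the OpenCL backend enabled. Since the Known Issues section notes that flash attention does not always improve performance on this backend, `llama-bench`'s comma-separated parameter values (`-fa 0,1`) are used to measure both configurations.

```sh
# Pure Q4_0 quantization for Adreno: --pure keeps every weight tensor in Q4_0
./llama-quantize --pure ggml-model-qwen2.5-3b-f16.gguf ggml-model-qwen-3b-Q4_0.gguf Q4_0

# Flash attention does not always help on this backend, so measure both ways:
# -fa 0,1 runs the benchmark once with flash attention off and once with it on.
./llama-bench -m ggml-model-qwen-3b-Q4_0.gguf -fa 0,1
```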