OpenCL compiling issue #1571

Closed
ghost opened this issue May 23, 2023 · 58 comments

ghost commented May 23, 2023

Hi, I'm trying to compile llama.cpp using my opencl drivers. My device is a Samsung s10+ with termux.

On downloading and attempting make with LLAMA_CLBLAST=1, I receive an error:

ggml-opencl.cpp:8:10: fatal error: 'clblast.h' file not found
#include <clblast.h>

I edited the ggml-opencl.cpp file, trying to point it at my OpenCL headers by replacing <clblast.h> with <ocl_icd.h> (my include path is /data/data/com.termux/files/usr/include).

Then with make LLAMA_CLBLAST=1 I received this:

I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  unknown
I UNAME_M:  aarch64
I CFLAGS:   -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -DGGML_USE_CLBLAST
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_CLBLAST
I LDFLAGS:  -lclblast -lOpenCL
I CC:       clang version 16.0.4
I CXX:      clang version 16.0.4                      
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
cc  -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -DGGML_USE_CLBLAST   -c ggml.c -o ggml.o
ggml.c:2154:5: warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf, sum);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:1696:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
    #define GGML_F16_VEC_REDUCE         GGML_F32Cx4_REDUCE
                                        ^
ggml.c:1686:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
    #define GGML_F32Cx4_REDUCE       GGML_F32x4_REDUCE
                                     ^
ggml.c:1619:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]);         \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:1607:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
ggml.c:3196:9: warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:1696:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
    #define GGML_F16_VEC_REDUCE         GGML_F32Cx4_REDUCE
                                        ^
ggml.c:1686:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
    #define GGML_F32Cx4_REDUCE       GGML_F32x4_REDUCE
                                     ^
ggml.c:1619:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]);         \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:1607:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
2 warnings generated.
aarch64-linux-android-clang++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_CLBLAST -c llama.cpp -o llama.o
aarch64-linux-android-clang++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_CLBLAST -c examples/common.cpp -o common.o
aarch64-linux-android-clang++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_CLBLAST -c ggml-opencl.cpp -o ggml-opencl.o
ggml-opencl.cpp:694:13: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
            ^
ggml-opencl.cpp:694:42: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                         ^
ggml-opencl.cpp:694:56: error: unexpected type name 'cl_float': expected expression
            clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                                       ^
ggml-opencl.cpp:694:66: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                                                 ^
ggml-opencl.cpp:695:56: error: use of undeclared identifier 'clblast'
                                                       clblast::Transpose::kYes, clblast::Transpose::kNo,
                                                       ^
ggml-opencl.cpp:695:82: error: use of undeclared identifier 'clblast'
                                                       clblast::Transpose::kYes, clblast::Transpose::kNo,
                                                                                 ^
ggml-opencl.cpp:704:27: error: use of undeclared identifier 'clblast'
            if (status != clblast::StatusCode::kSuccess) {
                          ^
ggml-opencl.cpp:798:13: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_half>(clblast::Layout::kColMajor,
            ^
ggml-opencl.cpp:798:42: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_half>(clblast::Layout::kColMajor,
                                         ^
ggml-opencl.cpp:798:56: error: unexpected type name 'cl_half': expected expression
            clblast::StatusCode status = clblast::Gemm<cl_half>(clblast::Layout::kColMajor,
                                                       ^
ggml-opencl.cpp:798:65: error: use of undeclared identifier 'clblast'
            clblast::StatusCode status = clblast::Gemm<cl_half>(clblast::Layout::kColMajor,
                                                                ^
ggml-opencl.cpp:799:56: error: use of undeclared identifier 'clblast'
                                                       clblast::Transpose::kYes, clblast::Transpose::kNo,
                                                       ^
ggml-opencl.cpp:799:82: error: use of undeclared identifier 'clblast'
                                                       clblast::Transpose::kYes, clblast::Transpose::kNo,
                                                                                 ^
ggml-opencl.cpp:808:27: error: use of undeclared identifier 'clblast'
            if (status != clblast::StatusCode::kSuccess) {
                          ^
ggml-opencl.cpp:910:17: error: use of undeclared identifier 'clblast'
                clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                ^
ggml-opencl.cpp:910:46: error: use of undeclared identifier 'clblast'
                clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                             ^
ggml-opencl.cpp:910:60: error: unexpected type name 'cl_float': expected expression
                clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                                           ^
ggml-opencl.cpp:910:70: error: use of undeclared identifier 'clblast'
                clblast::StatusCode status = clblast::Gemm<cl_float>(clblast::Layout::kColMajor,
                                                                     ^
ggml-opencl.cpp:911:60: error: use of undeclared identifier 'clblast'
                                                           clblast::Transpose::kYes, clblast::Transpose::kNo,
                                                           ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
make: *** [Makefile:150: ggml-opencl.o] Error 1

Current Behavior

It appears my libraries for opencl are not included and I don't know how to make llama.cpp recognize them during compilation.

clinfo:

Number of platforms                               1
  Platform Name                                   clvk
  Platform Vendor                                 clvk
  Platform Version                                OpenCL 3.0 clvk
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_khr_extended_versioning
  Platform Extensions with Version                cl_khr_icd                                                       0x400000 (1.0.0)
                                                  cl_khr_extended_versioning                                       0x400000 (1.0.0)
  Platform Numeric Version                        0xc00000 (3.0.0)
  Platform Extensions function suffix             clvk
  Platform Host timer resolution                  0ns

  Platform Name                                   clvk
Number of devices                                 1
  Device Name                                     Adreno (TM) 640
  Device Vendor                                   FIXME
  Device Vendor ID                                0x5143
  Device Version                                  OpenCL 3.0 CLVK on Vulkan v1.1.128 driver 2149539840
  Device UUID                                     43510000-0500-0000-8002-500014008002
  Driver UUID                                     02000000-0000-0000-0000-000000000000
  Valid Device LUID                               No
  Device LUID                                     0000-000000000000
  Device Node Mask                                0
  Device Numeric Version                          0xc00000 (3.0.0)
  Driver Version                                  3.0 CLVK on Vulkan v1.1.128 driver 2149539840
  Device OpenCL C Version                         OpenCL C 1.2 CLVK on Vulkan v1.1.128 driver 2149539840
  Device OpenCL C Numeric Version                 0x402000 (1.2.0)
  Device OpenCL C all versions                    OpenCL C                                                         0x400000 (1.0.0)
                                                  OpenCL C                                                         0x401000 (1.1.0)
                                                  OpenCL C                                                         0x402000 (1.2.0)
                                                  OpenCL C                                                         0xc00000 (3.0.0)
  Device OpenCL C features                        __opencl_c_images                                                0xc00000 (3.0.0)
                                                  __opencl_c_read_write_images                                     0xc00000 (3.0.0)
                                                  __opencl_c_3d_image_writes                                       0xc00000 (3.0.0)
                                                  __opencl_c_atomic_order_acq_rel                                  0xc00000 (3.0.0)
                                                  __opencl_c_atomic_scope_device                                   0xc00000 (3.0.0)
                                                  __opencl_c_subgroups                                             0xc00000 (3.0.0)
  Latest conformance test passed                  FIXME
  Device Type                                     GPU, Default
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               2
  Max clock frequency                             0MHz
  Device Partition                                (core)
    Max number of sub-devices                     0
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
  Preferred work group size multiple (device)     16
  Preferred work group size multiple (kernel)     16
  Max sub-groups per work group                   16
  Preferred / native vector sizes
    char                                                 1 / 1
    short                                                1 / 1
    int                                                  1 / 1
    long                                                 1 / 1
    half                                                 1 / 1        (n/a)
    float                                                1 / 1
    double                                               1 / 1        (n/a)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (n/a)
  Address bits                                    32, Little-Endian
  Global memory size                              2147483648 (2GiB)
  Error Correction support                        No
  Max memory allocation                           536870912 (512MiB)
  Unified memory for Host and Device              Yes
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 No
    Fine-grained buffer sharing                   No
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Preferred alignment for atomics
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Atomic memory capabilities                      relaxed, acquire/release, work-group scope, device scope
  Atomic fence capabilities                       relaxed, acquire/release, work-item scope, work-group scope, device scope
  Max size for global variable                    0
  Preferred total size of global vars             0
  Global Memory cache type                        None
  Image support                                   Yes
    Max number of samplers per kernel             20
    Max size for 1D images from buffer            16384 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   0 bytes
    Pitch alignment for 2D image buffers          0 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 524288
    Max number of write image args                524288
    Max number of read/write image args           524288
  Pipe support                                    No
  Max number of pipe args                         0
  Max active pipe reservations                    0
  Max pipe packet size                            0
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Max number of constant args                     8
  Max constant buffer size                        65536 (64KiB)
  Generic address space support                   No
  Max size of kernel argument                     1024
  Queue properties (on host)
    Out-of-order execution                        No
    Profiling                                     Yes
  Device enqueue capabilities                     (n/a)
  Queue properties (on device)
    Out-of-order execution                        No
    Profiling                                     No
    Preferred size                                0
    Max size                                      0
  Max queues on device                            0
  Max events on device                            0
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Non-uniform work-groups                       Yes
    Work-group collective functions               No
    Sub-group independent forward progress        No
    IL version                                    SPIR-V_1.0
    ILs with version                              SPIR-V                                                           0x400000 (1.0.0)
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Built-in kernels with version                   (n/a)
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_extended_versioning cl_khr_create_command_queue cl_khr_il_program cl_khr_spirv_no_integer_wrap_decoration cl_arm_non_uniform_work_group_size cl_khr_suggested_local_work_size cl_khr_3d_image_writes cl_khr_device_uuid
  Device Extensions with Version                  cl_khr_global_int32_base_atomics                                 0x400000 (1.0.0)
                                                  cl_khr_global_int32_extended_atomics                             0x400000 (1.0.0)
                                                  cl_khr_local_int32_base_atomics                                  0x400000 (1.0.0)
                                                  cl_khr_local_int32_extended_atomics                              0x400000 (1.0.0)
                                                  cl_khr_byte_addressable_store                                    0x400000 (1.0.0)
                                                  cl_khr_extended_versioning                                       0x400000 (1.0.0)
                                                  cl_khr_create_command_queue                                      0x400000 (1.0.0)
                                                  cl_khr_il_program                                                0x400000 (1.0.0)
                                                  cl_khr_spirv_no_integer_wrap_decoration                          0x400000 (1.0.0)
                                                  cl_arm_non_uniform_work_group_size                               0x400000 (1.0.0)
                                                  cl_khr_suggested_local_work_size                                 0x400000 (1.0.0)
                                                  cl_khr_3d_image_writes                                           0x400000 (1.0.0)
                                                  cl_khr_device_uuid                                               0x400000 (1.0.0)

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  clvk
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [clvk]
  clCreateContext(NULL, ...) [default]            Success [clvk]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 clvk
    Device Name                                   Adreno (TM) 640
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 clvk
    Device Name                                   Adreno (TM) 640
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 clvk
    Device Name                                   Adreno (TM) 640

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.3.1
  ICD loader Profile                              OpenCL 3.0

lscpu:


Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 8
  On-line CPU(s) list:  0-7
Vendor ID:              Qualcomm
  Model name:           Kryo-4XX-Silver
    Model:              14
    Thread(s) per core: 1
    Core(s) per socket: 4
    Socket(s):          1
    Stepping:           0xd
    CPU(s) scaling MHz: 62%
    CPU max MHz:        1785.6000
    CPU min MHz:        300.0000
    BogoMIPS:           38.40
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
  Model name:           Kryo-4XX-Gold
    Model:              14
    Thread(s) per core: 1
    Core(s) per socket: 2
    Socket(s):          2
    Stepping:           0xd
    CPU(s) scaling MHz: 71%
    CPU max MHz:        2841.6001
    CPU min MHz:        710.4000
    BogoMIPS:           38.40
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp

clpeak:

Platform: clvk
  Device: Adreno (TM) 640
    Driver version  : 3.0 CLVK on Vulkan v1.1.128 driver 2149539840 (Android)
    Compute units   : 2
    Clock frequency : 0 MHz

    Global memory bandwidth (GBPS)
      float   : 21.86
      float2  : 24.10
      float4  : 19.43
      float8  : 10.23
      float16 : 8.94

    Single-precision compute (GFLOPS)
      float   : 369.29
      float2  : 273.19
      float4  : 309.08
      float8  : 507.69
      float16 : 523.76

    No half precision support! Skipped
    No double precision support! Skipped

    Integer compute (GIOPS)
      int   : 109.64
      int2  : 71.84
      int4  : 139.36
      int8  : 80.51
      int16 : 78.88

    Integer compute Fast 24bit (GIOPS)
      int   : 108.55
      int2  : 71.70
      int4  : 139.01
      int8  : 80.41
      int16 : 77.72

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer              : 8.22
      enqueueReadBuffer               : 1.04
      enqueueWriteBuffer non-blocking : 8.67
      enqueueReadBuffer non-blocking  : 1.05
      enqueueMapBuffer(for read)      : 8992.81
        memcpy from mapped ptr        : 1.04
      enqueueUnmap(after write)       : 58355.54
        memcpy to mapped ptr          : 8.60

    Kernel launch latency : 27.10 us

Thanks for any direction on this matter.


SlyEcho commented May 23, 2023

Try using CMake instead; it is much better at finding libraries, and it can also be configured manually when it doesn't find something at first.
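
(For reference, a minimal sketch of the CMake flow being suggested; the build directory name is arbitrary:)

cmake -B build -DLLAMA_CLBLAST=ON
cmake --build build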

ekolawole commented:

How do I run it in the CLI with CMake?


ghost commented May 23, 2023

Try using CMake instead; it is much better at finding libraries, and it can also be configured manually when it doesn't find something at first.

Thanks for your response. It appears to have compiled, but now I can't run ./main; it says command not found.

Is there anyone that can assist me with compiling so that I can use ./main?


SlyEcho commented May 23, 2023

With CMake, main is in the bin subdirectory of the build directory.
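
(A sketch of running it from there; the model path and prompt are placeholders:)

cd build/bin
./main -m /path/to/model.bin -p "Hello"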


ghost commented May 23, 2023

With CMake, main is in the bin subdirectory of the build directory.

Lovely, thank you for the direction. I can run ./main from the bin subfolder.

It appears CLBlast does not have a system_info label like OpenBLAS does (llama.cpp shows BLAS=1 when compiled with OpenBLAS), so I'll test another way to see if my GPU is engaged.

To clarify, CLBlast is an alternative to OpenBLAS, is that right?

I assume I can't run both OpenBLAS and CLBlast at the same time, but maybe I'm missing something.


SlyEcho commented May 23, 2023

It seems like it was not compiled in, then. It should show which platform and device it uses on startup, and BLAS = 1 should also show. You need to turn on the LLAMA_CLBLAST option; you can do that on the command line with -DLLAMA_CLBLAST=ON when running CMake, by editing CMakeCache.txt, or by using a tool such as ccmake.


ghost commented May 23, 2023

Thanks again for the information.

I am trying to compile using cmake . -DLLAMA_CLBLAST=ON

CMake Warning at CMakeLists.txt:210 (message):
  CLBlast not found

Neither make nor cmake finds the library, so I'm still uncertain how to actually point llama.cpp to my libraries in /data/data/com.termux/files/usr/include/CL

Edit: to clarify, editing this line in CMakeCache.txt,

CLBlast_DIR:PATH=CLBlast_DIR-NOTFOUND

To

CLBlast_DIR:PATH=/data/data/com.termux/files/usr/include/CL

And then trying cmake . -DLLAMA_CLBLAST=ON gives me this:

CMake Warning at CMakeLists.txt:107 (message):
  Git repository not found; to enable automatic generation of build info,
  make sure Git is installed and the project is a Git repository.


CMake Warning at CMakeLists.txt:200 (find_package):
  By not providing "FindCLBlast.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "CLBlast", but
  CMake did not find one.

  Could not find a package configuration file provided by "CLBlast" with any
  of the following names:

    CLBlastConfig.cmake
    clblast-config.cmake

  Add the installation prefix of "CLBlast" to CMAKE_PREFIX_PATH or set
  "CLBlast_DIR" to a directory containing one of the above files.  If
  "CLBlast" provides a separate development package or SDK, be sure it has
  been installed.


CMake Warning at CMakeLists.txt:210 (message):
  CLBlast not found


-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Configuring done (0.1s)
-- Generating done (0.1s)
-- Build files have been written to: /data/data/com.termux/files/home/newllama


SlyEcho commented May 23, 2023

CLBlast_DIR is supposed to point at CLBlast's CMake files; on my system it is /usr/local/lib/cmake/CLBlast.

You can try to use CMAKE_PREFIX_PATH (environment variable):

cd build
rm -r *   # restart configuration just in case
CMAKE_PREFIX_PATH=/data/data/com.termux/files/usr cmake .. -DLLAMA_CLBLAST=ON

I don't really know how Termux works though.
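
(Alternatively, a sketch assuming CLBlast was installed under the Termux prefix, so its CMake package files would sit in lib/cmake/CLBlast there:)

cmake .. -DLLAMA_CLBLAST=ON \
  -DCLBlast_DIR=/data/data/com.termux/files/usr/lib/cmake/CLBlast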


ghost commented May 23, 2023

CLBlast_DIR is supposed to point at CLBlast's CMake files; on my system it is /usr/local/lib/cmake/CLBlast.

You can try to use CMAKE_PREFIX_PATH (environment variable):

cd build
rm -r *   # restart configuration just in case
CMAKE_PREFIX_PATH=/data/data/com.termux/files/usr cmake .. -DLLAMA_CLBLAST=ON

I don't really know how Termux works though.

I'll mess around with it tonight, and let you know how it goes tomorrow. Thanks for the cmake_prefix_path idea.


SlyEcho commented May 23, 2023

OK, got Termux running in Docker.

First install some packages:

pkg update
pkg upgrade
apt install clang cmake cmake-curses-gui opencl-headers ocl-icd

Install CLBlast:

cd
git clone https://github.com/CNugteren/CLBlast.git
cd CLBlast
cmake -B build \
  -DBUILD_SHARED_LIBS=OFF \
  -DTUNERS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/data/data/com.termux/files/usr
cd build
make -j8
make install

Build llama.cpp:

cd
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp/
cmake -B build -DLLAMA_CLBLAST=ON
cd build
make -j8


ghost commented May 24, 2023

This is fantastic, @SlyEcho. I genuinely appreciate it.

I'm stuck installing CLBlast. I run

cmake -B build \
  -DBUILD_SHARED_LIBS=OFF \
  -DTUNERS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_PREFIX_PATH=/data/data/com.termux/files/usr

And receive

CMake Deprecation Warning at CMakeLists.txt:12 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Building CLBlast with OpenCL API (default)
-- Configuring done (0.0s)
-- Generating done (0.0s)
-- Build files have been written to: /data/data/com.termux/files/home/CLBlast/build

I tried continuing with,

cd build
make -j8

And am stuck on:

[100%] Built target clblast
Install the project...
-- Install configuration: "Release"
CMake Error at cmake_install.cmake:46 (file):
  file cannot create directory: /usr/local/lib.  Maybe need administrative
  privileges.


make: *** [Makefile:100: install] Error 1

Now of course llama.cpp is saying CLBlast not found.

I'm confused as to what exactly is preventing make install from working.

It feels like we're very close though, so thanks again for coming this far!

Please let me know if there's anything I can do to force this make install.


SlyEcho commented May 24, 2023

CMake Error at cmake_install.cmake:46 (file):
 file cannot create directory: /usr/local/lib.  Maybe need administrative
 privileges.

It is trying to install into /usr/local/lib.

I made a mistake; it should be CMAKE_INSTALL_PREFIX.

You can reconfigure with:

cmake .. -DCMAKE_INSTALL_PREFIX=/data/data/com.termux/files/usr
make install

If that path is not allowed either you can install in some home folder and then point llama.cpp to it with the CMAKE_PREFIX_PATH environment variable.
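
(A sketch of that fallback, assuming $HOME/clblast as the install location:)

cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/clblast
make install
# then, when configuring llama.cpp:
CMAKE_PREFIX_PATH=$HOME/clblast cmake .. -DLLAMA_CLBLAST=ON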


ghost commented May 24, 2023

CMake Error at cmake_install.cmake:46 (file):
 file cannot create directory: /usr/local/lib.  Maybe need administrative
 privileges.

It is trying to install into /usr/local/lib.

I made a mistake; it should be CMAKE_INSTALL_PREFIX.

You can reconfigure with:

cmake .. -DCMAKE_INSTALL_PREFIX=/data/data/com.termux/files/usr
make install

If that path is not allowed either you can install in some home folder and then point llama.cpp to it with the CMAKE_PREFIX_PATH environment variable.

Thank you. This worked for me. I'm saving these posts for myself to learn from. llama.cpp found CLBlast, and I'm able to build it.

Now, I'm getting an error running ./main, so I might reference it in a new issue, but here's the message:

source:1:2115: warning: implicit conversion from 'const __private int32_t' (aka 'const __private int') to 'float' may lose precision                              typedef char int8_t; typedef uchar uint8_t; typedef int int32_t; typedef uint uint32_t; struct __attribute__ ((packed)) block_q4_0 { half d; uint8_t qs[QK4_0 / 2]; }; struct __attribute__ ((packed)) block_q4_1 { half d; half m; uint8_t qs[QK4_1 / 2]; }; struct __attribute__ ((packed)) block_q5_0 { half d; uint32_t qh; uint8_t qs[QK5_0 / 2]; }; struct __attribute__ ((packed)) block_q5_1 { half d; half m; uint32_t qh; uint8_t qs[QK5_1 / 2]; }; struct __attribute__ ((packed)) block_q8_0 { half d; int8_t qs[QK8_0]; }; __kernel void convert_fp16_to_fp32(__global half* x, __global float* y) { const uint i = get_global_id(0); y[i] = vload_half(0, &x[i]); } void dequantize_q4_0(__global const struct block_q4_0* x, const int ib, const int iqs, float* v0, float* v1) { const float d = vload_half(0, &x[ib].d); const uint8_t vui = x[ib].qs[iqs]; const int8_t vi0 = vui & 0xF; const int8_t vi1 = vui >> 4; *v0 = (vi0 - 8)*d; *v1 = (vi1 - 8)*d; } void dequantize_q4_1(__global const struct block_q4_1* x, const int ib, const int iqs, float* v0, float* v1) { const float d = vload_half(0, &x[ib].d); const float m = vload_half(0, &x[ib].m); const uint8_t vui = x[ib].qs[iqs]; const int8_t vi0 = vui & 0xF; const int8_t vi1 = vui >> 4; *v0 = vi0*d + m; *v1 = vi1*d + m; } void dequantize_q5_0(__global const struct block_q5_0* x, const int ib, const int iqs, float* v0, float* v1) { const float d = vload_half(0, &x[ib].d); uint32_t qh = x[ib].qh; const uint8_t xh_0 = ((qh >> (iqs + 0)) << 4) & 0x10; const uint8_t xh_1 = ((qh >> (iqs + 12)) ) & 0x10; const int32_t x0 = ((x[ib].qs[iqs] & 0xf) | xh_0) - 16; const int32_t x1 = ((x[ib].qs[iqs] >> 4) | xh_1) - 16; *v0 = x0*d; *v1 = x1*d; } void dequantize_q5_1(__global const struct block_q5_1* x, const int ib, const int iqs, float* v0, float* v1) { const float d = vload_half(0, &x[ib].d); const float m = vload_half(0, &x[ib].m); uint32_t qh = x[ib].qh; const uint8_t xh_0 = ((qh >> (iqs + 0)) << 4) & 0x10; const uint8_t xh_1 = ((qh >> (iqs + 12)) ) & 0x10; const int32_t x0 = ((x[ib].qs[iqs] & 0xf) | xh_0); const int32_t x1 = ((x[ib].qs[iqs] >> 4) | xh_1); *v0 = x0*d + m; *v1 = x1*d + m; } void dequantize_q8_0(__global const struct block_q8_0* x, const int ib, const int iqs, float* v0, float* v1) { const float d = vload_half(0, &x[ib].d); const int8_t vi0 = x[ib].qs[iqs + 0]; const int8_t vi1 = x[ib].qs[iqs + 1]; *v0 = vi0*d; *v1 = vi1*d; } void convert_f16(__global half* x, const int ib, const int iqs, float* v0, float* v1){ *v0 = vload_half(0, &x[ib + 0]); *v1 = vload_half(0, &x[ib + 1]); }                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                

To clarify, I am able to build and run llama.cpp using CMake, but with CLBlast enabled I'm getting this huge error in ./main.

Again, thanks to you for helping me get it compiled at all!


SlyEcho commented May 24, 2023

The messages are impossible to read because the CL program doesn't have line breaks, but here are the errors:

source:2:35: error: 16-bit storage is not supported for SSBOs
source:2:35: error: 8-bit storage is not supported for SSBOs
source:3:43: error: 16-bit storage is not supported for SSBOs
source:3:43: error: 8-bit storage is not supported for SSBOs
source:4:35: error: 16-bit storage is not supported for SSBOs
source:4:35: error: 8-bit storage is not supported for SSBOs
source:5:43: error: 16-bit storage is not supported for SSBOs
source:5:43: error: 8-bit storage is not supported for SSBOs
source:6:35: error: scalar elements must be aligned to their size

It just seems like this device doesn't support llama.cpp; maybe it only works with float32 numbers?

@0cc4m, what do you think?


ghost commented May 24, 2023

The messages are impossible to read because the CL program doesn't have line breaks, but here are the errors:

source:2:35: error: 16-bit storage is not supported for SSBOs
source:2:35: error: 8-bit storage is not supported for SSBOs
source:3:43: error: 16-bit storage is not supported for SSBOs
source:3:43: error: 8-bit storage is not supported for SSBOs
source:4:35: error: 16-bit storage is not supported for SSBOs
source:4:35: error: 8-bit storage is not supported for SSBOs
source:5:43: error: 16-bit storage is not supported for SSBOs
source:5:43: error: 8-bit storage is not supported for SSBOs
source:6:35: error: scalar elements must be aligned to their size

It just seems like this device doesn't support llama.cpp; maybe it only works with float32 numbers?

@0cc4m, what do you think?

Thanks for cleaning the error message.

I'm confused because I use llama.cpp every day, so it's definitely supported. OpenBLAS works as expected. Perhaps it's just CLBlast that isn't supported? Which is still odd, because running clpeak shows:

clpeak

    Driver version  : 3.0 CLVK on Vulkan v1.1.128 driver 2149539840 (Android)
    Compute units   : 2
    Clock frequency : 0 MHz

    Global memory bandwidth (GBPS)
      float   : 21.86
      float2  : 24.10
      float4  : 19.43
      float8  : 10.23
      float16 : 8.94

    Single-precision compute (GFLOPS)
      float   : 369.29
      float2  : 273.19
      float4  : 309.08
      float8  : 507.69
      float16 : 523.76

    No half precision support! Skipped
    No double precision support! Skipped

    Integer compute (GIOPS)
      int   : 109.64
      int2  : 71.84
      int4  : 139.36
      int8  : 80.51
      int16 : 78.88

    Integer compute Fast 24bit (GIOPS)
      int   : 108.55
      int2  : 71.70
      int4  : 139.01
      int8  : 80.41
      int16 : 77.72

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer              : 8.22
      enqueueReadBuffer               : 1.04
      enqueueWriteBuffer non-blocking : 8.67
      enqueueReadBuffer non-blocking  : 1.05
      enqueueMapBuffer(for read)      : 8992.81
        memcpy from mapped ptr        : 1.04
      enqueueUnmap(after write)       : 58355.54
        memcpy to mapped ptr          : 8.60

    Kernel launch latency : 27.10 us


SlyEcho commented May 24, 2023

I'm confused because I use llama.cpp every day, so it's definitely supported. OpenBLAS works as expected. Perhaps it's just CLBlast that isn't supported? Which is still odd, because running clpeak shows:

OpenBLAS runs on the CPU. OpenCL runs on the GPU.

No half precision support! Skipped 

That's it. llama.cpp uses half in all the quantized formats and in other internal computations, too.
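
(A quick way to check for half support with the tools already used in this thread is to grep clinfo for the cl_khr_fp16 extension; empty output means the device does not advertise it:)

clinfo | grep cl_khr_fp16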


ghost commented May 24, 2023

I'm confused because I use llama.cpp every day, so it's definitely supported. OpenBLAS works as expected. Perhaps it's just CLBlast that isn't supported? Which is still odd, because running clpeak shows:

OpenBLAS runs on the CPU. OpenCL runs on the GPU.

No half precision support! Skipped 

That's it. llama.cpp uses half in all the quantized formats and in other internal computations, too.

I understand now: my device is currently incompatible via OpenCL. That sucks, but I'm happy to know for sure.

:)

Edit: sincerely! I would've spent weeks trying to figure that out by myself, so learning it can't be done in 24 hours helps me a lot.

ghost closed this as completed May 24, 2023

0cc4m commented May 24, 2023

@SlyEcho @JackJollimore Half precision support isn't required. Otherwise no Nvidia GPU would work at all.

ghost reopened this May 24, 2023

ghost commented May 24, 2023

Thank you for clarifying.


0cc4m commented May 24, 2023

Kinda interesting you use clvk. Did you install that yourself or does it come with the phone?


ghost commented May 24, 2023

Kinda interesting you use clvk. Did you install that yourself or does it come with the phone?

It's a package that's available through the Termux repository, and my device has a Vulkan-capable GPU, so I installed it.

Should I try again without it?

Edit: trying without it:

I uninstalled clvk, then rebuilt using SlyEcho's instructions and CNugteren/CLBlast.git

Here's my clinfo after removing clvk:

clinfo
Number of platforms                               0

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.3.1
  ICD loader Profile                              OpenCL 3.0

Clpeak:

clpeak
clGetPlatformIDs (-1001)
no platforms found

And then of course ./main

main: build = 0 (unknown)
main: seed  = 1684942069
ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at /data/data/com.termux/files/home/nllama/ggml-opencl.cpp:344
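
(To restore the OpenCL platform, reinstalling the driver should be the way back; a sketch, assuming the Termux package is simply named clvk, as the earlier comment suggests:)

pkg install clvk
clinfo   # should list the clvk platform again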


SlyEcho commented May 24, 2023

I'm gonna see if I can get this clvk working on my machine.


SlyEcho commented May 24, 2023

They are using something called Clspv to compile CL kernels to Vulkan SPIR-V. This is what it supports: OpenCL C 1.2 Language on Vulkan


SlyEcho commented May 24, 2023

It's very experimental; I didn't get it working on my desktop GPU with llama.cpp, some kind of LLVM error.


ghost commented May 24, 2023

It's very experimental; I didn't get it working on my desktop GPU with llama.cpp, some kind of LLVM error.

I wouldn't even know where to begin with such a thing, but if there's anything I can do to try, please let me know.

I can still run llama.cpp without it, so for me: any progress in this direction is a bonus.


SlyEcho commented May 24, 2023

I don't think it's going to work with this CL driver for a long time; it's experimental.

Maybe when we get a Vulkan version or a WebGPU version, we can run on more devices.


0cc4m commented May 24, 2023

They do claim CLBlast support. Maybe clvk is a way for Nvidia GPUs to run FP16 on OpenCL.


SlyEcho commented May 24, 2023

It's possible it may work with just CLBlast as it was in the earlier commits, when the CPU dequantized and converted to float before the matrix multiplication. But we now do a lot more, with some of that code running on the GPU.
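
(A sketch for testing one of those earlier commits; the hashes are the ones tried in the next comment:)

git checkout 7296c96   # or fb62f92
cmake -B build -DLLAMA_CLBLAST=ON
cd build && make -j8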


ghost commented May 24, 2023

The earliest one is this: #1164

Here's make -j8 for 7296c96:

[  7%] Building C object CMakeFiles/ggml.dir/ggml-opencl.c.o
[  7%] Building C object CMakeFiles/ggml.dir/ggml.c.o
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:42:9: error: call to undeclared library function 'exit' with type 'void (int) __attribute__((noreturn))'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
        exit(1);
        ^
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:42:9: note: include the header <stdlib.h> or explicitly provide a declaration for 'exit'
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:49:31: error: call to undeclared library function 'malloc' with type 'void *(unsigned long)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
        program_log = (char*) malloc(log_size + 1);
                              ^
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:49:31: note: include the header <stdlib.h> or explicitly provide a declaration for 'malloc'
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:62:36: error: call to undeclared function 'getenv'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    char * GGML_CLBLAST_PLATFORM = getenv("GGML_CLBLAST_PLATFORM");
                                   ^
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:62:12: error: incompatible integer to pointer conversion initializing 'char *' with an expression of type 'int' [-Wint-conversion]
    char * GGML_CLBLAST_PLATFORM = getenv("GGML_CLBLAST_PLATFORM");
           ^                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:63:12: error: incompatible integer to pointer conversion initializing 'char *' with an expression of type 'int' [-Wint-conversion]
    char * GGML_CLBLAST_DEVICE = getenv("GGML_CLBLAST_DEVICE");
           ^                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:161:9: error: call to undeclared library function 'abort' with type 'void (void) __attribute__((noreturn))'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
        abort();
        ^
/data/data/com.termux/files/home/ttllama/ggml-opencl.c:161:9: note: include the header <stdlib.h> or explicitly provide a declaration for 'abort'
6 errors generated.
make[2]: *** [CMakeFiles/ggml.dir/build.make:90: CMakeFiles/ggml.dir/ggml-opencl.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/data/data/com.termux/files/home/ttllama/ggml.c:3833:20: warning: unused function 'ggml_vec_silu_f16' [-Wunused-function]
inline static void ggml_vec_silu_f16(const int n, ggml_fp16_t * y, const ggml_fp16_t * x) {
                   ^
/data/data/com.termux/files/home/ttllama/ggml.c:4303:19: warning: unused function 'ggml_up64' [-Wunused-function]
static inline int ggml_up64(int n) {
                  ^
2 warnings generated.
make[1]: *** [CMakeFiles/Makefile2:294: CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:101: all] Error 2

I also tried fb62f92, which successfully built with CLBlast, but then ./main gave:

main: build = 0 (unknown)
main: seed  = 1684965335
llama.cpp: loading model from /data/data/com.termux/files/home/llama.cpp/models/Wizard-Vicuna-7B-Uncensored.ggmlv2.q4_0.bin
llama_model_load_internal: format     = ggjt v2 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =  68.20 KB
llama_model_load_internal: mem required  = 5809.33 MB (+ 1026.00 MB per state)

Initializing CLBlast (First Run)...
Attempting to use: Platform=0, Device=0 (If invalid, program will crash)
Using Platform: clvk Device: Adreno (TM) 640
source:1:81: error: 8-bit storage is not supported for SSBOs
source:1:477: error: 8-bit storage is not supported for SSBOs
source:1:1185: warning: implicit conversion loses integer precision: 'uint' (aka 'unsigned int') to 'uchar' (aka 'unsigned char')
source:1:1256: warning: implicit conversion loses integer precision: 'uint' (aka 'unsigned int') to 'uchar' (aka 'unsigned char')

Each diagnostic points into the same single-line kernel source:

struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  ~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
source:1:902: error: 8-bit storage is not supported for SSBOs
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     ^
source:1:1869: warning: implicit conversion loses integer precision: 'uint' (aka 'unsigned int') to 'uchar' (aka 'unsigned char')
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       ~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
source:1:1940: warning: implicit conversion loses integer precision: 'uint' (aka 'unsigned int') to 'uchar' (aka 'unsigned char')
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              ~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
source:1:1491: error: 16-bit storage is not supported for SSBOs
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  ^
source:1:1491: error: 8-bit storage is not supported for SSBOs
source:1:2331: warning: no newline at end of file
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          ^
source:1:2148: error: 8-bit storage is not supported for SSBOs
struct block_q4_0 { float d; uchar qs[16]; }; __kernel void dequantize_row_q4_0(__global struct block_q4_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = ((vi & 0xf) - 8)*d; result[index + 1] = ((vi >> 4) - 8)*d; } struct block_q4_1 { float d; float m; uchar qs[16]; }; __kernel void dequantize_row_q4_1(__global struct block_q4_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const float m = blocks[i].m; const uchar vi = blocks[i].qs[l]; const uint index = i*32 + l*2; result[index + 0] = (vi & 0xf) * d + m; result[index + 1] = (vi >> 4) * d + m; } struct block_q5_0 { float d; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_0(__global struct block_q5_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = blocks[i].d; const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = (((vi & 0xf) | vh0) - 16)*d; result[index + 1] = (((vi >> 4) | vh1) - 16)*d; } struct block_q5_1 { ushort d; ushort m; uint qh; uchar qs[16]; }; __kernel void dequantize_row_q5_1(__global struct block_q5_1* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); const float d = vload_half(0, (__global half*) &blocks[i].d); const float m = vload_half(0, (__global half*) &blocks[i].m); const uchar vi = blocks[i].qs[l]; const uint l2 = l * 2; const uchar vh0 = ((blocks[i].qh & (1 << (l2 + 0))) >> (l2 + 0)) << 4; const uchar vh1 = ((blocks[i].qh & (1 << (l2 + 1))) >> (l2 + 1)) << 4; const uint index = i*32 + l2; result[index + 0] = ((vi & 0xf) | vh0)*d + m; result[index + 1] = ((vi >> 4) | vh1)*d + m; } struct block_q8_0 { float d; char qs[32]; }; __kernel void dequantize_row_q8_0(__global struct block_q8_0* blocks, __global float* result) { const uint i = get_global_id(0) / 32; const uint l = get_local_id(0); result[i*32 + l] = blocks[i].qs[l] * blocks[i].d; }
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   ^

@SlyEcho
Sponsor Collaborator

SlyEcho commented May 24, 2023

It's the same errors again. It looks like q5_0, q5_1, and q8_0 are not supported for some reason. Maybe if you remove that part, it could work?

@ghost
Author

ghost commented May 24, 2023

It's the same errors again. It looks like q5_0, q5_1, and q8_0 are not supported for some reason. Maybe if you remove that part, it could work?

Thanks for your response.

I'm trying to understand, but I'm not that savvy. I'm fine with removing parts to test and see if we can get this to function, but I need more specific directions as to what I need to do.

I didn't know it was possible to remove q5_0, q5_1 and q8_0 from the build.

Edit;

OpenCL is installed, and llama.cpp now compiles with CLBlast, though the resulting build is incompatible with my device.

@ghost ghost closed this as completed May 26, 2023
@0cc4m
Collaborator

0cc4m commented May 28, 2023

@JackJollimore Have you checked if your phone has native OpenCL support? I know mine does, you just have to compile clinfo and other OpenCL tools manually instead of using the termux packages.
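(For reference, a minimal sketch of building clinfo manually, assuming the usual upstream repo and that its plain Makefile works under Termux:)

git clone https://github.com/Oblomov/clinfo
cd clinfo
make
./clinfo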

@ghost
Author

ghost commented May 28, 2023

@JackJollimore Have you checked if your phone has native OpenCL support? I know mine does, you just have to compile clinfo and other OpenCL tools manually instead of using the termux packages.

It never occurred to me to try it like that, so I'll try it and let you know how it goes.

@ghost
Author

ghost commented May 28, 2023

@JackJollimore Have you checked if your phone has native OpenCL support? I know mine does, you just have to compile clinfo and other OpenCL tools manually instead of using the termux packages.

Thanks again for that. My device does natively support OpenCL.

I manually built clinfo, and here's the details:

./clinfo        
Number of platforms                               1
  Platform Name                                   QUALCOMM Snapdragon(TM)
  Platform Vendor                                 QUALCOMM
  Platform Version                                OpenCL 2.0 QUALCOMM build: commit #3dad7f8ed7 changeid #I593c16c433 Date: 10/01/21 Fri Local Branch:  Remote Branch: refs/tags/AU_LINUX_ANDROID_LA.UM.9.1.R1.11.00.00.604.073
  Platform Profile                                FULL_PROFILE
  Platform Extensions

  Platform Name                                   QUALCOMM Snapdragon(TM)
Number of devices                                 1
  Device Name                                     QUALCOMM Adreno(TM)
  Device Vendor                                   QUALCOMM
  Device Vendor ID                                0x5143
  Device Version                                  OpenCL 2.0 Adreno(TM) 640
  Driver Version                                  OpenCL 2.0 QUALCOMM build: commit #3dad7f8ed7 changeid #I593c16c433 Date: 10/01/21 Fri Local Branch:  Remote Branch: refs/tags/AU_LINUX_ANDROID_LA.UM.9.1.R1.11.00.00.604.073 Compiler E031.37.12.01
  Device OpenCL C Version                         OpenCL C 2.0 Adreno(TM) 640
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               2
  Max clock frequency                             1MHz
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             1024
  Preferred work group size multiple (kernel)     128
  Preferred / native vector sizes
    char                                          1 / 1
    short                                         1 / 1
    int                                           1 / 1
    long                                          1 / 0
    half                                          1 / 1        (cl_khr_fp16)
    float                                         1 / 1
    double                                        0 / 0        (n/a)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (n/a)
  Address bits                                    64, Little-Endian
  Global memory size                              3911956480 (3.643GiB)
  Error Correction support                        No
  Max memory allocation                           977989120 (932.7MiB)
  Unified memory for Host and Device              Yes
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Page size (QCOM)                                4096 bytes
  External memory padding (QCOM)                  0 bytes
  Preferred alignment for atomics
    SVM                                           128 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    65536 (64KiB)
  Preferred total size of global vars             1048576 (1024KiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        131072 (128KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   64 bytes
    Pitch alignment for 2D image buffers          64 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             16384x16384x2048 pixels
    Max number of read image args                 128
    Max number of write image args                64
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    7680
  Max pipe packet size                            1024
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Max number of constant args                     8
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Queue properties (on device)
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                655376 (640KiB)
    Max size                                      655376 (640KiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_3d_image_writes cl_img_egl_image cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_egl_event cl_khr_egl_image cl_khr_fp16 cl_khr_gl_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_image2d_from_buffer cl_khr_mipmap_image cl_khr_srgb_image_writes cl_khr_subgroups cl_qcom_create_buffer_from_image cl_qcom_ext_host_ptr cl_qcom_ion_host_ptr cl_qcom_perf_hint cl_qcom_other_image cl_qcom_subgroup_shuffle cl_qcom_vector_image_ops cl_qcom_extract_image_plane cl_qcom_android_native_buffer_host_ptr cl_qcom_protected_context cl_qcom_priority_hint cl_qcom_compressed_yuv_image_read cl_qcom_compressed_image cl_qcom_ext_host_ptr_iocoherent cl_qcom_accelerated_image_ops cl_qcom_ml_ops

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [P0]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 QUALCOMM Snapdragon(TM)
    Device Name                                   QUALCOMM Adreno(TM)

I'm trying to run llama.cpp that's compiled with CLBlast enabled, and here's the error from ./main:

main: build = 0 (unknown)
main: seed  = 1685291154
ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at /data/data/com.termux/files/home/clllama/ggml-opencl.cpp:344

Some kind of error obtaining platform? I dunno what it's trying to say.
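(For what it's worth, error -1001 is CL_PLATFORM_NOT_FOUND_KHR from the ICD loader, meaning it could not find any usable OpenCL platform. A quick check, assuming the usual Android vendor path:)

# is the vendor OpenCL driver where Android usually keeps it?
ls -l /vendor/lib64/libOpenCL.so
# make it visible to the dynamic linker, then retry
LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH ./main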

@0cc4m
Collaborator

0cc4m commented May 28, 2023

Did you compile CLBlast manually as well? I remember some trouble linking it all together, but it did work in the end.

@ghost
Author

ghost commented May 28, 2023

Did you compile CLBlast manually as well? I remember some trouble linking it all together, but it did work in the end.

Yes, I compiled CLBlast manually. I restarted the process because I had some other package from termux installed too (ocl-icd). Now I can't compile llama.cpp:

[  6%] Built target ggml
[ 12%] Built target llama
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/test-tokenizer-0'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1194: tests/CMakeFiles/test-tokenizer-0.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/quantize'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1276: examples/quantize/CMakeFiles/quantize.dir/all] Error 2
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/test-quantize-fns'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1113: tests/CMakeFiles/test-quantize-fns.dir/all] Error 2
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/test-quantize-perf'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1140: tests/CMakeFiles/test-quantize-perf.dir/all] Error 2
[ 15%] Built target common
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/test-sampling'.  Stop.
make[2]: *** No rule to make target '/data/data/com.termux/files/usr/lib/libOpenCL.so', needed by 'bin/quantize-stats'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1167: tests/CMakeFiles/test-sampling.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:1303: examples/quantize-stats/CMakeFiles/quantize-stats.dir/all] Error 2
make: *** [Makefile:101: all] Error 2

To provide more context: when I use my file manager and view /system/vendor/lib64, libOpenCL.so is there.

In termux: I navigate to system/vendor/lib64 and libOpenCL.so isn't there.

It looks like llama.cpp is looking in some other place (/data/data/com.termux/files/usr/lib/ instead of /system/vendor/lib64) for libOpenCL.so

I tried (and failed) to link llama.cpp with export LD_LIBRARY_PATH=/system/vendor/lib64:$LD_LIBRARY_PATH

But I have no idea what I'm doing.
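One thing that might be worth trying (a sketch, untested on this device: OpenCL_LIBRARY and OpenCL_INCLUDE_DIR are CMake's standard FindOpenCL cache variables, and the paths here are guesses) is pointing CMake directly at the vendor library:

cmake -B build -DLLAMA_CLBLAST=ON \
  -DOpenCL_LIBRARY=/system/vendor/lib64/libOpenCL.so \
  -DOpenCL_INCLUDE_DIR=$PREFIX/include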

@SlyEcho
Sponsor Collaborator

SlyEcho commented May 28, 2023

Just delete the build directories in the CLBlast and llama.cpp sources and redo all the CMake stuff again.

Or maybe you can open the CMakeCache.txt and find and fix the paths there.

@ghost
Author

ghost commented May 28, 2023

Just delete the build directories in the CLBlast and llama.cpp sources and redo all the CMake stuff again.

Or maybe you can open the CMakeCache.txt and find and fix the paths there.

Okay, I'll try these options. Somehow I delinked my cmake compiler (again), so I'll try to sort this out and let you know how it goes tomorrow.

Edit: I realized I manually installed OpenCL-Headers instead of CLBlast, so I corrected my error, but CLBlast can't find the OpenCL library without ocl-icd installed... so I have to use apt install ocl-icd

(I tried building it manually, but there's no CMakeLists.txt or Makefile: https://github.com/OCL-dev/ocl-icd)

Once ocl-icd is installed, it allows me to build CLBlast, which allows me to make llama.cpp with CLBlast enabled, but then I get the same error when running main:

main: build = 0 (unknown)
main: seed  = 1685298673
ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at /data/data/com.termux/files/home/clllama/ggml-opencl.cpp:344

I'm thinking termux can't access system/vendor/lib64 properly.

I'll try editing the CMakeCache.txt file later this evening.

@SlyEcho
Sponsor Collaborator

SlyEcho commented May 28, 2023

It is an ICD loader, which means CLBlast, llama.cpp, or any other program that uses OpenCL is actually using the loader.
The loader is configured to search the installed platforms and devices, and then, for whichever one the application wants to use, it loads the actual driver.

I don't know how it works on your phone, but here on GNU/Linux there are .icd files in /etc/OpenCL/vendors, one for each platform. The contents of each file is just a library file path.
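For illustration, it might look like this (hypothetical file and library names; the exact entries vary by driver):

$ ls /etc/OpenCL/vendors
nvidia.icd
$ cat /etc/OpenCL/vendors/nvidia.icd
libnvidia-opencl.so.1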

@ghost
Author

ghost commented May 29, 2023

It is an ICD loader, which means CLBlast, llama.cpp, or any other program that uses OpenCL is actually using the loader. The loader is configured to search the installed platforms and devices, and then, for whichever one the application wants to use, it loads the actual driver.

I don't know how it works on your phone, but here on GNU/Linux there are .icd files in /etc/OpenCL/vendors, one for each platform. The contents of each file is just a library file path.

Thanks for clarifying. Double-checking ocl-icd: setting it up requires root permission, which I don't have. So ocl-icd can't enable native OpenCL for me.

I have no way of pointing it at /system/vendor/lib64.

I dunno how clinfo is even able to access the OpenCL information. Likewise, CMakeCache.txt is unclear, as there's no direct path to libOpenCL.so, which is required for building CLBlast and llama.cpp.

Anyway, thanks for trying, but I don't see a simple way of making this work. Ultimately, I'd hoped to help others get this working on their devices, but the average person isn't going to do all of this.

@ghost ghost reopened this May 29, 2023
@ghost ghost closed this as completed May 29, 2023
@ghost
Author

ghost commented Jun 6, 2023

Following up with the resolution; thanks again @SlyEcho, @0cc4m

Beginning with a fresh install of Termux, install opencl-headers, opencl-clhpp, ocl-icd, clinfo.

Following @SlyEcho's instructions for building CLBlast:

cd
git clone https://github.com/CNugteren/CLBlast.git
cd CLBlast
cmake -B build \
  -DBUILD_SHARED_LIBS=OFF \
  -DTUNERS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/data/data/com.termux/files/usr
cd build
make -j8
make install

Build llama.cpp with CLBlast enabled through cmake:

cd
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp/
cmake -B build -DLLAMA_CLBLAST=ON
cd build
make -j8

Then Termux users can start ./main with:

LD_LIBRARY_PATH=/vendor/lib64:$PREFIX/lib ./main

In this way, Termux enables GPU acceleration for llama.cpp.
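For example, a full run might look like this (the model path and -ngl value are placeholders; adjust them for your device and model):

LD_LIBRARY_PATH=/vendor/lib64:$PREFIX/lib ./main \
  -m ~/models/7B/ggml-model-q4_0.bin -t 3 -ngl 16 -p "Hello"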

@ghost
Author

ghost commented Jun 8, 2023

Hi there,

To clarify: you ran pkg install clang, cmake, ocl-icd, opencl-headers, opencl-clhpp, yes?

Ensuring OpenCL and CLBlast are properly installed and linked is key. Based on the error message, it appears that you did not cd CLBlast after cloning the repo. I'd do the following, in order (ensure you start in the home directory with cd $HOME):

git clone https://github.com/CNugteren/CLBlast.git

Then

cd CLBlast

then

cmake -B build \
  -DBUILD_SHARED_LIBS=OFF \
  -DTUNERS=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/data/data/com.termux/files/usr

then

cd build

then

make -j8

Finally,

make install

There might be a warning about CMake deprecation, but as far as I've seen, any other warning is unacceptable and probably means there's a linking/pathing error. I had to begin with a totally fresh install of Termux because old paths were messing up the CLBlast installation.

I will share the build folder, but it may not be compatible with your device:
build.zip

My device has a Vulkan backend, which isn't officially supported yet, so CLBlast has the lower performance comparatively: OpenBLAS times around 250 ms per token, and CLBlast around 350 ms.

@SlyEcho
Sponsor Collaborator

SlyEcho commented Jun 8, 2023

CANNOT LINK EXECUTABLE "/data/data/com.termux/files/usr/libexec/git-core/git-remote-https":
   library "libssl.so.1.1" not found: needed by /data/data/com.termux/files/usr/lib/libssh2.so in namespace (default)

git is not installed correctly.

You could download the tarball from GitHub... but I would make sure that the dev environment is first set up correctly. As I don't use termux, I can't help you much, but probably you need the openssl-1.1 package.

@ghost
Author

ghost commented Jun 8, 2023

Thank you, I'll try re-installing termux; both git and cmake are not working due to missing libraries. I wonder if my problem is I got termux from f-droid.

F-Droid is good for me. The Play Store version is deprecated, not maintained, and lacks features.

Here's my F-Droid Termux setup (run each command separately):

termux-setup-storage

termux-setup-storage (I actually run this twice due to the way Termux and Android permissions work)

apt update && apt upgrade

termux-change-repo

pkg install git clang cmake make opencl-headers opencl-clhpp ocl-icd ncurses clinfo

Before installing CLBlast and llama.cpp: test OpenCL with clinfo. My Termux setup requires clinfo to access my OpenCL library like this:

LD_LIBRARY_PATH=/vendor/lib64 clinfo

@aseok

aseok commented Jun 16, 2023

How much will OpenCL benefit inference speed, in tokens per second?

@SlyEcho
Sponsor Collaborator

SlyEcho commented Jun 16, 2023

How much will OpenCL benefit inference speed, in tokens per second?

On a phone or iGPU? Probably not much. People have posted a lot of their testing in the issues here; usually the CPU is faster. It really is limited by the sheer size of the models, and even if the memory access could be improved for shared memory, it is still a lot of computation.

On dedicated GPUs? It can get pretty close to CUDA/ROCm speed when generating tokens. Prompt evaluation is still slower because CLBlast is not as fast as the vendor BLAS routines.

But it also depends a lot on the GPU vendor (AMD, Nvidia), GPU age (GTX, RTX), video RAM size (you need an 8 GB card for 7B Q4_0), VRAM type (GDDR, HBM), and the OS and OpenCL driver (vendor, Mesa Clover, Mesa rusticl, clvk, etc.) that you have and can use.

@ghost
Author

ghost commented Jun 16, 2023

and even if the memory access could be improved for shared memory, it is still a lot of computation

Certainly a phone GPU is limited, but it comes down to effectively synchronizing the CPU/GPU, is that right?

I checked into the LLVM issue you had and found two similarly named but different things:

  1. A compiler toolchain for clang.
  2. LLVMpipe, a software rasterizer that uses LLVM to do runtime code generation.

For my device, some apps allow LLVM software rendering, or Turnip+Zink.

For example, Alexvorxx's drivers increase performance over LLVMpipe,

and the Freedreno open-source Gallium3D driver advertises OpenGL 4.6 for A6xx-series graphics.

Is OpenGL relevant to the way llama.cpp functions now? I haven't seen any mention of it.

@SlyEcho
Sponsor Collaborator

SlyEcho commented Jun 16, 2023

Right now there is no way to use OpenGL or Vulkan in llama.cpp.

@ghost
Author

ghost commented Jun 17, 2023

Right now there is no way to use OpenGL or Vulkan in llama.cpp.

Understood. Thank you!

@aseok

aseok commented Jun 17, 2023

What is the theoretical performance achievable on a state-of-the-art mobile SoC like the Exynos 2200 or Snapdragon 8 Gen, utilizing all resources (i.e. CPU, GPU, DSP), assuming sufficient DDR5 memory is available? ~1.5 t/s is currently reported on a Poco F3 or S22; is a 4x speedup possible for a 7B model?

@ghost
Author

ghost commented Jun 17, 2023

What is the theoretical performance achievable on a state-of-the-art mobile SoC like the Exynos 2200 or Snapdragon 8 Gen (assuming sufficient DDR5 memory is available)? ~1.5 t/s is currently reported on a Poco F3 or S22; is a 3-4x speedup achievable for a 7B model?

With 7B models, OpenBLAS prompt eval around 250 ms per token and eval around 330 ms per token is typical for my device (about 3 t/s), so I figure the devices you mentioned are faster if properly configured.

It's difficult to guess what's possible with a fully supported GPU since it's theoretical, maybe 5 t/s. It could be more, like 10 t/s, but I'm just guessing.

Edit: the new t/s printout is nice:

llama_print_timings:        load time =   859.46 ms
llama_print_timings:      sample time =  1254.50 ms /   535 runs   (    2.34 ms per token,   426.46 tokens per second)
llama_print_timings: prompt eval time = 106083.06 ms /   466 tokens (  227.65 ms per token,     4.39 tokens per second)
llama_print_timings:        eval time = 169735.84 ms /   537 runs   (  316.08 ms per token,     3.16 tokens per second)
llama_print_timings:       total time = 831259.84 ms
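For reference, the t/s figures are just the inverse of the per-token times: 1000 / 227.65 ≈ 4.39 t/s for prompt evaluation, and 1000 / 316.08 ≈ 3.16 t/s for generation.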

@SlyEcho
Sponsor Collaborator

SlyEcho commented Jun 17, 2023

What is the theoretical performance achievable on a state-of-the-art mobile SoC like the Exynos 2200 or Snapdragon 8 Gen, utilizing all resources (i.e. CPU, GPU, DSP), assuming sufficient DDR5 memory is available?

This is impossible to answer. I guess you could estimate something with the known FLOPS performance characteristics, but llama.cpp cannot fully use GPU and CPU at the same time, and it works best when using one single type of performance core.

For example, on my Pinebook Pro (RK3399) I tested a 3B model today, and it gets almost the same speed whether I use 4 A-53 cores or 2 A-72 cores, but if I try to use all of them it is much slower. So by using only the performance cores, most of the CPU cores are not even used.

I have an SBC with the new RK3588S as well, and this one can generate 7B on its four A-76 cores at around 3.3 t/s. Using the four Cortex A-55 cores it is 0.8 t/s, using all cores 1.3 t/s.

These newer SoCs like the Exynos 2200 have three types of cores, so I'm not sure which ones should be used for best performance. We won't know until someone tests it.

@ghost
Author

ghost commented Jun 19, 2023

For example, on my Pinebook Pro (RK3399) I tested a 3B model today, and it gets almost the same speed whether I use 4 A-53 cores or 2 A-72 cores, but if I try to use all of them it is much slower. So by using only the performance cores, most of the CPU cores are not even used.

It appears llama.cpp imposes no limits and makes no estimates about the hardware of the system it's installed on. I'm not complaining; it is what it is. It's powerful, so long as one narrows the parameters for the specific device/system.

I have an SBC with the new RK3588S as well, and this one can generate 7B on its four A-76 cores at around 3.3 t/s. Using the four Cortex A-55 cores it is 0.8 t/s, using all cores 1.3 t/s.

--threads 8 essentially locks up my whole device. Termux/llama.cpp fights the operating system for resources. It's cool that it can do that, but it's inefficient.

--threads 5 keeps my CPU around 80-90%, but with lower performance than the sweet spot for my device: --threads 4 with OpenBLAS.

If built with CLBlast, then --threads 3 works better along with the -ngl parameter. It's interesting to watch the resource monitor during inference: the CPU throttles around 50-70%, and the GPU starts below 1%, spikes to around 80-100% for a few seconds, then hovers around 20-30% while writing the response.
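To find the sweet spot, a quick sweep like this helps (a sketch: the model path is a placeholder and -ngl 16 is just an example layer count):

for t in 3 4 5; do
  echo "threads: $t"
  LD_LIBRARY_PATH=/vendor/lib64:$PREFIX/lib ./main \
    -m ~/models/7B/ggml-model-q4_0.bin -t $t -ngl 16 -n 64 -p "Hello" 2>&1 | grep "eval time"
done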

This issue was closed.