### Name and Version

build: 3983 (8841ce3f)

Introduced via https://github.com/ggerganov/llama.cpp/pull/9684

### Operating systems

_No response_

### Which llama.cpp modules do you know to be affected?

_No response_

### Problem description & steps to reproduce

On Arm CPU platforms with the i8mm feature, the fallback definition of `HWCAP2_I8MM` in `ggml.c` should be improved. It currently defaults to 0:

```c
#if !defined(HWCAP2_I8MM)
#define HWCAP2_I8MM 0
#endif
```

`HWCAP2_I8MM` should instead be defined with the kernel's value:

```c
#define HWCAP2_I8MM (1 << 13)
```

Refer to https://github.com/torvalds/linux/blob/master/arch/arm64/include/uapi/asm/hwcap.h#L76C1-L76C31

With the zero fallback, `ggml_arm_arch_features.has_i8mm` is initialized to 0 instead of 1 even when `i8mm` is present in `lscpu`'s output.

### First Bad Commit

_No response_

### Relevant log output

_No response_