
(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) on M2 #847

Open
alessandropaticchio opened this issue Oct 27, 2023 · 18 comments
Labels: bug (Something isn't working), build, documentation (Improvements or additions to documentation)

Comments

@alessandropaticchio

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I'm trying to launch the Llama API on my MacBook M2, following the guide at https://github.com/abetlen/llama-cpp-python#:~:text=docs/install/macos.md.

Current Behavior

Unfortunately, after running the last command, which is

python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1

I get

RuntimeError: Failed to load shared library '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib': dlopen(/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib, 0x0006): tried: '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (no such file), '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))

Any help would be much appreciated!
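
For anyone debugging the same trace: what fails here is, roughly, a ctypes dlopen of libllama.dylib. A minimal sketch to reproduce the failure outside the server (the path is the one from the trace above and will differ on other machines):

import ctypes

# Path copied from the error message above; adjust for your environment.
LIB = "/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib"

try:
    ctypes.CDLL(LIB)  # the same kind of load llama_cpp performs at import time
except OSError as exc:
    # On an architecture mismatch this prints the "incompatible architecture
    # (have 'x86_64', need 'arm64')" dlopen message seen above.
    print(exc)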

@steveoOn


Add -DCMAKE_OSX_ARCHITECTURES=arm64 to the CMake build flags, like so:

CMAKE_ARGS="-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64"
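
For reference, the full reinstall with those flags (essentially the command muzhig runs later in this thread) would be:

CMAKE_ARGS="-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python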

@kugg

kugg commented Oct 30, 2023

The parent Python process runs as x86_64, but the loaded shared object is arm64e or arm64.
When I initially encountered this problem, I was using this Python interpreter on my M2:

$ file $(which python3)
/usr/local/bin/python3: Mach-O 64-bit executable x86_64

There was also the default macOS version:

$ file /usr/bin/python3
/usr/bin/python3: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
/usr/bin/python3 (for architecture x86_64):     Mach-O 64-bit executable x86_64
/usr/bin/python3 (for architecture arm64e):     Mach-O 64-bit executable arm64e

When a binary is compiled for two architectures, you can select which one to execute with the "arch" command:

arch -arm64e /usr/bin/python3

If you run into this problem, first check which architecture your parent Python interpreter is compiled for:

file $(which python3)

Then, install a Python interpreter into your environment that matches the architecture of the shared object (.so / .dylib) that ctypes attempts to load.
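
The same check is also available from inside Python, which removes any doubt about which interpreter is actually on PATH; a quick sketch:

import platform

# 'arm64' for a native Apple Silicon interpreter, 'x86_64' for an Intel or
# Rosetta-translated one; this must match the architecture of the
# libllama.dylib that ctypes loads.
print(platform.machine())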

@muzhig

muzhig commented Nov 9, 2023

@steveoOn the last version this worked on was 0.2.13; 0.2.14 fails to compile.
@kugg in my case I have only the arm64 binary and still hit the same error. When llama_cpp is built, CMake decides to use x86.
@alessandropaticchio why is this issue closed?
The issue is still present, and moreover the workaround @steveoOn suggested has stopped working.
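
One quick check is to ask the Python that runs pip what it thinks the machine is:

python3 -c "import platform; print(platform.machine())"

If that prints arm64 but the build log still says CMAKE_SYSTEM_PROCESSOR: x86_64, note that the logs below invoke cmake from /usr/local/Cellar/..., the Intel Homebrew prefix; an x86_64 cmake binary running under Rosetta reports an x86_64 host, which could explain the x86 detection here.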

@muzhig

muzhig commented Nov 9, 2023

Current state with 0.2.15:

(.venv) potapov-m1:ai muzhig$ CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
Using pip 23.3.1 from /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip (python 3.10)
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.15.tar.gz (7.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 11.1 MB/s eta 0:00:00
  Running command pip subprocess to install build dependencies
  Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached scikit_build_core-0.6.1-py3-none-any.whl.metadata (17 kB)
  Collecting exceptiongroup (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached exceptiongroup-1.1.3-py3-none-any.whl.metadata (6.1 kB)
  Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
  Collecting tomli>=1.1 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
  Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached pathspec-0.11.2-py3-none-any.whl.metadata (19 kB)
  Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB)
  Using cached scikit_build_core-0.6.1-py3-none-any.whl (134 kB)
  Using cached packaging-23.2-py3-none-any.whl (53 kB)
  Using cached pathspec-0.11.2-py3-none-any.whl (29 kB)
  Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
  Installing collected packages: tomli, pathspec, packaging, exceptiongroup, scikit-build-core, pyproject-metadata
  Successfully installed exceptiongroup-1.1.3 packaging-23.2 pathspec-0.11.2 pyproject-metadata-0.7.1 scikit-build-core-0.6.1 tomli-2.0.1
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  Getting requirements to build wheel ... done
  Running command pip subprocess to install backend dependencies
  Collecting ninja>=1.5
    Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
  Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (270 kB)
  Installing collected packages: ninja
  Successfully installed ninja-1.11.1.1
  Installing backend dependencies ... done
  Running command Preparing metadata (pyproject.toml)
  *** scikit-build-core 0.6.1 using CMake 3.26.2 (metadata_wheel)
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/24/21/7d397a4b7934ff4028987914ac1044d3b7d52712f30e2ac7a2ae5bc86dd0/typing_extensions-4.8.0-py3-none-any.whl.metadata
  Downloading typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/e3/63/fd76159cb76c682171e3bf50ed0ee8704103035a9347684a2ec0914b84a1/numpy-1.26.1-cp310-cp310-macosx_11_0_arm64.whl.metadata
  Downloading numpy-1.26.1-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 275.4 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 214.2 MB/s eta 0:00:00
Downloading numpy-1.26.1-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 18.9 MB/s eta 0:00:00
Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Building wheels for collected packages: llama-cpp-python
  Running command Building wheel for llama-cpp-python (pyproject.toml)
  *** scikit-build-core 0.6.1 using CMake 3.26.2 (wheel)
  *** Configuring CMake...
  loading initial cache file /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpp3_873f3/build/CMakeInit.txt
  -- The C compiler identification is AppleClang 14.0.3.14030022
  -- The CXX compiler identification is AppleClang 14.0.3.14030022
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Accelerate framework found
  -- Metal framework found
  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected
  CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:731 (install):
    Target llama has RESOURCE files but no RESOURCE DESTINATION.
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) at CMakeLists.txt:18 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) at CMakeLists.txt:27 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  This warning is for project developers.  Use -Wno-dev to suppress it.

  -- Configuring done (4.1s)
  -- Generating done (0.0s)
  -- Build files have been written to: /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpp3_873f3/build
  *** Building project with Ninja...
  [1/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/llama.cpp
  FAILED: vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/llama.cpp
  clang: error: the clang compiler does not support '-march=native'
  [2/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-metal.m
  FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-metal.m
  clang: error: the clang compiler does not support '-march=native'
  [3/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml.c
  FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml.c
  clang: error: the clang compiler does not support '-march=native'
  [4/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-quants.c
  FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-quants.c
  clang: error: the clang compiler does not support '-march=native'
  [5/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-alloc.c
  FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-alloc.c
  clang: error: the clang compiler does not support '-march=native'
  [6/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-backend.c
  FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o
  /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/ggml-backend.c
  clang: error: the clang compiler does not support '-march=native'
  [7/23] cd /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp && /usr/local/Cellar/cmake/3.26.2/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=14.0.3.14030022 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -P /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/common/../scripts/build-info.cmake
  -- Found Git: /usr/bin/git (found version "2.39.2 (Apple Git-143)")
  [8/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/llava.cpp
  [9/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534/vendor/llama.cpp/examples/llava/clip.cpp
  ninja: build stopped: subcommand failed.

  *** CMake build failed
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /Users/muzhig/PycharmProjects/ai/.venv/bin/python /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmponng6h66
  cwd: /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-6hf62331/llama-cpp-python_e29ecd08248b4cd0a1cea8e98565d534
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

@abetlen (Owner)

abetlen commented Nov 10, 2023

@muzhig sorry to hear this stopped working.

Just speculating about what could have caused this regression: can you try compiling with the -DLLAMA_NATIVE=ON flag?
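
Spelled out with the flags from earlier in the thread, that would be:

CMAKE_ARGS="-DLLAMA_NATIVE=ON -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python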

@muzhig

muzhig commented Nov 14, 2023

Checking arch:
muzhig$ arch
arm64
Latest version (0.2.17) with the previously working hotfix:

muzhig$ CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
Using pip 23.3.1 from /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip (python 3.10)
Collecting llama-cpp-python
Downloading llama_cpp_python-0.2.17.tar.gz (7.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 9.9 MB/s eta 0:00:00
Running command pip subprocess to install build dependencies
Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1)
Using cached scikit_build_core-0.6.1-py3-none-any.whl.metadata (17 kB)
Collecting exceptiongroup (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached exceptiongroup-1.1.3-py3-none-any.whl.metadata (6.1 kB)
Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting tomli>=1.1 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1)
Using cached pathspec-0.11.2-py3-none-any.whl.metadata (19 kB)
Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1)
Using cached pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB)
Using cached scikit_build_core-0.6.1-py3-none-any.whl (134 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached pathspec-0.11.2-py3-none-any.whl (29 kB)
Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Installing collected packages: tomli, pathspec, packaging, exceptiongroup, scikit-build-core, pyproject-metadata
Successfully installed exceptiongroup-1.1.3 packaging-23.2 pathspec-0.11.2 pyproject-metadata-0.7.1 scikit-build-core-0.6.1 tomli-2.0.1
Installing build dependencies ... done
Running command Getting requirements to build wheel
Getting requirements to build wheel ... done
Running command pip subprocess to install backend dependencies
Collecting ninja>=1.5
Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (270 kB)
Installing collected packages: ninja
Successfully installed ninja-1.11.1.1
Installing backend dependencies ... done
Running command Preparing metadata (pyproject.toml)
*** scikit-build-core 0.6.1 using CMake 3.26.2 (metadata_wheel)
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/24/21/7d397a4b7934ff4028987914ac1044d3b7d52712f30e2ac7a2ae5bc86dd0/typing_extensions-4.8.0-py3-none-any.whl.metadata
Downloading typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/2f/ac/be1f2767b7222347d2fefc18d8d58e9febfd9919190cc6fbd8a4d22d6eab/numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata
Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 138.3 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 75.4 MB/s eta 0:00:00
Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 17.1 MB/s eta 0:00:00
Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Building wheels for collected packages: llama-cpp-python
Running command Building wheel for llama-cpp-python (pyproject.toml)
*** scikit-build-core 0.6.1 using CMake 3.26.2 (wheel)
*** Configuring CMake...
loading initial cache file /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmp60az66ug/build/CMakeInit.txt
-- The C compiler identification is AppleClang 14.0.3.14030022
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:731 (install):
Target llama has RESOURCE files but no RESOURCE DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:20 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:29 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

-- Configuring done (10.0s)
-- Generating done (0.0s)
-- Build files have been written to: /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmp60az66ug/build
*** Building project with Ninja...
[1/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-alloc.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-alloc.c
clang: error: the clang compiler does not support '-march=native'
[2/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-backend.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-backend.c
clang: error: the clang compiler does not support '-march=native'
[3/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-metal.m
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-metal.m
clang: error: the clang compiler does not support '-march=native'
[4/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml.c
clang: error: the clang compiler does not support '-march=native'
[5/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/llama.cpp
FAILED: vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/llama.cpp
clang: error: the clang compiler does not support '-march=native'
[6/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-quants.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/ggml-quants.c
clang: error: the clang compiler does not support '-march=native'
[7/23] cd /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp && /usr/local/Cellar/cmake/3.26.2/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=14.0.3.14030022 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -P /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/common/../scripts/build-info.cmake
-- Found Git: /usr/bin/git (found version "2.39.2 (Apple Git-143)")
[8/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/llava.cpp
[9/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c/vendor/llama.cpp/examples/llava/clip.cpp
ninja: build stopped: subcommand failed.

*** CMake build failed
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /Users/muzhig/PycharmProjects/ai/.venv/bin/python /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpqvqr2mlt
cwd: /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-9jh8n39c/llama-cpp-python_26110d8d97ab4239923c8f3c0103766c
Building wheel for llama-cpp-python (pyproject.toml) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

With -DLLAMA_NATIVE=ON:

muzhig$ CMAKE_ARGS="-DLLAMA_NATIVE=ON -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
Using pip 23.3.1 from /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip (python 3.10)
Collecting llama-cpp-python
Downloading llama_cpp_python-0.2.17.tar.gz (7.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 10.1 MB/s eta 0:00:00
Running command pip subprocess to install build dependencies
Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1)
Using cached scikit_build_core-0.6.1-py3-none-any.whl.metadata (17 kB)
Collecting exceptiongroup (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached exceptiongroup-1.1.3-py3-none-any.whl.metadata (6.1 kB)
Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting tomli>=1.1 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1)
Using cached pathspec-0.11.2-py3-none-any.whl.metadata (19 kB)
Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1)
Using cached pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB)
Using cached scikit_build_core-0.6.1-py3-none-any.whl (134 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached pathspec-0.11.2-py3-none-any.whl (29 kB)
Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Installing collected packages: tomli, pathspec, packaging, exceptiongroup, scikit-build-core, pyproject-metadata
Successfully installed exceptiongroup-1.1.3 packaging-23.2 pathspec-0.11.2 pyproject-metadata-0.7.1 scikit-build-core-0.6.1 tomli-2.0.1
Installing build dependencies ... done
Running command Getting requirements to build wheel
Getting requirements to build wheel ... done
Running command pip subprocess to install backend dependencies
Collecting ninja>=1.5
Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (270 kB)
Installing collected packages: ninja
Successfully installed ninja-1.11.1.1
Installing backend dependencies ... done
Running command Preparing metadata (pyproject.toml)
*** scikit-build-core 0.6.1 using CMake 3.26.2 (metadata_wheel)
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/24/21/7d397a4b7934ff4028987914ac1044d3b7d52712f30e2ac7a2ae5bc86dd0/typing_extensions-4.8.0-py3-none-any.whl.metadata
Downloading typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/2f/ac/be1f2767b7222347d2fefc18d8d58e9febfd9919190cc6fbd8a4d22d6eab/numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata
Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 176.1 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 80.5 MB/s eta 0:00:00
Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 16.5 MB/s eta 0:00:00
Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Building wheels for collected packages: llama-cpp-python
Running command Building wheel for llama-cpp-python (pyproject.toml)
*** scikit-build-core 0.6.1 using CMake 3.26.2 (wheel)
*** Configuring CMake...
loading initial cache file /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpmel7qso6/build/CMakeInit.txt
-- The C compiler identification is AppleClang 14.0.3.14030022
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:731 (install):
Target llama has RESOURCE files but no RESOURCE DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:20 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:29 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

-- Configuring done (1.4s)
-- Generating done (0.0s)
-- Build files have been written to: /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpmel7qso6/build
*** Building project with Ninja...
[1/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml.c
clang: error: the clang compiler does not support '-march=native'
[2/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-quants.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-quants.c
clang: error: the clang compiler does not support '-march=native'
[3/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-backend.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-backend.c
clang: error: the clang compiler does not support '-march=native'
[4/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-alloc.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-alloc.c
clang: error: the clang compiler does not support '-march=native'
[5/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-metal.m
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/ggml-metal.m
clang: error: the clang compiler does not support '-march=native'
[6/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/llama.cpp
FAILED: vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/llama.cpp
clang: error: the clang compiler does not support '-march=native'
[7/23] cd /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp && /usr/local/Cellar/cmake/3.26.2/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=14.0.3.14030022 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -P /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/common/../scripts/build-info.cmake
-- Found Git: /usr/bin/git (found version "2.39.2 (Apple Git-143)")
[8/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/llava.cpp
[9/23] /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3/vendor/llama.cpp/examples/llava/clip.cpp
ninja: build stopped: subcommand failed.

*** CMake build failed
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /Users/muzhig/PycharmProjects/ai/.venv/bin/python /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpmq3tqonz
cwd: /private/var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/pip-install-fjd5958j/llama-cpp-python_39d34505f0654f52856da65d4f7b9fe3
Building wheel for llama-cpp-python (pyproject.toml) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

No luck :(
I think the key is that CMake's configure step says it detected x86:

  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected
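One thing worth ruling out when CMake reports x86_64 on an M-series Mac is a terminal or shell running under Rosetta. These standard macOS checks (not from the thread) show what the build environment really is:

arch                                                      # arm64 natively; i386 under Rosetta
uname -m                                                  # arm64 natively; x86_64 under Rosetta
python3 -c "import platform; print(platform.machine())"   # architecture of the Python interpreter itself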

@YAY-3M-TA3

YAY-3M-TA3 commented Nov 17, 2023

checking arch
(0.2.17) latest version with previously working hotfix -DLLAMA_NATIVE=ON
No luck :( I think the key is that CMake's configure step says it detected x86:

  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected

Probably has something to do with the llava 1.5 multi-modal support added in 0.2.14 (aab74f0): a second library (libllava.dylib) now has to be built as well, which may be why @steveoOn's fix no longer works?

@YAY-3M-TA3

Workaround for now:
(I believe the issue may be with virtual environments - I was using conda.)

Working installation for bakllava:

python3 -m venv venv
source venv/bin/activate
pip install py-llm-core

Then download your models -
grab both
ggml-model-q5_k.gguf
mmproj-model-f16.gguf

Place these wherever you like (you'll reference the paths in your .py).
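For reference, py-llm-core drives llama-cpp-python under the hood; with llama-cpp-python directly, the two files are typically wired together like this (a sketch based on llama-cpp-python's multimodal docs; the paths are placeholders):

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# clip_model_path points at the vision projector (mmproj) file
chat_handler = Llava15ChatHandler(clip_model_path="./mmproj-model-f16.gguf")
llm = Llama(
    model_path="./ggml-model-q5_k.gguf",  # the quantized language model
    chat_handler=chat_handler,
    n_ctx=2048,       # extra context to leave room for image embeddings
    logits_all=True,  # the llava chat handler needs per-token logits
)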

There is a bug in the llama_cpp module: llama_chat_format.py
change line 966 from
llama.eval(llama.tokenize(system_prompt.encode("utf8"), add_bos=True))

to

llama.eval(llama.tokenize((' '.join([e['text'] for e in system_prompt])).encode('UTF-8'), add_bos=True))
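For context, the patched line assumes system_prompt arrives as a list of content parts rather than a plain string, so the text fields have to be joined before tokenizing. Illustrative values only:

system_prompt = [{"type": "text", "text": "You are a helpful assistant."}]
joined = " ".join(e["text"] for e in system_prompt)
llama.eval(llama.tokenize(joined.encode("UTF-8"), add_bos=True))  # equivalent to the patched line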

@AhmadBaracat

@YAY-3M-TA3 comment + using a previous version (0.2.11) worked for me.

!CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python==0.2.11

@longzhang

I tried using the old version and it works, but it runs too slowly. Maybe it cannot use the arm arch:
CMAKE_ARGS="-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64" pip install --upgrade --force-reinstall llama-cpp-python==v0.1.85 --no-cache-dir

@fozziethebeat

I too am having problems with this using an Apple M2 and macOS 14.1.2 (23B92). Some details in case it helps others:

Arch
(llama) ~ ➤ arch
arm64
Conda
(llama) ~ ➤ conda info

     active environment : llama
    active env location : /Users/fozziethebeat/anaconda3/envs/llama
            shell level : 1
       user config file : /Users/fozziethebeat/.condarc
 populated config files : /Users/fozziethebeat/.condarc
          conda version : 23.7.4
    conda-build version : 3.26.1
         python version : 3.11.5.final.0
       virtual packages : __archspec=1=arm64
                          __osx=14.1.2=0
                          __unix=0=0
       base environment : /Users/fozziethebeat/anaconda3  (writable)
      conda av data dir : /Users/fozziethebeat/anaconda3/etc/conda
  conda av metadata url : None
           channel URLs : https://repo.anaconda.com/pkgs/main/osx-arm64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/osx-arm64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /Users/fozziethebeat/anaconda3/pkgs
                          /Users/fozziethebeat/.conda/pkgs
       envs directories : /Users/fozziethebeat/anaconda3/envs
                          /Users/fozziethebeat/.conda/envs
               platform : osx-arm64
             user-agent : conda/23.7.4 requests/2.31.0 CPython/3.11.5 Darwin/23.1.0 OSX/14.1.2 aau/0.4.2 c/04QvfUPEw4ylU87j3NEdGw s/z6SZBnNd3JF988yxq7fzoQ e/8NeTeH7rTaxso2UfOVXsxw
                UID:GID : 501:20
             netrc file : None
           offline mode : False
Which Python
(llama) ~ ➤ file $(which python3)
/Users/fozziethebeat/anaconda3/envs/llama/bin/python3: Mach-O 64-bit executable arm64
(llama) ~ ➤ file /usr/bin/python3
/usr/bin/python3: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
/usr/bin/python3 (for architecture x86_64):	Mach-O 64-bit executable x86_64
/usr/bin/python3 (for architecture arm64e):	Mach-O 64-bit executable arm64e
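The same check can be pointed at the library from the original traceback, to see which architecture actually got built (assuming llama_cpp is installed in the active environment):

file "$(python3 -c 'import llama_cpp, os; print(os.path.dirname(llama_cpp.__file__))')/libllama.dylib"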
Working install
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python==0.2.13

This got me to a final run of the server with

ggml_metal_init: GPU name:   Apple M2
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
Failed install
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python==0.2.14

This led to eventually

clang: error: unsupported argument 'native' to option '-march='
  [7/17] cd /private/var/folders/h1/v4wn7py50kj2ntk27xvc0vjm0000gn/T/pip-install-3qkvzxq1/llama-cpp-python_8fd30f7c2244404b8a8225a8a98d18c1/vendor/llama.cpp && /usr/local/Cellar/cmake/3.27.1/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=15.0.0.15000040 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -P /private/var/folders/h1/v4wn7py50kj2ntk27xvc0vjm0000gn/T/pip-install-3qkvzxq1/llama-cpp-python_8fd30f7c2244404b8a8225a8a98d18c1/vendor/llama.cpp/common/../scripts/build-info.cmake

I managed to install llama.cpp manually with no problem using make (no special changes). So I'm guessing this is something weird about cmake that I am not familiar with.
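For reference, the manual build referred to here is just the stock llama.cpp Makefile flow (assuming the Xcode command-line tools are installed):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make  # the Makefile picks its own arm64 flags; no CMake processor detection involved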

I've tried later versions and at HEAD, but nothing past 0.2.13 was able to compile.

@fozziethebeat

Okay, I did some hacking to the CMakeLists.txt file and found that by adding

    set(LLAMA_NATIVE "Off" CACHE BOOL "llama: disable march native" FORCE)

somewhere in it, I got everything to install properly. I did some investigating and discovered that for some reason my Xcode toolchain doesn't support the -march=native option. Version 0.2.13 doesn't trigger anything since it never tries to add that option when calling clang. I still want to investigate why my Xcode clang doesn't have this option. (Note: my locally installed clang does have this flag.)

@abetlen
Owner

abetlen commented Dec 22, 2023

@fozziethebeat around line 18, I guess. Does setting it before pip installing work as well, i.e. CMAKE_ARGS="-DLLAMA_NATIVE=OFF"?

@abetlen added the bug, build, and documentation labels on Dec 22, 2023
@fozziethebeat

That looks like it did it. In a new environment I ran

CMAKE_ARGS="-DLLAMA_NATIVE=OFF" pip install llama-cpp-python

And everything worked with great success
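A quick sanity check that the rebuilt wheel actually loads (the import is exactly what failed in the original report):

python3 -c "import llama_cpp; print(llama_cpp.__version__)"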

@DuongNg2911

DuongNg2911 commented Dec 24, 2023

@YAY-3M-TA3 comment + using a previous version (0.2.11) worked for me.

!CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python==0.2.11

It worked for me, thanks a lot!
Note: I am using an M1 chip.

@muzhig

muzhig commented Dec 26, 2023

Here is what worked for me after all (with the latest version):

CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python

I started digging from this log line:

  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected

and stumbled upon this thread https://discourse.cmake.org/t/macos-cmake-detect-system-processor-incorrectly-on-apple-silicon/5129/5

@abetlen it probably makes sense to add this to the troubleshooting section of the README?

@khaledmsm

Here is what worked for me after all (with latest version):

CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python


In which path do you write it?

@berkeleymalagon

CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python==0.2.11

This fixed it for me. Epic, thank you!
