(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) on M2 #847
Comments
The parent Python process runs as x86_64, but the loaded shared object is built for arm64e or arm64.
There was also the default macOS version:
When a binary is compiled for two architectures, you must select which one to execute using the "arch" command.
If you run into this problem, first check which architecture your parent Python interpreter was compiled for:
Then install a Python interpreter into your environment that matches the architecture of the shared object (.so) that ctypes attempts to import.
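As a concrete sketch of those checks (the paths and the server invocation below are illustrative, not taken from this thread; substitute your own):

```shell
# Check which architecture the parent Python interpreter runs as.
# On Apple Silicon this should print "arm64"; "x86_64" means the
# interpreter is an Intel build or is running under Rosetta, so it
# cannot load an arm64 libllama.dylib.
python3 -c "import platform; print(platform.machine())"

# Inspect the compiled shared object directly (substitute the path
# from your own error message):
#   file /path/to/site-packages/llama_cpp/libllama.dylib

# If python3 is a universal (fat) binary, force the arm64 slice with
# the "arch" command mentioned above:
#   arch -arm64 python3 -m llama_cpp.server --model "$MODEL" --n_gpu_layers 1
```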
@steveoOn: the last version this worked on was
Current state with
@muzhig sorry to hear this stopped working. Just speculating about what could have caused this regression: can you try compiling with
checking arch
(0.2.17) latest version with previously working hotfix:

```
muzhig$ CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
CMake Warning (dev) at CMakeLists.txt:20 (install):
CMake Warning (dev) at CMakeLists.txt:29 (install):
-- Configuring done (10.0s)
*** CMake build failed
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
```

With `-DLLAMA_NATIVE=ON`:

```
muzhig$ CMAKE_ARGS="-DLLAMA_NATIVE=ON -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
Using pip 23.3.1 from /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip (python 3.10)
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.17.tar.gz (7.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 10.1 MB/s eta 0:00:00
  Running command pip subprocess to install build dependencies
  Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached scikit_build_core-0.6.1-py3-none-any.whl.metadata (17 kB)
  Collecting exceptiongroup (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached exceptiongroup-1.1.3-py3-none-any.whl.metadata (6.1 kB)
  Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
  Collecting tomli>=1.1 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
    Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
  Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached pathspec-0.11.2-py3-none-any.whl.metadata (19 kB)
  Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1)
    Using cached pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB)
  Using cached scikit_build_core-0.6.1-py3-none-any.whl (134 kB)
  Using cached packaging-23.2-py3-none-any.whl (53 kB)
  Using cached pathspec-0.11.2-py3-none-any.whl (29 kB)
  Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
  Installing collected packages: tomli, pathspec, packaging, exceptiongroup, scikit-build-core, pyproject-metadata
  Successfully installed exceptiongroup-1.1.3 packaging-23.2 pathspec-0.11.2 pyproject-metadata-0.7.1 scikit-build-core-0.6.1 tomli-2.0.1
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  Getting requirements to build wheel ... done
  Running command pip subprocess to install backend dependencies
  Collecting ninja>=1.5
    Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
  Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (270 kB)
  Installing collected packages: ninja
  Successfully installed ninja-1.11.1.1
  Installing backend dependencies ... done
  Running command Preparing metadata (pyproject.toml)
  *** scikit-build-core 0.6.1 using CMake 3.26.2 (metadata_wheel)
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/24/21/7d397a4b7934ff4028987914ac1044d3b7d52712f30e2ac7a2ae5bc86dd0/typing_extensions-4.8.0-py3-none-any.whl.metadata
  Downloading typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/2f/ac/be1f2767b7222347d2fefc18d8d58e9febfd9919190cc6fbd8a4d22d6eab/numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata
  Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 176.1 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 80.5 MB/s eta 0:00:00
Downloading numpy-1.26.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 16.5 MB/s eta 0:00:00
Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Building wheels for collected packages: llama-cpp-python
  Running command Building wheel for llama-cpp-python (pyproject.toml)
  *** scikit-build-core 0.6.1 using CMake 3.26.2 (wheel)
  *** Configuring CMake...
  loading initial cache file /var/folders/x2/ss9myv0d7wjdp8z7c69xpyq40000gn/T/tmpmel7qso6/build/CMakeInit.txt
  -- The C compiler identification is AppleClang 14.0.3.14030022
  -- The CXX compiler identification is AppleClang 14.0.3.14030022
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Accelerate framework found
  -- Metal framework found
  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected
  CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:731 (install):
    Target llama has RESOURCE files but no RESOURCE DESTINATION.
  This warning is for project developers. Use -Wno-dev to suppress it.
  CMake Warning (dev) at CMakeLists.txt:20 (install):
  CMake Warning (dev) at CMakeLists.txt:29 (install):
  -- Configuring done (1.4s)
  *** CMake build failed
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
```

No luck :(
This probably has something to do with the llava 1.5 multi-modal support added in 0.2.14 (aab74f0). Since a second library (libllava.dylib) now has to be built, maybe that is why @steveoOn's fix no longer works?
Workaround for now: a working installation for BakLLaVA:
Then download your models and place them wherever you like (you'll reference the path in your .py). There is a bug in the llama_cpp module: llama_chat_format.py to
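The collapsed install commands did not survive the paste. As a rough sketch only (the Metal flag and model path are assumptions, not the poster's exact steps), a Metal-enabled reinstall on Apple Silicon generally looks like this:

```shell
# Assumed sketch, not the poster's exact commands: build the wheel from
# source with Metal enabled so the library matches the arm64 host.
export CMAKE_ARGS="-DLLAMA_METAL=on"

# Rebuild the wheel from source:
#   pip install --force-reinstall --no-cache-dir llama-cpp-python
# Then point the server at wherever you placed your downloaded model:
#   python3 -m llama_cpp.server --model /path/to/model.gguf --n_gpu_layers 1
```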
@YAY-3M-TA3's comment plus using a previous version (0.2.11) worked for me.
I tried using the old version and it works, but it runs too slowly. Maybe it cannot use the arm architecture.
I too am having problems with this on an Apple M2 with macOS 14.1.2 (23B92). Some details in case it helps others: Arch
Conda
Which Python
Working install
This got me to a final run of the server with
Failed Install
This led to eventually
I managed to build llama.cpp manually with no problem using make (no special changes), so I'm guessing this is something odd about the cmake build that I'm not familiar with. I've tried later versions and HEAD, but nothing past 0.2.13 was able to compile.
Okay, I did some hacking to the
somewhere in it, I got everything to install properly. I did some investigating and discovered that for some reason my Xcode toolchain doesn't support the
@fozziethebeat around line 18, I guess. Does setting it before pip installing work as well, i.e.
That looks like it did it. In a new environment I ran

```
CMAKE_ARGS="-DLLAMA_NATIVE=OFF" pip install llama-cpp-python
```

and everything worked, with great success.
It worked for me, thanks a lot!
Here is what worked for me after all (with the latest version):
I started digging from the logline:
and stumbled upon this thread: https://discourse.cmake.org/t/macos-cmake-detect-system-processor-incorrectly-on-apple-silicon/5129/5. @abetlen it probably makes sense to add this to the troubleshooting section of the README?
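For readers who land here: the linked CMake thread is about `CMAKE_SYSTEM_PROCESSOR` being inherited from a Rosetta-translated parent process. A hedged sketch of forcing the detection (this assumes CMake 3.19.2 or newer, which added `CMAKE_APPLE_SILICON_PROCESSOR`; the flag combination is an illustration, not the poster's confirmed fix):

```shell
# Sketch under stated assumptions: explicitly tell CMake the host is
# arm64 so it does not inherit x86_64 from a Rosetta-translated shell.
export CMAKE_ARGS="-DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on"

# Then rebuild the wheel from source:
#   pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```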
In which path do you write it?
This fixed it for me. Epic, thank you!
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
I'm trying to launch the Llama API following the guide at https://github.com/abetlen/llama-cpp-python#:~:text=docs/install/macos.md on my MacBook M2.
Current Behavior
Unfortunately, after running the last command, which is
```
python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1
```
I get
```
RuntimeError: Failed to load shared library '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib': dlopen(/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib, 0x0006): tried: '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (no such file), '/Users/alessandropaticchio/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
```
Any help would be much appreciated!