
ARM64 Installation Issue with llama-cpp-python on Apple Silicon Macs for interpreter --local #503

@gavinmclelland

Description


Describe the bug

When following the "Code-Llama on MacOS (Apple Silicon)" steps as described in the MACOS.md, the llama-cpp-python library installs as an x86_64 version instead of ARM64 on an Apple Silicon machine.

Reproduce

  1. Navigate to the MACOS.md guide.
  2. Follow the steps under "Code-Llama on MacOS (Apple Silicon)."
  3. Run lipo -info [/path/to/]libllama.dylib (see the verification sketch after this list).
  4. Observe the output: Non-fat file: /Users/[user]/miniconda3/envs/openinterpreter/lib/python3.11/site-packages/llama_cpp/libllama.dylib is architecture: x86_64
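A sketch of that check, assuming llama-cpp-python is installed in the active Python environment (the python -c locator below is an assumption for illustration, not part of MACOS.md; adjust the path to your setup):

# Locate the llama_cpp package directory inside the active environment
LLAMA_DIR="$(python -c 'import llama_cpp, os; print(os.path.dirname(llama_cpp.__file__))')"
# Inspect the compiled library; on Apple Silicon this should report arm64, not x86_64
lipo -info "$LLAMA_DIR/libllama.dylib"
# file(1) reports the same information in a different form
file "$LLAMA_DIR/libllama.dylib"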

Expected behavior

After following the installation steps, I expected the libllama.dylib file to be built for the arm64 architecture. Upon running:

interpreter --local

I anticipated that, after selecting a model, the Local LLM interface package would be found automatically; I did not expect to be prompted to install llama-cpp-python again.
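Not part of the original report, but a quick way to rule out a Rosetta-translated interpreter (an x86_64 Python will pull or build x86_64 versions of llama-cpp-python regardless of any CMake flags):

# Architecture the running Python interpreter was built for; "arm64" is expected on Apple Silicon
python -c "import platform; print(platform.machine())"
# Inspect the interpreter binary itself
file "$(which python)"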

Screenshots

[Screenshot: lipo -info output showing the non-fat llama_cpp libllama.dylib is architecture x86_64]

[Screenshot: "Local LLM interface package not found" prompt offering to install llama-cpp-python]

Open Interpreter version

0.1.5

Python version

Python 3.11.4

Operating System name and version

macOS 13.5.1

Additional context

The issue introduced by following the MACOS.md guide was resolved by running the following commands:

pip uninstall llama-cpp-python
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
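After the forced reinstall, the same check as above confirms the rebuilt library; a sketch, with the python -c locator again being an assumed helper rather than part of the fix:

# Re-check the rebuilt library; it should now report arm64
lipo -info "$(python -c 'import llama_cpp, os; print(os.path.dirname(llama_cpp.__file__))')/libllama.dylib"
# Minimal import smoke test of the reinstalled package
python -c "import llama_cpp; print(llama_cpp.__file__)"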

[Screenshot: llama-cpp-python rebuilt with arm64 Metal support; issue resolved via the CMAKE args above]
