Closed
Labels: Bug (Something isn't working)
Description
Describe the bug
When following the "Code-Llama on MacOS (Apple Silicon)" steps as described in the MACOS.md, the llama-cpp-python library installs as an x86_64 version instead of ARM64 on an Apple Silicon machine.
Reproduce
- Navigate to the MACOS.md guide.
- Follow the steps under "Code-Llama on MacOS (Apple Silicon)."
- Execute the command lipo -info [/path/to/]libllama.dylib. The output reports:
Non-fat file: /Users/[user]/miniconda3/envs/openinterpreter/lib/python3.11/site-packages/llama_cpp/libllama.dylib is architecture: x86_64
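For anyone reproducing this without relying on lipo, the same check can be sketched in Python by reading the Mach-O header of the dylib directly. This is a minimal sketch assuming a standard thin 64-bit or universal (fat) Mach-O file; the constants come from Apple's <mach/machine.h>, and the function name macho_arch is illustrative, not part of any library.

```python
import struct

# Constants from <mach/machine.h> / <mach-o/loader.h>
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64 = 0x0100000C
MH_MAGIC_64 = 0xFEEDFACF  # thin 64-bit Mach-O, header stored little-endian
FAT_MAGIC = 0xCAFEBABE    # universal ("fat") binary, header stored big-endian

def macho_arch(path):
    """Return a rough architecture label for a Mach-O file."""
    with open(path, "rb") as f:
        header = f.read(8)
    if struct.unpack(">I", header[:4])[0] == FAT_MAGIC:
        return "universal (fat)"
    if struct.unpack("<I", header[:4])[0] == MH_MAGIC_64:
        cputype = struct.unpack("<I", header[4:8])[0]
        return {CPU_TYPE_X86_64: "x86_64",
                CPU_TYPE_ARM64: "arm64"}.get(cputype, hex(cputype))
    return "unknown"

# Example (path is illustrative; use your own site-packages location):
# macho_arch(".../site-packages/llama_cpp/libllama.dylib")
```

On an affected install this would report "x86_64" for libllama.dylib, matching the lipo output above.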
Expected behavior
After following the installation steps, I expected the libllama.dylib file to be of the arm64 architecture. Upon running:
interpreter --local
I anticipated that after the model is set, the Local LLM interface package would be found automatically. I did not expect to be prompted to install llama-cpp-python again.
Open Interpreter version
0.1.5
Python version
Python 3.11.4
Operating System name and version
macOS 13.5.1
Additional context
The issue created by following the MACOS.md guide was resolved by running the following commands:
pip uninstall llama-cpp-python
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
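One possible contributing factor, an assumption on my part rather than something confirmed in this report, is that the Python interpreter itself (e.g. an x86_64 miniconda installed under Rosetta) is emulated, in which case pip builds x86_64 wheels by default. A quick self-check before reinstalling:

```python
import platform

def python_is_native_arm64() -> bool:
    """Report whether this Python process identifies as arm64.

    On Apple Silicon, an "x86_64" result usually means the interpreter
    is running under Rosetta emulation (common with Intel miniconda
    installers), which would also make pip compile x86_64 extensions.
    This is a diagnostic sketch, not a definitive test.
    """
    return platform.machine() == "arm64"

print(platform.machine(), "-> native arm64:", python_is_native_arm64())
```

If this prints x86_64 on an Apple Silicon Mac, reinstalling an arm64 build of conda/Python before running the pip commands above may be necessary.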