I built a pre-compiled manylinux wheel for llama_cpp_python that bundles all the necessary native shared libraries (e.g., libllama.so, libggml-cpu.so), so users can install it without building the project from source.
This is intended as a community contribution for developers who struggle with the native build process.
📥 Download the wheel
You can download the wheel from my GitHub release here:
🔗 https://github.com/mrzeeshanahmed/llama-cpp-python/releases/tag/v0.3.17-manylinux-x86_64
📌 Supported Environment
✔ Linux x86_64
✔ Python 3.10
✔ CPU only (OpenBLAS + OpenMP backend)
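With a matching environment, installation is a single pip command. Here is a minimal sketch; the exact wheel filename (Python tag, manylinux tag) is an assumption, so check the release page for the actual asset name before installing:

```shell
# Install the pre-built wheel directly from the GitHub release.
# NOTE: the wheel filename below is an assumed example -- copy the real
# asset name from the release page linked above.
WHEEL_URL="https://github.com/mrzeeshanahmed/llama-cpp-python/releases/download/v0.3.17-manylinux-x86_64/llama_cpp_python-0.3.17-cp310-cp310-manylinux_x86_64.whl"
pip install "$WHEEL_URL"

# Verify that the package imports and the bundled native libraries load.
python -c "import llama_cpp; print(llama_cpp.__version__)"
```

If the import succeeds, the bundled .so files were found and you can skip the source build entirely.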
This build took 8 hours of my life and taught me a lot of things I did not know. If it saves you the same pain, please show some form of appreciation.