ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects #1584

lvsh2012 opened this issue Feb 7, 2024 · 3 comments

lvsh2012 commented Feb 7, 2024

(base) root@yons-MS-7E06:/mnt/code/python/privateGPT# CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
Warning: Found deprecated key 'default' or 'secondary' in pyproject.toml configuration for source tsinghua. Please provide the key 'priority' instead. Accepted values are: 'default', 'primary', 'secondary', 'supplemental', 'explicit'.
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Collecting llama-cpp-python
Downloading http://mirrors.aliyun.com/pypi/packages/af/a6/6b836876620823551650db19d217118b9ef0983a936aa7895ed5d05df9c0/llama_cpp_python-0.2.39.tar.gz (10.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.8/10.8 MB 45.4 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/b7/f4/6a90020cd2d93349b442bfcb657d0dc91eee65491600b2cb1d388bc98e6b/typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/3a/d0/edc009c27b406c4f9cbc79274d6e46d634d139075492ad055e3d68445925/numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.3/18.3 MB 44.9 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 400.6 MB/s eta 0:00:00
Collecting jinja2>=2.11.3 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/30/6d/6de6be2d02603ab56e72997708809e8a5b0fbfee080735109b40a3564843/Jinja2-3.1.3-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 349.6 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/97/18/c30da5e7a0e7f4603abfc6780574131221d9148f323752c2755d48abad30/MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [103 lines of output]
*** scikit-build-core 0.8.0 using CMake 3.28.1 (wheel)
*** Configuring CMake...
2024-02-07 17:13:07,860 - scikit_build_core - WARNING - libdir/ldlibrary: /root/miniconda3/envs/privateGPT/lib/libpython3.11.a is not a real file!
2024-02-07 17:13:07,860 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/root/miniconda3/envs/privateGPT/lib, ldlibrary=libpython3.11.a, multiarch=x86_64-linux-gnu, masd=None
loading initial cache file /tmp/tmpxld2danh/build/CMakeInit.txt
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.25.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda-12.3/targets/x86_64-linux/include (found version "12.3.107")
-- cuBLAS found
CMake Error at /tmp/pip-build-env-c2k3mvlg/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.28/Modules/CMakeDetermineCompilerId.cmake:780 (message):
Compiling the CUDA compiler identification source file
"CMakeCUDACompilerId.cu" failed.

    Compiler: /usr/bin/nvcc

    Build flags:

    Id flags: --keep;--keep-dir;tmp -v



    The output was:

    255

    #$ _SPACE_=

    #$ _CUDART_=cudart

    #$ _HERE_=/usr/lib/nvidia-cuda-toolkit/bin

    #$ _THERE_=/usr/lib/nvidia-cuda-toolkit/bin

    #$ _TARGET_SIZE_=

    #$ _TARGET_DIR_=

    #$ _TARGET_SIZE_=64

    #$ NVVMIR_LIBRARY_DIR=/usr/lib/nvidia-cuda-toolkit/libdevice

    #$
    PATH=/usr/lib/nvidia-cuda-toolkit/bin:/tmp/pip-build-env-c2k3mvlg/overlay/bin:/tmp/pip-build-env-c2k3mvlg/normal/bin:/root/.cache/pypoetry/virtualenvs/private-gpt-pJZSlMmG-py3.11/bin:/root/.pyenv/shims:/root/.pyenv/bin:/root/.local/bin:/usr/local/cuda-12.3/bin:/root/miniconda3/bin:/root/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin


    #$ LIBRARIES= -L/usr/lib/x86_64-linux-gnu/stubs -L/usr/lib/x86_64-linux-gnu

    #$ rm tmp/a_dlink.reg.c

    #$ gcc -D__CUDA_ARCH__=300 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS
    -D__CUDACC__ -D__NVCC__ -D__CUDACC_VER_MAJOR__=10 -D__CUDACC_VER_MINOR__=1
    -D__CUDACC_VER_BUILD__=243 -include "cuda_runtime.h" -m64
    "CMakeCUDACompilerId.cu" > "tmp/CMakeCUDACompilerId.cpp1.ii"

    #$ cicc --c++14 --gnu_version=90400 --allow_managed -arch compute_30 -m64
    -ftz=0 -prec_div=1 -prec_sqrt=1 -fmad=1 --include_file_name
    "CMakeCUDACompilerId.fatbin.c" -tused -nvvmir-library
    "/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.10.bc"
    --gen_module_id_file --module_id_file_name
    "tmp/CMakeCUDACompilerId.module_id" --orig_src_file_name
    "CMakeCUDACompilerId.cu" --gen_c_file_name
    "tmp/CMakeCUDACompilerId.cudafe1.c" --stub_file_name
    "tmp/CMakeCUDACompilerId.cudafe1.stub.c" --gen_device_file_name
    "tmp/CMakeCUDACompilerId.cudafe1.gpu" "tmp/CMakeCUDACompilerId.cpp1.ii" -o
    "tmp/CMakeCUDACompilerId.ptx"

    #$ ptxas -arch=sm_30 -m64 "tmp/CMakeCUDACompilerId.ptx" -o
    "tmp/CMakeCUDACompilerId.sm_30.cubin"

    ptxas fatal : Value 'sm_30' is not defined for option 'gpu-name'

    # --error 0xff --





  Call Stack (most recent call first):
    /tmp/pip-build-env-c2k3mvlg/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.28/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
    /tmp/pip-build-env-c2k3mvlg/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.28/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
    /tmp/pip-build-env-c2k3mvlg/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.28/Modules/CMakeDetermineCUDACompiler.cmake:135 (CMAKE_DETERMINE_COMPILER_ID)
    vendor/llama.cpp/CMakeLists.txt:327 (enable_language)


  -- Configuring incomplete, errors occurred!

  *** CMake configuration failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

[notice] A new release of pip is available: 23.3.1 -> 24.0
[notice] To update, run: pip install --upgrade pip

How can I fix this?
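
The log itself points at the likely root cause: CMake identified the CUDA compiler as /usr/bin/nvcc from the distro's nvidia-cuda-toolkit package (the gcc line reports __CUDACC_VER_MAJOR__=10, __CUDACC_VER_MINOR__=1, i.e. CUDA 10.1), even though CMake had just found the CUDA 12.3 toolkit. That old nvcc targets compute_30/sm_30 by default, which the toolchain here rejects. A minimal sketch of one way out, assuming the newer compiler lives at /usr/local/cuda-12.3/bin/nvcc (the PATH in the log suggests it does), is to tell CMake explicitly which nvcc to use:

export CUDACXX=/usr/local/cuda-12.3/bin/nvcc
CMAKE_ARGS='-DLLAMA_CUBLAS=on -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.3/bin/nvcc' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python

Removing the distro package (apt remove nvidia-cuda-toolkit) so that only one nvcc is on the PATH should achieve the same thing.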

sonaliverma commented

@lvsh2012, do you have the build tools installed on your system?
https://visualstudio.microsoft.com/visual-cpp-build-tools/


AntonKun commented Feb 12, 2024

Try this (the variables have to be exported, otherwise pip's build subprocess never sees them):

pip uninstall llama-cpp-python
export LLAMA_CUBLAS="1"
export FORCE_CMAKE="1"
export CMAKE_ARGS="-DLLAMA_CUBLAS=on"
python -m pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu117
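
If pip picks up one of the prebuilt cuBLAS wheels from that extra index, no local compilation happens at all, which sidesteps the nvcc problem entirely; note the cu117 suffix, so those wheels target CUDA 11.7 and need its runtime libraries available. A quick sanity check after installing (this assumes nothing beyond the package exposing its version string):

python -c "import llama_cpp; print(llama_cpp.__version__)"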


RickTorresJr commented Mar 3, 2024

@lvsh2012, do you have the build tools installed on your system? https://visualstudio.microsoft.com/visual-cpp-build-tools/

Thanks. Installing just the "Desktop development with C++" workload got me past this error.
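
Note that the Visual C++ Build Tools link applies to Windows. On a Linux setup like the one in the original report, the rough equivalent is making sure a host compiler and CMake are present and that the nvcc on the PATH matches the toolkit CMake discovers, e.g. (Ubuntu/apt assumed):

sudo apt install build-essential cmake
nvcc --version    # should report the same version CMake finds, e.g. 12.3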
