Cannot build wheel #1538

Open
LankyPoet opened this issue Jun 17, 2024 · 4 comments

Comments

@LankyPoet

Hi,
I am running Windows 11, Python 3.11.9, and ComfyUI in a venv environment.
I tried installing the latest llama-cpp-python for CUDA 12.4 as shown below and received a string of errors. Can anyone assist, please?

(venv) D:\ComfyUI>pip install llama-cpp-python --verbose --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
Using pip 24.0 from D:\ComfyUI\venv\Lib\site-packages\pip (python 3.11)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://abetlen.github.io/llama-cpp-python/whl/cu124
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.78.tar.gz (50.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 MB 32.7 MB/s eta 0:00:00
  Running command pip subprocess to install build dependencies
  Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://pypi.ngc.nvidia.com, https://abetlen.github.io/llama-cpp-python/whl/cu124
  Collecting scikit-build-core>=0.9.2 (from scikit-build-core[pyproject]>=0.9.2)
    Downloading scikit_build_core-0.9.6-py3-none-any.whl.metadata (19 kB)
  Collecting packaging>=21.3 (from scikit-build-core>=0.9.2->scikit-build-core[pyproject]>=0.9.2)
    Downloading packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
  Collecting pathspec>=0.10.1 (from scikit-build-core>=0.9.2->scikit-build-core[pyproject]>=0.9.2)
    Downloading pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
  Downloading scikit_build_core-0.9.6-py3-none-any.whl (152 kB)
     ---------------------------------------- 152.3/152.3 kB 4.6 MB/s eta 0:00:00
  Downloading packaging-24.1-py3-none-any.whl (53 kB)
     ---------------------------------------- 54.0/54.0 kB ? eta 0:00:00
  Downloading pathspec-0.12.1-py3-none-any.whl (31 kB)
  Installing collected packages: pathspec, packaging, scikit-build-core
  Successfully installed packaging-24.1 pathspec-0.12.1 scikit-build-core-0.9.6
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  Could not determine CMake version via --version, got '' 'Traceback (most recent call last):\n  File "<frozen runpy>", line 198, in _run_module_as_main\n  File "<frozen runpy>", line 88, in _run_code\n  File "D:\\ComfyUI\\venv\\Scripts\\cmake.EXE\\__main__.py", line 4, in <module>\nModuleNotFoundError: No module named \'cmake\'\n'
  Getting requirements to build wheel ... done
  Running command pip subprocess to install backend dependencies
  Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://pypi.ngc.nvidia.com, https://abetlen.github.io/llama-cpp-python/whl/cu124
  Collecting cmake>=3.21
    Downloading cmake-3.29.5.1-py3-none-win_amd64.whl.metadata (6.1 kB)
  Downloading cmake-3.29.5.1-py3-none-win_amd64.whl (36.2 MB)
     ---------------------------------------- 36.2/36.2 MB 65.2 MB/s eta 0:00:00
  Installing collected packages: cmake
  Successfully installed cmake-3.29.5.1
  Installing backend dependencies ... done
  Running command Preparing metadata (pyproject.toml)
  *** scikit-build-core 0.9.6 using CMake 3.29.5 (metadata_wheel)
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in d:\comfyui\venv\lib\site-packages (from llama-cpp-python) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in d:\comfyui\venv\lib\site-packages (from llama-cpp-python) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in d:\comfyui\venv\lib\site-packages (from llama-cpp-python) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in d:\comfyui\venv\lib\site-packages (from llama-cpp-python) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in d:\comfyui\venv\lib\site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.5)
Building wheels for collected packages: llama-cpp-python
  Running command Building wheel for llama-cpp-python (pyproject.toml)
  *** scikit-build-core 0.9.6 using CMake 3.29.5 (wheel)
  *** Configuring CMake...
  2024-06-17 12:05:30,337 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
  loading initial cache file C:\Users\DEFAUL~1.LIV\AppData\Local\Temp\tmpy2u4tmwa\build\CMakeInit.txt
  -- Building for: Visual Studio 17 2022
  -- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
  -- The C compiler identification is MSVC 19.39.33520.0
  -- The CXX compiler identification is MSVC 19.39.33520.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.2.windows.1")
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
  -- Looking for pthread_create in pthreads
  -- Looking for pthread_create in pthreads - not found
  -- Looking for pthread_create in pthread
  -- Looking for pthread_create in pthread - not found
  -- Found Threads: TRUE
  -- Found OpenMP_C: -openmp (found version "2.0")
  -- Found OpenMP_CXX: -openmp (found version "2.0")
  -- Found OpenMP: TRUE (found version "2.0")
  -- OpenMP found
  -- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4/include (found version "12.4.131")
  -- CUDA found
  CMake Error at C:/Users/Default.LivingRoomPC/AppData/Local/Temp/pip-build-env-9sqow3ao/normal/Lib/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:563 (message):
    No CUDA toolset found.
  Call Stack (most recent call first):
    C:/Users/Default.LivingRoomPC/AppData/Local/Temp/pip-build-env-9sqow3ao/normal/Lib/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
    C:/Users/Default.LivingRoomPC/AppData/Local/Temp/pip-build-env-9sqow3ao/normal/Lib/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
    C:/Users/Default.LivingRoomPC/AppData/Local/Temp/pip-build-env-9sqow3ao/normal/Lib/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCUDACompiler.cmake:131 (CMAKE_DETERMINE_COMPILER_ID)
    vendor/llama.cpp/CMakeLists.txt:411 (enable_language)


  -- Configuring incomplete, errors occurred!

  *** CMake configuration failed
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: 'D:\ComfyUI\venv\Scripts\python.exe' 'D:\ComfyUI\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py' build_wheel 'C:\Users\DEFAUL~1.LIV\AppData\Local\Temp\tmpa1tcm9ca'
  cwd: C:\Users\Default.LivingRoomPC\AppData\Local\Temp\pip-install-gn5_hb5u\llama-cpp-python_11e565ea5e874456a27660546fbad291
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
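
For context: the build gets past the compiler checks and finds the CUDA Toolkit headers (see "Found CUDAToolkit ... 12.4.131" above), but the Visual Studio generator then fails with "No CUDA toolset found". A common cause is that CUDA was installed before the VS 2022 Build Tools C++ workload, so the CUDA MSBuild integration files were never copied into the Build Tools instance. One frequently suggested manual fix is to copy them over yourself; this is only a sketch, with paths assumed from the log above (adjust for your CUDA version and VS edition):

  copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\extras\visual_studio_integration\MSBuildExtensions\*" ^
       "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations"

Reinstalling CUDA after the C++ workload is in place (as suggested later in this thread) achieves the same thing automatically.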

@AspiringPianist

Yes, same issue. I'm using a Python 3.10.6 virtual environment.

@dreambottle

Check this reply:

#1535 (comment)

@LankyPoet
Author

> Check this reply:
>
> #1535 (comment)

Thank you. Not a bad workaround to get going, but I agree with you: I am really hoping we keep seeing updated CUDA builds. New models come out constantly, so it's important to stay current with llama.cpp versions.
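
(The workaround linked in #1535 is not reproduced here. As a general sketch of the prebuilt-wheel route, pip can be told to prefer a binary wheel from the cu124 index over building the newest sdist from source; whether a wheel exists on the index for your Python version is an assumption you would need to verify:)

  pip install llama-cpp-python --prefer-binary --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124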

@javierxio

This is how I got it to work. It took me a day to figure out.

  • Uninstall the standalone Build Tools.
  • From the Visual Studio Installer, add the Desktop development with C++ workload.
  • Reinstall CUDA.
  • Run the pip install again (a sketch follows below).

#1352 (comment)
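
Once the CUDA toolset is visible to Visual Studio, a source build with CUDA enabled is typically driven through CMAKE_ARGS. A minimal sketch for a Windows cmd shell, assuming a recent llama-cpp-python (older releases used -DLLAMA_CUDA=on or -DLLAMA_CUBLAS=on instead of -DGGML_CUDA=on):

  set CMAKE_ARGS=-DGGML_CUDA=on
  pip install llama-cpp-python --no-cache-dir --verbose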
