
Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects #244

Closed
mindwellsolutions opened this issue May 19, 2023 · 11 comments
Labels: build, hardware (Hardware specific issue), llama.cpp (Problem with llama.cpp shared lib)


mindwellsolutions commented May 19, 2023

Shortened ERROR Text:
"Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. [exit code: 1]"
"Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects."

Before trying to install llama-cpp-python I installed CUDA, Ubuntu's build-essential package, and CMake, but I still get this error every time I try to install llama-cpp-python.

Installation methods tried:

  1. pip install llama-cpp-python
  2. sudo pip install llama-cpp-python

I also tried running the dockerfile.txt that gjmulder shared 4 days ago (Link) and got an identical error.

  1. docker build -t dockerfile.txt .

Full Error Text:

```
pip install llama-cpp-python

Defaulting to user installation because normal site-packages is not writeable
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.1.51.tar.gz (1.2 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0
  Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [135 lines of output]
      -- Trying 'Ninja' generator
      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 11.3.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- The CXX compiler identification is GNU 11.3.0
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Configuring done (0.4s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/_cmake_test_compile/build
      -- Trying 'Ninja' generator - success

      Configuring Project
        Working directory:
          /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/_skbuild/linux-x86_64-3.10/cmake-build
        Command:
          /tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/cmake/data/bin/cmake /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.6 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.10.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja -DCMAKE_BUILD_TYPE:STRING=Release

      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 11.3.0
      -- The CXX compiler identification is GNU 11.3.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Configuring done (0.4s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/_skbuild/linux-x86_64-3.10/cmake-build
      [1/2] Generating /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/vendor/llama.cpp/libllama.so
      FAILED: /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/vendor/llama.cpp/libllama.so
      cd /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/vendor/llama.cpp && make libllama.so
      I llama.cpp build info:
      I UNAME_S:  Linux
      I UNAME_P:  x86_64
      I UNAME_M:  x86_64
      I CFLAGS:   -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
      I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
      I LDFLAGS:
      I CC:       cc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
      I CXX:      g++ (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0

      g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -c llama.cpp -o llama.o
      llama.cpp: In function ‘size_t llama_set_state_data(llama_context*, const uint8_t*)’:
      llama.cpp:2686:27: warning: cast from type ‘const uint8_t*’ {aka ‘const unsigned char*’} to type ‘void*’ casts away qualifiers [-Wcast-qual]
       2686 |             kin3d->data = (void *) inp;
            |                           ^~~~~~~~~~~~
      llama.cpp:2690:27: warning: cast from type ‘const uint8_t*’ {aka ‘const unsigned char*’} to type ‘void*’ casts away qualifiers [-Wcast-qual]
       2690 |             vin3d->data = (void *) inp;
            |                           ^~~~~~~~~~~~
      cc  -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native   -c ggml.c -o ggml.o
      In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:99,
                       from ggml.c:189:
      ggml.c: In function ‘ggml_vec_dot_q4_0_q8_0’:
      /usr/lib/gcc/x86_64-linux-gnu/11/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
         63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
            | ^~~~~~~~~~~~~~~
      ggml.c:2187:15: note: called from here
       2187 |         acc = _mm256_fmadd_ps( d, q, acc );
            |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:99,
                       from ggml.c:189:
      /usr/lib/gcc/x86_64-linux-gnu/11/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
         63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
            | ^~~~~~~~~~~~~~~
      ggml.c:2187:15: note: called from here
       2187 |         acc = _mm256_fmadd_ps( d, q, acc );
            |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:99,
                       from ggml.c:189:
      /usr/lib/gcc/x86_64-linux-gnu/11/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
         63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
            | ^~~~~~~~~~~~~~~
      ggml.c:2187:15: note: called from here
       2187 |         acc = _mm256_fmadd_ps( d, q, acc );
            |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:99,
                       from ggml.c:189:
      /usr/lib/gcc/x86_64-linux-gnu/11/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
         63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
            | ^~~~~~~~~~~~~~~
      ggml.c:2187:15: note: called from here
       2187 |         acc = _mm256_fmadd_ps( d, q, acc );
            |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      make: *** [Makefile:186: ggml.o] Error 1
      ninja: build stopped: subcommand failed.
      Traceback (most recent call last):
        File "/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/skbuild/setuptools_wrap.py", line 674, in setup
          cmkr.make(make_args, install_target=cmake_install_target, env=env)
        File "/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/skbuild/cmaker.py", line 697, in make
          self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
        File "/tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/skbuild/cmaker.py", line 742, in make_impl
          raise SKBuildError(msg)

      An error occurred while building with CMake.
        Command:
          /tmp/pip-build-env-1hgf9955/overlay/local/lib/python3.10/dist-packages/cmake/data/bin/cmake --build . --target install --config Release --
        Install target:
          install
        Source directory:
          /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f
        Working directory:
          /tmp/pip-install-2nfpinsr/llama-cpp-python_152a175d248a4cc8898c0a212c9f068f/_skbuild/linux-x86_64-3.10/cmake-build
      Please check the install target is valid and see CMake's output for more information.
      [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
```

@gjmulder (Contributor)

> ggml.c:2187:15: note: called from here
>  2187 |         acc = _mm256_fmadd_ps( d, q, acc );

That's an error where the compiler thinks your hardware supports some Intel CPU acceleration features, but in fact it doesn't. Are you by chance compiling in a VM?
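A quick way to check is to look at the SIMD feature flags the guest CPU actually advertises; since llama.cpp builds with `-march=native`, the compiler enables exactly these. A minimal shell sketch (the flag list is illustrative, based on the intrinsics in the error above, not exhaustive):

```sh
# List the x86 SIMD feature flags the (guest) CPU advertises. With
# -march=native the build targets exactly these; an "inlining failed ...
# _mm256_fmadd_ps" error typically means AVX is visible to the guest
# while FMA is not.
grep -o -w -E 'avx|avx2|fma|f16c' /proc/cpuinfo | sort -u
```

If `fma` is missing while `avx` is present, that matches the diagnosis above.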

@gjmulder added the build, hardware, and llama.cpp labels on May 19, 2023
mindwellsolutions (Author) commented May 19, 2023

> ggml.c:2187:15: note: called from here
>  2187 |         acc = _mm256_fmadd_ps( d, q, acc );
>
> That's an error where the compiler thinks your hardware supports some Intel CPU acceleration features, but in fact it doesn't. Are you by chance compiling in a VM?

Thank you. Yes, exactly: I am running Ubuntu 22.04.2 in VirtualBox. Is there a specific setting I should turn on or off?

@gjmulder (Contributor)

This might help.

mindwellsolutions (Author) commented May 19, 2023

Thank you, that fixed the installation. Everything works perfectly now. Much appreciated :)

@mindwellsolutions changed the title from "Build Fails on Ubuntu 22.04.2 (Tried a fresh install same issue)" to "[SOLVED] Build Fails on Ubuntu 22.04.2 (Tried a fresh install same issue)" on May 19, 2023
@mindwellsolutions changed the title to "[SOLVED] Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects" on May 19, 2023
@abetlen closed this as completed on May 19, 2023
@AceyKubbo

Hey guys, my OS is CentOS 7. When I reinstall and upgrade llama-cpp-python, it shows an error with CMake. How do I fix it?

```
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/tmp/pip-build-env-9dr9_54j/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
    cmkr.make(make_args, install_target=cmake_install_target, env=env)
  File "/tmp/pip-build-env-9dr9_54j/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 697, in make
    self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
  File "/tmp/pip-build-env-9dr9_54j/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 742, in make_impl
    raise SKBuildError(msg)

An error occurred while building with CMake.
  Command:
    /tmp/pip-build-env-9dr9_54j/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
  Install target:
    install
  Source directory:
    /tmp/pip-install-s0impcv3/llama-cpp-python_55123ed92da142ad9f2df4fe097b143f
  Working directory:
    /tmp/pip-install-s0impcv3/llama-cpp-python_55123ed92da142ad9f2df4fe097b143f/_skbuild/linux-x86_64-3.10/cmake-build
Please check the install target is valid and see CMake's output for more information.

[end of output]
```

@gjmulder changed the title from "[SOLVED] Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects" to "Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects" on Jun 8, 2023
gjmulder (Contributor) commented Jun 8, 2023

Please open a new issue and provide the complete build output as per the issue template.

@laurafbec

Hi @mindwellsolutions
I would like to know if you were able to compile llama-cpp on a VM by disabling Hyper-V. Thanks in advance!

mindwellsolutions (Author) commented Jul 13, 2023

> Hi @mindwellsolutions I would like to know if you were able to compile llama-cpp on a VM by disabling Hyper-V. Thanks in advance!

Yes. I disabled PAE/NX and VT-x/AMD-V (Hyper-V) in the VirtualBox settings for the VM, and I have the Paravirtualization Interface set to Default. I also turned off "Hardware virtualization" and disabled nested paging (although I'm not sure if that part is needed).

[screenshots of the VirtualBox "System" settings]
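For anyone who prefers the command line, here is a sketch of the same changes made from the host with VBoxManage, assuming the VM is powered off. The VM name is a placeholder, and option spellings may vary slightly between VirtualBox versions:

```sh
# Sketch: apply the VirtualBox settings described above from the host CLI.
# "Ubuntu-22.04" is a placeholder VM name (see `VBoxManage list vms`);
# run these while the VM is powered off.
VM="Ubuntu-22.04"
VBoxManage modifyvm "$VM" --pae off                  # PAE/NX
VBoxManage modifyvm "$VM" --hwvirtex off             # VT-x/AMD-V
VBoxManage modifyvm "$VM" --nestedpaging off         # nested paging
VBoxManage modifyvm "$VM" --paravirtprovider default # Paravirtualization Interface
```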

@laurafbec

Thanks @mindwellsolutions!! It worked for me!

sejba commented Aug 10, 2023

FYI regarding this:

> I also turned off "Hardware virtualization" and disabled nested paging (although I'm not sure if that part is needed).

Yes, it is needed. I had PAE/NX and VT-x/AMD-V already disabled and the Paravirtualization Interface set to Default, and I was still getting the error no matter what.

Finally, disabling "Nested Paging" did the trick.

HITESH2002-JAIN commented Sep 30, 2023

> This might help.

I have dual-booted my PC and tried building a Docker image on Linux. I did not understand what exactly I need to do to avoid this error. It would be really helpful if you could guide me through the steps.
