
ERROR: Failed building wheel for llama-cpp-python #1534

Closed
1 task done
Song367 opened this issue Apr 25, 2023 · 68 comments
Labels: bug (Something isn't working), stale

Comments


Song367 commented Apr 25, 2023

Describe the bug

Installing llama-cpp-python fails with: ERROR: Failed building wheel for llama-cpp-python

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

  1. pip install -r requirements.txt
  2. ERROR: Failed building wheel for llama-cpp-python

Screenshot

No response

Logs

Collecting llama-cpp-python==0.1.36
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1b/ea/3f2aff10fd7195c6bc8c52375d9ff027a551151569c50e0d47581b14b7c1/llama_cpp_python-0.1.36.tar.gz (1.1 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./tenv/lib/python3.8/site-packages (from llama-cpp-python==0.1.36) (4.5.0)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [94 lines of output]

System Info

GPU
@Song367 Song367 added the bug Something isn't working label Apr 25, 2023
@innocentius

Same error here, running on Ubuntu 18.04.

Wouldn't remove llama-cpp-python break something?


Tom-Neverwinter commented Apr 26, 2023

Is this an AMD CPU?

Not related to the CPU.

@innocentius

Error solved by upgrading to gcc-11. Try that first.


xNul commented Apr 27, 2023

Same error here: oobabooga/one-click-installers#30 (comment)

@BlairSadewitz

> Error solved by upgrading to gcc-11. Try that first.

That's what I did, and the error resolved.


LiemLin commented May 9, 2023

> Error solved by upgrading to gcc-11. Try that first.

> That's what I did, and the error resolved.

Is your operating system CentOS?

@maplessssy

> Error solved by upgrading to gcc-11. Try that first.

> That's what I did, and the error resolved.

gcc-11 did not work for me.

@madeepakkumar1

Upgrading to gcc-11 did not work.


djdanielsson commented May 12, 2023

I am getting the same error, and gcc-11 doesn't do anything (Ubuntu 22.04 and Fedora 37).

@peitianyu

> Same error here, running on Ubuntu 18.04.
>
> Wouldn't remove llama-cpp-python break something?

Have you solved the problem? I'm also running on Ubuntu 18.04.


crearo commented May 14, 2023

Updating to gcc-11 and g++-11 worked for me on Ubuntu 18.04.

Did that using sudo apt install gcc-11 and sudo apt install g++-11.

@robicity

> Error solved by upgrading to gcc-11. Try that first.

Using it on Windows WSL, I additionally had to make a few more installations:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11
pip install --upgrade pip
pip install --upgrade setuptools wheel
sudo apt-get install build-essential

This was all done to install oobabooga on Windows WSL. Here is my complete list for a Windows 10 NVIDIA system:

# Update Ubuntu packages
sudo apt update
sudo apt upgrade

# Download and install Miniconda
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
rm Miniconda3.sh

# IMPORTANT - restart the terminal so it says (base) at the beginning of the line

# Update conda, install wget, create and activate conda environment "textgen"
conda update conda
conda install wget
conda create -n textgen python=3.10.9
conda activate textgen

# Install CUDA libraries
pip3 install torch torchvision torchaudio

# Add PPA for gcc-11, update packages, install gcc-11, g++-11, update pip and setuptools, install build-essential
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11
pip install --upgrade pip
pip install --upgrade setuptools wheel
sudo apt-get install build-essential

# Clone and setup oobabooga text-generation-webui
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Final update of Ubuntu packages
sudo apt update
sudo apt upgrade


tanhm12 commented May 19, 2023

> Updating to gcc-11 and g++-11 worked for me on Ubuntu 18.04.
>
> Did that using sudo apt install gcc-11 and sudo apt install g++-11.

This should be the accepted solution.
gcc-11 alone will not work; it needs both gcc-11 and g++-11.
After installing the two above, run CXX=g++-11 CC=gcc-11 pip install -r requirements.txt and it should work (at least it did for me).
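The CC/CXX override above can be sketched in Python (a minimal sketch; it assumes gcc-11 and g++-11 are already installed, and the actual pip call is left commented out):

```python
import os

# Build an environment that points pip's build backend at the newer
# toolchain, without changing the system default compiler.
build_env = dict(os.environ, CC="gcc-11", CXX="g++-11")

# The actual install step would then be:
# import subprocess
# subprocess.run(["pip", "install", "-r", "requirements.txt"],
#                env=build_env, check=True)
print(build_env["CC"], build_env["CXX"])
```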

@itgoldman

Nothing worked until I ran this: CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
(from pr #120)


baphilia commented May 22, 2023

> nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
> (from zylon-ai/private-gpt#120)

Thank you @itgoldman.
This worked on Windows (thanks ChatGPT):

set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
set "FORCE_CMAKE=1"
pip install llama-cpp-python --no-cache-dir

@parasharamit

Worked for me on Ubuntu 18.04:

sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 90 --slave /usr/bin/g++ g++ /usr/bin/g++-11 --slave /usr/bin/gcov gcov /usr/bin/gcov-11
sudo apt-get update

pip install -r requirements.txt

@gonewilds

This worked for me.

@Tw0sheds

@robicity with the save - build-essential was the package for me, but I also tried a few methods mentioned previously so they could help you:

sudo apt-get install build-essential
sudo apt-get install gcc-11 g++-11

gcc-11 or gcc-12, I don't think it matters. With those installed, you can rerun your pip command.


B0-B commented Jun 2, 2023

Hey everyone,
I installed a fresh ubuntu and this sequence solved this issue:

Update apt package manager and change into home directory

sudo apt-get update && cd ~ 

Install pre-requisites

sudo apt install curl &&
sudo apt install cmake -y &&
sudo apt install python3-pip -y &&
pip3 install testresources # dependency for launchpadlib

gcc-11 and g++-11 also need to be installed to overcome this llama-cpp-python compilation issue:

sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test &&
sudo apt install -y gcc-11 g++-11 &&
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11 &&
pip3 install --upgrade pip &&
pip3 install --upgrade setuptools wheel &&
sudo apt-get install build-essential &&
gcc-11 --version # check if gcc works

Download the WebUI installer from repository and unpack it

wget https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip &&
unzip oobabooga_linux.zip && 
rm oobabooga_linux.zip

Change into the downloaded folder and run the installer; this will download the necessary files into a single folder:

cd oobabooga_linux &&
bash start_linux.sh

Hope this helps!

@DJJones66

> Hey everyone, I installed a fresh ubuntu and this sequence solved this issue: […]

Perfect, flawless. Someone needs to add this to the docs


DavidInRacine commented Jun 6, 2023

Same issue here in KDE Neon with GCC 11.3.0 and G++ 11.3.0, and also in Manjaro with GCC 12.2.x. In Manjaro, oobabooga complained that it could not find a GCC 9 compiler. None of the solutions in this thread, nor in the oobabooga Reddit thread titled 'Failed building wheel for llama-cpp-python', worked.

Curiously, I had no problem rolling oobabooga with all wheels attached in Linux Mint 21.1. I don't remember which compiler version is in Linux Mint 21.1, probably GCC 11.3.0. I did not have to jump through any hoops nor whisper sacred incantations while shaking a chicken foot and turning around three times with my eyes closed.

I don't think anyone really got to the bottom of this Llama-cpp-python wheel failure issue in a systematic way, especially when one Debian derivative works (Linux Mint) and another Debian variant (KDE Neon) does not.

15:49 - Edited to correct a typo and improve legibility.


LeafmanZ commented Jun 7, 2023

God bless you, this worked.

> nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
> (from imartinez/privateGPT#120)
>
> Thank you @itgoldman . This worked on Windows (thanks chatgpt):
>
> set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
> set "FORCE_CMAKE=1"
> pip install llama-cpp-python --no-cache-dir

Iakovenko-Oleksandr added a commit to Iakovenko-Oleksandr/text-generation-webui that referenced this issue Jun 8, 2023
The commands as they are did not work in my Windows Anaconda prompt (compilation failed); with the changes suggested in oobabooga#1534 (comment) it worked.
@nightwalker89

> Updating to gcc-11 and g++-11 worked for me on Ubuntu 18.04.
> Did that using sudo apt install gcc-11 and sudo apt install g++-11.

> This should be the accepted solution. gcc-11 alone would not work, it needs both gcc-11 and g++-11. After installing the two above, run CXX=g++-11 CC=gcc-11 pip install -r requirements.txt and it should work (at least for me).

It works for me, thank you

@atharvapatiil

First, install:
conda install -c conda-forge cxx-compiler
Then try running pip install llama-cpp-python==0.1.48.
It worked for me, as it picks the C++ compiler from conda instead of the root machine, so you will be able to install llama-cpp-python without changing your system compiler version.
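Why the conda compiler wins can be sketched with PATH ordering (the /opt/conda prefix and the env name textgen below are illustrative assumptions, not from this thread):

```python
import posixpath

# An active conda env puts its bin directory ahead of the system ones
# on PATH, so a lookup for "c++" resolves to conda-forge's compiler first.
path_entries = ["/opt/conda/envs/textgen/bin", "/usr/bin"]  # hypothetical PATH
compiler = posixpath.join(path_entries[0], "c++")
print(compiler)
```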


VapoZ commented Jun 14, 2023

For Windows:

  1. Make sure you're using Visual Studio 2022 and not just Visual Studio Code
  2. Open the "x64 Native Tools Command Prompt for Visual Studio 2022" (you can search for it after pressing the Windows key)
  3. run pip install llama-cpp-python==0.1.48
  4. ...
  5. profit

@sanjana-sudo

> @robicity with the save - build-essential was the package for me, but I also tried a few methods mentioned previously so they could help you:
> sudo apt-get install build-essential
> sudo apt-get install gcc-11 g++-11
> gcc11 or 12, it doesn't matter I don't think. with those installed you can rerun your pip command

On a clean install of Ubuntu 22.04 LTS, just adding sudo apt-get install build-essential was enough for me. On my recent 22.04 (installed July 2023), gcc was already version 11.3.0.

I ran the ./start_linux.sh script after the first error and it failed to reinstall (probably because enough of it was installed by the time of the llama error that the start script no longer worked).

In lieu of deleting everything and starting over, I just ran update_linux.sh instead; it picked up from where it died the first time and appears to be running correctly.

@filmo this worked like a charm! Thank you

@syedhabib53

> nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
> (from imartinez/privateGPT#120)
>
> Thank you @itgoldman . This worked on Windows (thanks chatgpt):
>
> set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
> set "FORCE_CMAKE=1"
> pip install llama-cpp-python --no-cache-dir

In addition to this, I added the following to my environment (.bashrc or .zshrc) to successfully install llama-cpp-python on Ubuntu 22.04:

export CUDA_HOME=/usr/local/cuda-12.2
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH


bxdoan commented Aug 24, 2023

Nice @syedhabib53, this worked on my side too.

@AnakinChou

g++-11 works for me.
Before pip install -r requirements.txt, try something like:
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

@srinjcha

For Ubuntu 22.04.2, I had to do the following, which worked for me:

sudo apt update
sudo apt-get install build-essential
sudo apt-get install ninja-build
pip install -r requirements.txt


RIandAI commented Sep 8, 2023

> First, install conda install -c conda-forge cxx-compiler And then try running pip install llama-cpp-python==0.1.48 It worked for me. As it will pick c++ compiler from conda instead of root machine. So, without changing compiler version you will able to install lamma

This worked for me on Fedora 38.

@YerongLi

> nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48 (from pr #120)

This is the fix for me; gcc-11 with g++-11 did not resolve it.


luckyops commented Sep 28, 2023

I use Windows and needed to install VS 2022 with the C++ and Python workloads. It worked for me.

@thomaswengerter

For CentOS 7:
Follow the instructions here to update to gcc 11:
zylon-ai/private-gpt#644 (comment)

@AleNunezArroyo

Unfortunately, none of the options worked for me. However, this command did, after changing the CUDA version (find yours with locate libcudart.so):

CUDACXX=/usr/local/cuda-12/bin/nvcc CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade

Thanks to the user who solved this. Source

@github-actions github-actions bot added the stale label Jan 4, 2024

github-actions bot commented Jan 4, 2024

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

@github-actions github-actions bot closed this as completed Jan 4, 2024

AFTAB685 commented Jan 31, 2024

Installing the VS build tools solved it for me on Windows: https://visualstudio.microsoft.com/visual-cpp-build-tools/

@rigvedrs

Since I am working on both Kaggle and Colab, I realised that I had to use two different solutions for the same problem.

For Colab, the solution was to run

!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install  llama-cpp-python --no-cache-dir

For Kaggle, the solution was to run

!set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
!set "FORCE_CMAKE=1"
!pip install llama-cpp-python --no-cache-dir

The reason lies in the difference between the environments themselves: Colab is using the cuBLAS library, whereas Kaggle is using the OpenBLAS library for GPU acceleration. This is why different solutions work for different people, as has already been noted here. So figure out which library your system is using and try the corresponding solution.

If none of the above solutions work, you can try upgrading to gcc-11, which seems to be another common fix:

!sudo apt install gcc-11

!sudo apt install g++-11

I lost a lot of time figuring out the solution to this, hope this saves yours!
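The CuBLAS-vs-OpenBLAS split described above can be sketched as a tiny chooser (a hypothetical helper, not from this thread; it uses the presence of nvcc on PATH as a rough proxy for a usable CUDA toolchain):

```python
import shutil

def pick_cmake_args() -> str:
    # GPU build via cuBLAS when a CUDA compiler is visible (Colab-style),
    # otherwise a CPU build via OpenBLAS (Kaggle-style).
    if shutil.which("nvcc"):
        return "-DLLAMA_CUBLAS=on"
    return "-DLLAMA_OPENBLAS=on"

print(pick_cmake_args())
```

The returned string would then be passed as CMAKE_ARGS to pip, as in the commands above.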

@mike-fischer-ml

> nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
> (from imartinez/privateGPT#120)
>
> Thank you @itgoldman . This worked on Windows (thanks chatgpt):
>
> set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
> set "FORCE_CMAKE=1"
> pip install llama-cpp-python --no-cache-dir

It's still not working for me.

Thank you, this helped solve my llama-cpp install issue on Pop!_OS (Ubuntu).

@KayvanShah1

What is already installed on my system?

  • OS: Windows 11
  • Python 3.10.10
  • MinGW64 13.2.0-rt_v11-rev1
  • cmake 3.29.0-rc4
  • Microsoft Visual C++ 2015-2022 Redistributable (x64) 14.38.33135

I still get the error below:

Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
      *** Configuring CMake...
      2024-03-20 23:36:24,626 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
      loading initial cache file C:\Users\shahk\AppData\Local\Temp\tmpa6qxmrbl\build\CMakeInit.txt
      -- Building for: NMake Makefiles
      CMake Error at CMakeLists.txt:3 (project):
        Running

         'nmake' '-?'

        failed with:

         no such file or directory


      CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
      CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

@fahmidme

For me, I was running into this error while trying to run:

pip install llama-index-llms-sagemaker-endpoint

What fixed it for me:

sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

Optional backstory: I kept getting the visionOS verifying popup bug for days until I got frustrated and deleted Xcode entirely. Rookie mistake. Had to install Xcode again and properly link it, etc.


RISHIKKASULA commented Apr 16, 2024

Facing the same issue, trying to run the OpenAI setup by imartinez. Need a solution for the following:

(screenshots attached: 2024-04-16 162400, 162133, 162317)

@kimonk0299

I am using CentOS 7 and this worked for me!

sudo yum install centos-release-scl
sudo yum install devtoolset-11-gcc devtoolset-11-gcc-c++ devtoolset-11-gcc-gfortran
scl enable devtoolset-11 bash

sudo yum install make automake cmake

export CC=/opt/rh/devtoolset-11/root/usr/bin/gcc
export CXX=/opt/rh/devtoolset-11/root/usr/bin/g++

pip install --upgrade pip setuptools wheel

CMAKE_ARGS="-DUSE_SOME_OPTION=ON" pip install llama-cpp-python

@amir2628

I am using Windows and had the same issue. I could install an old version of llama-cpp-python, but I had issues with the new GGUF models from Hugging Face.
What worked for me was:

pip install --no-cache-dir llama-cpp-python==0.2.77 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124/

Notice the cuda version in the link. My version was 12.5 and it still worked.
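The cuXYZ segment of that URL encodes the CUDA version. As a hedged sketch, a hypothetical helper to derive the index URL from a version string:

```python
def cuda_wheel_index(cuda_version: str) -> str:
    # Map a CUDA version like "12.4" to the prebuilt-wheel index tag
    # used above (cu124). Hypothetical helper for illustration only.
    major, minor = cuda_version.split(".")[:2]
    return f"https://abetlen.github.io/llama-cpp-python/whl/cu{major}{minor}/"

print(cuda_wheel_index("12.4"))
```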

@akash-sardar

nothing worked until i ran this CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
(from zylon-ai/private-gpt#120)

Thank you @itgoldman . This worked on Windows (thanks chatgpt):

set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
set "FORCE_CMAKE=1"
pip install llama-cpp-python --no-cache-dir

This worked. Thanks!

@MINJIK01

> I am using windows and had the same issue. I could install old version of llama-cpp-python but I had issues with the new gguf models from hugging face. What worked for me was:
>
> pip install --no-cache-dir llama-cpp-python==0.2.77 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124/
>
> Notice the cuda version in the link. My version was 12.5 and it still worked.

This worked for me! Thanks a lot!


Kareem21 commented Oct 6, 2024

> Building wheel for llama-cpp-python (pyproject.toml) ...

Does anyone else get stuck here? My program just stops right here and doesn't continue.


np-n commented Oct 29, 2024

> pip install --no-cache-dir llama-cpp-python==0.2.77 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124/

Thanks a lot. pip install --no-cache-dir llama-cpp-python==0.2.85 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122 worked for me. Currently I am installing llama-cpp-python on Ubuntu with an RTX 3090 Ti GPU.
