
Warning: "The installed version of bitsandbytes was compiled without GPU support." #112

Closed
sbrnaderi opened this issue Jan 2, 2023 · 59 comments


@sbrnaderi

Issue

When I run the following line of code:

pipe = pipeline(model=name, model_kwargs= {"device_map": "auto", "load_in_8bit": True}, max_new_tokens=max_new_tokens)

I get the following warning message:

"The installed version of bitsandbytes was compiled without GPU support"

and the following error at the end:

AttributeError: /miniconda3/envs/bits/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats

My setup

Ubuntu 18.04 on Windows WSL
CUDA version: 11.4 (I can confirm this with the nvidia-smi command)
PyTorch installed using conda: conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
Installed the following Python packages after installing PyTorch:

My hardware

NVIDIA GPU RTX2060 SUPER (8GB)
AMD CPU (12 cores)

My investigations so far

torch.cuda.is_available() --> returns True

from bitsandbytes.cextension import CUDASetup
import torch
lib = CUDASetup.get_instance().lib
lib.cadam32bit_g32

This returns the following error:
AttributeError: /miniconda3/envs/bits/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_g32

If I run

CUDASetup.get_instance().generate_instructions()
CUDASetup.get_instance().print_log_stack()

I get:

CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
CUDA SETUP: CUDA runtime path found: /home/saber/miniconda3/envs/bits/lib/libcudart.so
CUDA SETUP: Loading binary /home/saber/miniconda3/envs/bits/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
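
For completeness, the shell steps that Solution 2 describes look like this (a sketch; /usr/lib/wsl/lib is just an example of where libcuda.so may live on WSL, use whatever directory the find command reports):

# Solution 2a: locate the driver library
find / -name libcuda.so 2>/dev/null
# Solution 2b: add the directory that was found to the loader path (example path for WSL)
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/wsl/lib
# Solution 2c: make the change permanent
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/wsl/lib' >> ~/.bashrc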
@sbrnaderi
Author

I managed to get it to work using Docker + the VS Code Dev Containers extension. Here is a repo that I made in case someone else has the same issue and is looking for a quick solution. This repo can be used as a basis for your project.

@abacaj

abacaj commented Jan 11, 2023

Hi @sbrnaderi, what was the change that fixed the issue? I had to use another version (https://pypi.org/project/bitsandbytes-cuda117/) in order for it to work on Linux.

@sbrnaderi
Author

Hi @abacaj, in my Dockerfile I start from the latest PyTorch Docker image and install bitsandbytes using pip install bitsandbytes, and this seems to work.

Initially, I tried to install PyTorch and bitsandbytes on Ubuntu 18.04 (on Windows WSL) and got the error that I mentioned above. Changing the bitsandbytes CUDA version to the one reported by conda list | grep cudatoolkit did not seem to help in my case. Note that this issue might be specific to Ubuntu on Windows WSL.

@Coderik

Coderik commented Jan 14, 2023

I am getting the same error with the following Dockerfile:

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime

RUN apt update && apt install -y wget

RUN pip install bitsandbytes

RUN wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

@abacaj

abacaj commented Jan 15, 2023

I am getting the same error with the following Dockerfile:

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime

RUN apt update && apt install -y wget

RUN pip install bitsandbytes

RUN wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

try pip install bitsandbytes-cuda116

@detkov

detkov commented Jan 17, 2023

I am getting the same error with the following Dockerfile:

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime

RUN apt update && apt install -y wget

RUN pip install bitsandbytes

RUN wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

try pip install bitsandbytes-cuda116

It helped, but this way the bitsandbytes-cuda116 version is 0.26.0.post2, and there is a warning:
WARNING! This version of bitsandbytes is deprecated. Please switch to pip install bitsandbytes and the new repo: https://github.com/TimDettmers/bitsandbytes

@AbstractQbit

Another workaround is to symlink libcuda into your env

ln -s /usr/lib/wsl/lib/libcuda.so [path to your env here]/lib/libcuda.so
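
To fill in the placeholder: the env path is whatever sys.prefix reports for the environment you run from. A sketch, assuming a conda env:

python -c "import sys; print(sys.prefix)"     # prints the env root, e.g. /home/you/miniconda3/envs/myenv
ln -s /usr/lib/wsl/lib/libcuda.so "$(python -c 'import sys; print(sys.prefix)')/lib/libcuda.so"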

@Jehuty-ML

Another workaround is to symlink libcuda into your env

ln -s /usr/lib/wsl/lib/libcuda.so [path to your env here]/lib/libcuda.so

Thanks a lot. I successfully ran it on WSL2.

@yonikremer

@AbstractQbit That didn't work for me...
My libcuda.so is located elsewhere, and I changed /usr/lib/wsl/lib/libcuda.so to the correct path, yet I still have this issue.

@lanlanabcd

I have the same problem, and it seems that none of the solutions above works for me.

My working environment is:
Ubuntu 18.04
CUDA Version: 11.2
A40 GPU

@lanlanabcd

lanlanabcd commented Mar 9, 2023

It seems to work after I replace lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cpu.so with lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cuda112.so
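
A rough sketch of that workaround (the env path below is an assumption, and exact file names can differ between bitsandbytes releases, so keep a backup):

cd /path/to/your/env/lib/python3.8/site-packages/bitsandbytes
cp libbitsandbytes_cpu.so libbitsandbytes_cpu.so.bak      # back up the CPU-only library
cp libbitsandbytes_cuda112.so libbitsandbytes_cpu.so      # use the CUDA 11.2 build in its place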

@liyaxin999

Hi @abacaj, in my Dockerfile I start from the latest PyTorch Docker image and install bitsandbytes using pip install bitsandbytes, and this seems to work.

Initially, I tried to install PyTorch and bitsandbytes on Ubuntu 18.04 (on Windows WSL) and got the error that I mentioned above. Changing the bitsandbytes CUDA version to the one reported by conda list | grep cudatoolkit did not seem to help in my case. Note that this issue might be specific to Ubuntu on Windows WSL.

Hi @sbrnaderi, thank you so much for sharing the Dockerfile. In my case, to get my Docker container running, I had to set:
RUN echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/' >> ~/.bashrc

@ThatCoffeeGuy

Same issue here on a fresh installation. I don't understand why a tool used for machine learning has its default version compiled without GPU support.

@liyaxin999

liyaxin999 commented Mar 14, 2023

Same issue here on a fresh installation. I don't understand why a tool used for machine learning has its default version compiled without GPU support.

It could be the wrong CUDA version, or it cannot find the correct CUDA path.
In my case it could not find the correct CUDA path (it was looking for libcudart.so), so I just exposed the file path and it works:
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/' >> ~/.bashrc
Hope it helps.

@ThatCoffeeGuy

Thanks! I reinstalled CUDA and it worked on Fedora. The warning message is misleading; it suggests something completely different from what is actually happening in this scenario.

This helped me:

https://unix.stackexchange.com/questions/716248/how-do-i-use-cuda-toolkit-nvcc-11-7-1-on-fedora-36

He is installing from the Fedora 35 repo onto 36; I installed from the 36 repo onto 37 and it worked.

@njacobson-nci

njacobson-nci commented Mar 16, 2023

I am getting the same error with the following Dockerfile:

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime

RUN apt update && apt install -y wget

RUN pip install bitsandbytes

RUN wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

I changed this Dockerfile a bit and still ran into the same issue. I'm seeing problems in containers built off this JupyterLab repo on more than one VM with P100s and virtualized A100s. EDIT: the VM issues I was seeing were related to permissions when switching users within the Jupyter images and were unrelated to bitsandbytes. This Dockerfile still has issues even when running "python -m bitsandbytes" instead of the check_bnb_install.py script.

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime

RUN apt update && apt install -y wget

RUN pip install bitsandbytes

RUN wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py 

CMD nvidia-smi && python check_bnb_install.py

docker build -t bitsandbytes_test:latest .
docker run --gpus all bitsandbytes_test:latest > test_out.txt 2>&1

Thu Mar 16 05:18:41 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03   Driver Version: 510.108.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:0B:00.0 Off |                    0 |
| N/A   27C    P0    25W / 250W |      4MiB / 16384MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-PCIE...  Off  | 00000000:13:00.0 Off |                    0 |
| N/A   26C    P0    25W / 250W |      4MiB / 16384MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib'), PosixPath('/usr/local/nvidia/lib64')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: Highest compute capability among GPUs detected: 6.0
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
Traceback (most recent call last):
  File "/workspace/check_bnb_install.py", line 14, in <module>
    adam.step()
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 140, in wrapper
    out = func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/bitsandbytes/optim/optimizer.py", line 263, in step
    self.update_step(group, p, gindex, pindex)
  File "/opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/bitsandbytes/optim/optimizer.py", line 456, in update_step
    F.optimizer_update_32bit(
  File "/opt/conda/lib/python3.10/site-packages/bitsandbytes/functional.py", line 752, in optimizer_update_32bit
    if optimizer_name not in str2optimizer32bit:
NameError: name 'str2optimizer32bit' is not defined

@xpgx1

xpgx1 commented Mar 18, 2023

Hmm, I'm stuck at the last step of a WSL installation, too. The only thing throwing errors is the bitsandbytes package: it either has no GPU support (which is hilarious to me in this case) or is deprecated. The symlink workaround is something I'm not overly fond of doing. How do you replace the deprecated file once you have "updated" it in WSL (running the latest Ubuntu as a distro)?

@oobabooga

This solved it for me:

oobabooga/text-generation-webui#400 (comment)

@FrancescoSaverioZuppichini

I tried a lot of stuff but I still have this issue (in a Dockerfile based on ghcr.io/pytorch/pytorch-nightly):

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib'), PosixPath('/usr/local/nvidia/lib64')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.

Not sure why it is so hard to use a tool made for CUDA on a CUDA-enabled machine; is there a specific reason? I'd love to help, but I am a noob at this stuff.

@LeoArtaza

LeoArtaza commented Mar 21, 2023

try pip install bitsandbytes-cuda116

This is what worked for me, but because I installed torch with CUDA 11.7, I assumed I had to change it to "bitsandbytes-cuda117". No more warning, so for now it seems that was the solution.

Edit: I got a new error message after initializing the environment again. Apparently this issue has to do with PyTorch just updating to 2.0, so the solution that finally worked for me was installing the last version of PyTorch prior to 2.0, as described here: oobabooga/text-generation-webui#400 (comment).

@FrancescoSaverioZuppichini

I think I figured out why it is not working and how to fix it, people.

So, bitsandbytes will use the CUDA version installed on your system, while torch ships with its own CUDA version. To be sure you are using the right CUDA version, e.g. 11.8, you can use Docker with the NVIDIA container runtime and create a container with the correct CUDA version, e.g. in a Dockerfile:

FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
....
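
A quick way to sanity-check the two versions on a given machine (a sketch; exact paths depend on your install):

# CUDA runtime bundled with torch
python -c "import torch; print(torch.version.cuda)"
# CUDA libraries that bitsandbytes can see through the loader
echo $LD_LIBRARY_PATH
ldconfig -p | grep libcudart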

@carlose2108

carlose2108 commented Mar 22, 2023

Hi there!

Could anybody solve this issue?

If someone has a Dockerfile, maybe they could provide it here to help us understand how to solve this annoying issue.

Thanks a lot.

@FrancescoSaverioZuppichini

FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
ARG DEBIAN_FRONTEND=noninteractive
# since bitsandbytes will use the installed CUDA version, I have pinned it to 11.8 and cannot easily use torch 2.0 or the NVIDIA PyTorch containers
RUN apt-get update && apt-get install -y \
    git \
    curl \
    software-properties-common \
    && add-apt-repository ppa:deadsnakes/ppa \
    && apt install -y python3.10 \
    && rm -rf /var/lib/apt/lists/* \
    && curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 \
    && python3.10 -m pip install bitsandbytes
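
To try it out, something along these lines should work (the image name is arbitrary, and torch still needs to be installed inside before loading models):

docker build -t bnb-cuda118 .
docker run --rm -it --gpus all bnb-cuda118 bash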

@kno10

kno10 commented Mar 23, 2023

LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/ python script.py

This helped for me, although this standard path should be searched by default.

@FrancescoSaverioZuppichini

I strongly believe it's better to just use Docker so you don't mess up your host CUDA version (especially if you are using Windows 🤮 and you game on it).

@njacobson-nci

Does bitsandbytes require the nvidia development container as a base instead of runtime?

This container works:

FROM nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python3.9 
RUN apt-get install -y python3-pip

RUN pip install --no-cache-dir torch torchvision torchaudio torchviz --extra-index-url https://download.pytorch.org/whl/cu116
RUN pip install bitsandbytes==0.37.2

CMD nvidia-smi && python3 -m bitsandbytes

However using the base image
nvidia/cuda:11.6.2-cudnn8-runtime-ubuntu20.04
Fails

It seems like the pytorch image I was using before is also starting from the nvidia runtime image.
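
One way to compare what the two base images actually ship, in case the missing piece is the CUDA runtime library (a sketch using the tags from above):

docker run --rm nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04 bash -c "find / -name 'libcudart*' 2>/dev/null"
docker run --rm nvidia/cuda:11.6.2-cudnn8-runtime-ubuntu20.04 bash -c "find / -name 'libcudart*' 2>/dev/null"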

@FrancescoSaverioZuppichini

@njacobson-nci the runtime one works for me; maybe try CUDA 11.8.

@njacobson-nci

@FrancescoSaverioZuppichini Same thing with CUDA 11.8.
This works, but using FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu20.04 gives an error (it can't find cudart.so, and str2optimizer32bit is undefined).

FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python3.9 
RUN apt-get install -y python3-pip

RUN pip install --no-cache-dir torch torchvision torchaudio torchviz --extra-index-url https://download.pytorch.org/whl/cu118
RUN pip install bitsandbytes==0.37.2

CMD nvidia-smi && python3 -m bitsandbytes

@satyajitghana

Same here; it's working on the devel image, but failing on the runtime image.

@penthoy

penthoy commented May 22, 2023

Possible solution:
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
Run the above first (it took a long time, even with the help of GPT-4, to find this solution).

This will add the path specific to WSL into the search path.

@sambar1729

Wow, this issue has been an amazing time sink, especially as I am trying to place this in a GPU Docker image. I hope this gets streamlined in the next few weeks.

@satyajitghana

I think you should be able to fix it if you use the GPU at docker build time, something like this: https://stackoverflow.com/a/61737404/13156539
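
The gist of that answer, as I understand it: make the NVIDIA runtime the default in /etc/docker/daemon.json so the GPU is visible during docker build (a sketch; back up the file first and adjust to your setup):

# after adding  "default-runtime": "nvidia"  to /etc/docker/daemon.json:
sudo systemctl restart docker
docker build -t bitsandbytes_test:latest .    # nvidia-smi should now work inside RUN steps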

@chirico85

FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python3.9
RUN apt-get install -y python3-pip

RUN pip install --no-cache-dir torch torchvision torchaudio torchviz --extra-index-url https://download.pytorch.org/whl/cu118
RUN pip install bitsandbytes==0.37.2

CMD nvidia-smi && python3 -m bitsandbytes

So, for everyone who is also working on running the package in Docker: on Ubuntu at least, I had to make sure that my local CUDA version was the same as or newer than the one installed with the Docker image. So for the example above, the local CUDA needs to be >= 11.8.

I am trying to create the Docker container on Windows with the above Dockerfile. nvidia-smi shows CUDA Version: 11.6, which is why I changed cuda:11.8.0 above to cuda:11.6.0 and https://download.pytorch.org/whl/cu118 to https://download.pytorch.org/whl/cu116.

After building the image and then running it with docker run -it --name containername --gpus all -p 3000:3000 imagename, I get the following:


===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py:107: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 103: integrity checks failed (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
/usr/local/lib/python3.8/dist-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++ DEBUG INFORMATION +++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++ POTENTIALLY LIBRARY-PATH-LIKE ENV VARS ++++++++++
'LIBRARY_PATH': '/usr/local/cuda/lib64/stubs'
'HTTPS_PROXY': mycreds
'no_proxy': '.test.example.com,.example.org,127.0.0.0/8'
'LD_LIBRARY_PATH': '/usr/local/nvidia/lib:/usr/local/nvidia/lib64'
'NO_PROXY': '.test.example.com,.example.org,127.0.0.0/8'
'https_proxy': mycreds
'http_proxy': mycreds
'HTTP_PROXY': mycreds
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

WARNING: Please be sure to sanitize sensible info from any such env vars!

++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
COMPILED_WITH_CUDA = False
COMPUTE_CAPABILITIES_PER_GPU = ['6.1']
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Running a quick check that:
+ library is importable
+ CUDA function is callable

Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 103: integrity checks failed

Above we output some debug information. Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose ...

Has anyone experienced this?

@jonataslaw

Possible solution: export LD_LIBRARY_PATH=/usr/lib/wsl/lib:/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH. Run the above first (it took a long time, even with the help of GPT-4, to find this solution).

This will add the path specific to WSL into the search path.

On Windows + WSL, only this works for me.

@FrancescoSaverioZuppichini

Wow, this issue has been an amazing time sink, especially as I am trying to place this in a GPU Docker image. I hope this gets streamlined in the next few weeks.

It is, but you just need to use a container with the correct CUDA version.

@chirico85

In my case I had to change some versions in:
FROM nvidia/cuda:11.0.3-cudnn8-devel-ubuntu20.04
RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

@changye-chen

Another workaround is to symlink libcuda into your env

ln -s /usr/lib/wsl/lib/libcuda.so [path to your env here]/lib/libcuda.so

That works for me.

@julian-tonita

I saw this warning when doing docker run on an image where I had installed bitsandbytes using RUN pip install bitsandbytes. I found that the error went away when I did docker run --gpus=all .... Not sure if that will help anyone else, but figured I'd mention it as I didn't see it mentioned elsewhere.

@greeshmasmenon

Another workaround is to symlink libcuda into your env

ln -s /usr/lib/wsl/lib/libcuda.so [path to your env here]/lib/libcuda.so

That works for me.

What do you mean by [path to your env here]?

@FrancescoSaverioZuppichini

Fam, do not do weird symlinks on your OS, use Docker!

@sx8469

sx8469 commented Sep 13, 2023

I'm having the same issue but I'm working on a Databricks notebook:

These are the versions of the packages I have:
accelerate==0.21.0
bitsandbytes==0.40.2
transformers==4.31.0
torch==1.13.1

As I mentioned, I'm running the code in a Databricks notebook, where the CUDA version is 11.4, on an A100 GPU.

For me as well, torch.cuda.is_available() returns True.

None of the solutions worked for me; can someone please help?

And while importing bitsandbytes I get this output:

False
'CUDASetup' object has no attribute 'cuda_available'
[REDACTED]/python/lib/python3.9/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "

And the output for python -m bitsandbytes is:

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 188, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.9/runpy.py", line 147, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/usr/lib/python3.9/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cextension.py", line 13, in <module>
    setup.run_cuda_setup()
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 120, in run_cuda_setup
    binary_name, cudart_path, cc, cuda_version_string = evaluate_cuda_setup()
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 337, in evaluate_cuda_setup
    cudart_path = determine_cuda_runtime_lib_path()
  File "[REACTED]python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 295, in determine_cuda_runtime_lib_path
    cuda_runtime_libs.update(find_cuda_lib_in(value))
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 232, in find_cuda_lib_in
    resolve_paths_list(paths_list_candidate)
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 227, in resolve_paths_list
    return remove_non_existent_dirs(extract_candidate_paths(paths_list_candidate))
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 201, in remove_non_existent_dirs
    raise exc
  File "[REACTED]/python/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 197, in remove_non_existent_dirs
    if path.exists():
  File "/usr/lib/python3.9/pathlib.py", line 1414, in exists
    self.stat()
  File "/usr/lib/python3.9/pathlib.py", line 1222, in stat
    return self._accessor.stat(self)
PermissionError: [Errno 13] Permission denied: '/databricks/spark/scripts/mlflow_python.sh'

@vgudavarthi

I ran into this as well; in my case it turned out that I had installed a PyTorch build without GPU support. Reinstalling PyTorch with GPU support solved it.
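
A quick check for that case (the index URL below is just an example for CUDA 11.8; pick the one matching your setup):

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
# "None False" means a CPU-only torch build; reinstall a CUDA-enabled one, e.g.:
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118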

@swumagic

Bitsandbytes did not support Windows before, but my method can make it work on Windows. (yuhuang)
1. Open the folder J:\StableDiffusion\sdwebui, click the folder's address bar and enter CMD (or press WIN+R, type CMD, press Enter), then cd /d J:\StableDiffusion\sdwebui

2. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes

3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows

4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl

Replace the SD venv directory (the folder containing python.exe, here J:\StableDiffusion\sdwebui\py310) with your own.

@swumagic

Or, if you are on a Linux distribution (Ubuntu, macOS, etc.) and your CUDA version is 11.x:

Bitsandbytes can support Ubuntu. (yuhuang)
1. Open the folder J:\StableDiffusion\sdwebui, click the folder's address bar and enter CMD (or press WIN+R, type CMD, press Enter), then cd /d J:\StableDiffusion\sdwebui

2. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes

3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows

4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/TimDettmers/bitsandbytes/releases/download/0.41.0/bitsandbytes-0.41.0-py3-none-any.whl

Replace the SD venv directory (the folder containing python.exe, here J:\StableDiffusion\sdwebui\py310) with your own.

@sx8469

sx8469 commented Nov 13, 2023

(Quoting my earlier comment above.)

[SOLVED] It turns out the Databricks cluster we were using was a multi-user cluster, which only has user-level access; once I switched to a single-user cluster it worked fine because it has admin-level access.


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

@ravindranantony

It seems to work after I replace lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cpu.so with lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cuda112.so

Where could I find the cuda112 file?

@lunaryan

Possible solution: export LD_LIBRARY_PATH=/usr/lib/wsl/lib:/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH. Run the above first (it took a long time, even with the help of GPT-4, to find this solution).

This will add the path specific to WSL into the search path.

This works for me as python -m bitsandbytes reports success. However, when I run the real python script, it says Unknown CUDA exception! Please check your CUDA install. It might also be that your GPU is too old and keeps using the cpu version.

@bcicc

bcicc commented Mar 27, 2024

I kept getting this issue even after deleting and recreating the conda environment, reinstalling bitsandbytes, etc. The solution for me ended up being just:
pip cache purge
conda clean --all
and then proceeding to create the conda environment and pip install the requirements.
