Installation succeeds with CUDA 12.3, but libcudart.so is not found #956
Comments
Google Colab: !apt-get update
Hi @josemerinom, I prefer to use CUDA 12.3 if possible. I've tried to add the suggested paths to
I just tried installing
I have the same issue on my system with CUDA 12.2. @IamGianluca Which version have you installed?
I can successfully install and use
In any more recent image (e.g.,
I am also facing the same issue, even after compiling from source. I am using Python 3.11.3, CUDA 12.3, torch 2.1.2, and lion-pytorch 0.1.2 on CentOS 7. Please help resolve this: libbitsandbytes_cuda121_nocublaslt.so is not getting created.
===================================BUG REPORT===================================
The following directories listed in your path were found to be non-existent: {PosixPath('vs/workbench/api/node/extensionHostProcess')}
================================================ERROR=====================================
CUDA SETUP: Something unexpected happened. Please compile from source:
I had the same issue; upgrading bitsandbytes to 0.42.0 fixed it.
Yes, I am using the newly updated bitsandbytes 0.42.0, compiled from source, but I'm still facing the same issue and cannot trace back what the problem is. I also tried make CUDA_VERSION=121, which fails with: error: macro "NV_IF_TARGET" passed 3 arguments, but takes just 2. Checking with python -m bitsandbytes gives:
===================================BUG REPORT===================================
/VM_Data/Generative_AI_test/bitsandbytes/bitsandbytes/cuda_setup/main.py:167: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda-12.3/libcudart.so'), PosixPath('/usr/local/openssl/lib/libcudart.so')}. We select the PyTorch default libcudart.so, which is {torch.version.cuda}, but this might mismatch with the CUDA version that is needed for bitsandbytes. To override this behavior set the BNB_CUDA_VERSION=<version string, e.g. 122> environment variable. For example, if you want to use CUDA version 122: BNB_CUDA_VERSION=122 python ... OR set the environment variable in your .bashrc: export BNB_CUDA_VERSION=122. In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2
================================================ERROR=====================================
CUDA SETUP: Something unexpected happened. Please compile from source:
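The warning above suggests overriding which CUDA runtime bitsandbytes loads. A minimal sketch of that override, assuming a CUDA 12.2 toolkit installed under /usr/local/cuda-12.2 (adjust both the version string and the path to your install):

```shell
# Tell bitsandbytes which CUDA runtime to load, per the warning above.
export BNB_CUDA_VERSION=122
# Make sure the matching runtime library directory is on the loader path.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/usr/local/cuda-12.2/lib64"
# Then re-run the self-check with the override active:
# python -m bitsandbytes
```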
I too have the same problem! I've pip-installed it into my virtual environment and I triple-checked my PATH for my CUDA library, but the problem still continues. Currently I am using Python 3.11.3, CUDA 11.8, and torch 2.1.2+cu118. Is there a way I can fix this?
===================================BUG REPORT===================================
The following directories listed in your path were found to be non-existent: {WindowsPath('AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAGM6/x5/Irk6FENB/hhKDMAQAAAACAAAAAAAQZgAAAAEAACAAAAA6DcBej+z/GNccXl+p+rfboWsa+8VjZq7vengAtnrPnAAAAAAOgAAAAAIAACAAAABg8++o9nrqYEnjyDTIP/tsUrXJLFKWwX0IDqsiakjVymAAAADI+L8dQC1K/r0d8iZ05+L5P8jhdZXSf4re8ecdxhZU1d9Yg1NlFWbc0e9tpzrFmRNSYrUywdySaDwB2KTl4B/DNrplPjb1tzbp2B1XuJRrxf/ygOcmT+TdEDSdkGh0Q1dAAAAAS5aEJuC9y4vHvHRgQA4/7fPEM2BGiDYGWqS0+pa3IhyDb4xeWEvmsGINoaH6neX0PVKBJRoYsh8FvpAYAGc7Ag==')}
python -m bitsandbytes
RuntimeError Traceback (most recent call last)
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\__init__.py:6
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\research\__init__.py:1
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\research\nn\__init__.py:1
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\research\nn\modules.py:8
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\optim\__init__.py:6
File c:\Users\dkim.CENSEO\OneDrive - Censeo Consulting Group\Desktop\python_environment_llm.venv\Lib\site-packages\bitsandbytes\cextension.py:20
RuntimeError:
Judging by the newness of this issue, maybe it's because bitsandbytes does not support CUDA 12.3 yet? What if you downgrade CUDA?
I resolved my problem; it's now running. Steps: first, check your gcc version. If gcc is up to date, run "strings /usr/lib64/libstdc++.so.6 | grep 'CXXABI'"; otherwise, update gcc. After updating, check that the CXXABI versions are compatible.
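A sketch of the CXXABI check described above. The libstdc++ path is the CentOS default (with an ldconfig fallback for other distros), and the helper name newest_cxxabi is just for illustration:

```shell
# Locate libstdc++ (CentOS default path first, then ldconfig as a fallback).
lib=/usr/lib64/libstdc++.so.6
[ -e "$lib" ] || lib=$(ldconfig -p 2>/dev/null | awk '/libstdc\+\+\.so\.6/ {print $NF; exit}')

# Report the highest CXXABI version string embedded in a shared library.
newest_cxxabi() {
  grep -ao 'CXXABI_[0-9.]*' "$1" | sort -uV | tail -n1
}

if [ -n "$lib" ] && [ -e "$lib" ]; then
  newest_cxxabi "$lib"
fi
```

The reported version must be at least as high as the CXXABI version the compiled libbitsandbytes binary was built against; if not, update gcc/libstdc++ as the comment above suggests.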
Pretty much the same behaviour for me; I can't get it to work, whether building the package from source or installing via
System Info
Hi,
I'm running bitsandbytes from a Docker container based on the nvcr.io/nvidia/pytorch:23.12-py3 Docker image. I've installed the library from source, with the following commands:
The installation completes successfully, and I can import bitsandbytes from the Python interpreter. However, executing the following code snippet throws the following error.
I've tried to add /usr/local/cuda/bin to LD_LIBRARY_PATH with export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/bin/, but the error remains.
Reproduction
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model quantized to 4-bit via bitsandbytes.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
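One thing worth noting about the LD_LIBRARY_PATH attempt above: /usr/local/cuda/bin holds executables such as nvcc, while on typical installs the runtime library libcudart.so lives under /usr/local/cuda/lib64, which is the directory the loader actually needs. A small diagnostic sketch (the directory paths are common defaults, and find_cudart is a hypothetical helper, not part of bitsandbytes):

```shell
# List any libcudart found in the given directories.
find_cudart() {
  for dir in "$@"; do
    ls "$dir"/libcudart.so* 2>/dev/null
  done
  true  # don't fail when nothing is found
}

# Common default locations; adjust for your CUDA install.
find_cudart /usr/local/cuda/lib64 /usr/local/cuda/targets/x86_64-linux/lib

# Also check what the dynamic loader already knows about.
ldconfig -p 2>/dev/null | grep libcudart || true
```

If libcudart.so turns up in one of these directories, adding that directory (rather than the bin directory) to LD_LIBRARY_PATH may resolve the "libcudart.so is not found" error.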
Expected behavior
I should be able to load the model in 4-bit.