Description
bitsandbytes reports this error:
(venv) ➜ image-captioning-v2 python captionit3.py
True
False
Traceback (most recent call last):
File "/Users/b/study/ml/image-captioning-v2/captionit3.py", line 14, in <module>
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', quantization_config=nf4_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/b/study/ml/image-captioning-v2/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2616, in from_pretrained
raise ImportError(
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`
However, the error message is inaccurate, because the issue is that the function:
def is_bitsandbytes_available():
    if not is_torch_available():
        return False

    # bitsandbytes throws an error if cuda is not available
    # let's avoid that by adding a simple check
    import torch

    return _bitsandbytes_available and torch.cuda.is_available()
returns False whenever torch cannot be imported, so if somebody accidentally uninstalls torch, this misleading ImportError is what they see.
So maybe the error message should be improved.
Maybe sending the user a message here along the lines of "unable to import torch" would be useful, who knows
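A minimal sketch of what a more informative check could look like. This is my suggestion, not transformers' actual API: the `verbose` flag and the warning texts are assumptions, and `importlib.util.find_spec` stands in for the internal `is_torch_available()` helper.

```python
import importlib.util
import logging

logger = logging.getLogger(__name__)


def is_bitsandbytes_available(verbose=True):
    """Like transformers' helper, but says *why* it returns False."""
    if importlib.util.find_spec("torch") is None:
        if verbose:
            logger.warning("bitsandbytes is unusable: torch is not installed.")
        return False
    import torch  # safe: we just confirmed it is importable

    if not torch.cuda.is_available():
        if verbose:
            logger.warning(
                "bitsandbytes is unusable: torch found no CUDA device "
                "(bitsandbytes requires CUDA; Apple Silicon / MPS is not supported)."
            )
        return False
    return importlib.util.find_spec("bitsandbytes") is not None
```

With something like this, a user with a missing torch install or no CUDA device would see the real reason instead of the generic `pip install accelerate` / `pip install bitsandbytes` message.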
tell your friends! :)
Activity
Its3rr0rsWRLD commented on Oct 29, 2023
Are you on MacOS? Had the same issue on it, swapped to windows (remote ssh) and searching for a different issue lol
oushu1zhangxiangxuan1 commented on Oct 30, 2023
I got the same ERROR
SoyGema commented on Oct 30, 2023
Yes. Same issue on MacOS
effortprogrammer commented on Nov 20, 2023
same issue... is there any updates?
pechaut78 commented on Nov 27, 2023
same issue
RamsesCamas commented on Dec 7, 2023
Same issue
pechaut78 commented on Dec 7, 2023
Well, as said above, the error is not that the lib is not properly installed: the error message is misleading.
The issue is that bitsandbytes is not implemented on Apple Silicon (mps).
So bitsandbytes cannot be used, and the code should be adapted! sigh
Please see:
#485
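For anyone adapting their code on Apple Silicon, a sketch of one way to do it. The helper name `pick_device_and_quantization` is hypothetical (not a transformers or bitsandbytes API); the idea is just that bitsandbytes quantization needs CUDA, so on mps/cpu you load the model without a `quantization_config`.

```python
import importlib.util


def pick_device_and_quantization():
    """Return (device, can_quantize).

    can_quantize is True only when a CUDA device is present, since
    bitsandbytes 4-/8-bit quantization is CUDA-only.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu", False
    import torch

    if torch.cuda.is_available():
        return "cuda", True  # bitsandbytes usable
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps", False  # Apple Silicon: no bitsandbytes
    return "cpu", False
```

The `from_pretrained` call would then pass `quantization_config=nf4_config` only when `can_quantize` is True, and `device_map={"": device}` otherwise.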
github-actions commented on Dec 31, 2023
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
TimDettmers commented on Jan 8, 2024
This is a great catch. Can you please submit this to the transformers github repo? This is only indirectly a bitsandbytes issue.
Titus-von-Koeller commented on Jan 26, 2024
To me it's not entirely clear where Mac comes into play and how we would best warn that Mac is not supported.
@pechaut78 how did you deduce that it must be Mac related? And why does the code get triggered that is throwing the traceback?
younesbelkada commented on Jan 29, 2024
Hi - the core issue is that currently in transformers `is_bitsandbytes_available()` silently returns `False` if you don't have a CUDA device, i.e. if `torch.cuda.is_available()` is `False`: https://github.com/huggingface/transformers/blob/cd2eb8cb2b40482ae432d97e65c5e2fa952a4f8f/src/transformers/utils/import_utils.py#L623
This is not ideal, as we should display a more informative warning instead - @Titus-von-Koeller would you be happy to open a quick PR on transformers to add a `logger.info` if `torch.cuda.is_available()` is `False`, to clearly state to users that `is_bitsandbytes_available()` will silently be set to `False`? Otherwise happy to do it as well.
huggingface/transformers#38528

ved1beta commented on Jun 2, 2025
@matthewdouglas @Titus-von-Koeller, please can you have a look here