Compute capability < 7.5 detected! #79

Closed
shon-otmazgin opened this issue Feb 13, 2023 · 6 comments
Labels: solved

Comments

shon-otmazgin commented Feb 13, 2023

I'm trying to run:

from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig

model_name_or_path = "facebook/opt-13b"

peft_config = LoraConfig(
    task_type="CAUSAL_LM", inference_mode=False, r=64, lora_alpha=32, lora_dropout=0.1
)

# OPT is a decoder-only model, so AutoModelForCausalLM matches task_type="CAUSAL_LM"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only defined on the PEFT-wrapped model; it prints directly

but then I get:
Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
NameError: name 'cuda_setup' is not defined

I understand that this error comes from bitsandbytes, so I found this issue:
TimDettmers/bitsandbytes#124

My hardware is a Tesla V100-SXM2-32GB (Volta).

Is it possible to run PEFT on my hardware? I don't really need int8; fp16 should also be fine.

This is also related to my attempts to fine-tune a large LM on my hardware:
https://discuss.huggingface.co/t/finetune-llm-with-deepspeed/31589

Thanks,
Shon

younesbelkada (Collaborator)

Hi @shon-otmazgin,
I think if you don't need bnb, you can just uninstall bitsandbytes and the error should disappear.
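
For reference, here is a minimal sketch of the fp16 route Shon asked about, which avoids bitsandbytes and int8 entirely. The smaller checkpoint and dtype choice are assumptions for illustration, not something from this thread (opt-13b in fp16 needs roughly 26 GB of GPU memory for the weights alone):

import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig

# Assumption: a smaller OPT checkpoint so the example fits comfortably on a single V100.
model_name_or_path = "facebook/opt-1.3b"

# Loading in fp16 means no 8-bit matmul is involved, so bitsandbytes is not needed on Volta GPUs.
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16)

peft_config = LoraConfig(
    task_type="CAUSAL_LM", inference_mode=False, r=64, lora_alpha=32, lora_dropout=0.1
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts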


djaym7 commented Mar 6, 2023

bitsandbytes 0.35.0 solves this but starts another issue:
Traceback (most recent call last):
  File "train_full_csv_int8Training.py", line 463, in <module>
    train(locals(),train_inputs,subtopics_train_targets,val_inputs,subtopics_val_targets)
  File "train_full_csv_int8Training.py", line 400, in train
    trainer = Trainer(
  File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 382, in __init__
    raise ValueError(
ValueError: The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit model, please make sure that you have installed bitsandbytes>=0.37.0


younesbelkada (Collaborator)

@djaym7 can you try with bitsandbytes==0.37.0?
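
For context, the 8-bit path that the Trainer error above refers to looks roughly like this. This is only a sketch, assuming bitsandbytes>=0.37.0 and a peft version that still exposes prepare_model_for_int8_training; it is not a verified fix for the build issue linked below, and on Volta GPUs bitsandbytes still falls back to the slow int8 matmul, so the fp16 route shown earlier is likely more practical on a V100:

from importlib.metadata import version
print(version("bitsandbytes"))  # the Trainer check requires >= 0.37.0

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# Assumption: a smaller OPT checkpoint for illustration.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", load_in_8bit=True, device_map="auto"
)
# Prepares the int8 model for training (casts layer norms / lm_head, sets up input gradients).
model = prepare_model_for_int8_training(model)

peft_config = LoraConfig(
    task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()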


djaym7 commented Mar 7, 2023

No, I can't: TimDettmers/bitsandbytes#179


github-actions bot commented Apr 3, 2023

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
