Potentially extra slow inference when using LoRA adapter #192

Open
sadaisystems opened this issue Jan 25, 2024 · 1 comment
@sadaisystems

Hello, everybody. I tried the HumanEval benchmark on my custom Mistral fine-tune today, but I'm getting a strange warning:

UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
  warnings.warn(f'Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.')

I don't know how to fix this; any ideas?

My command to run the benchmark:

accelerate launch  main.py \
  --model {model_name} \
  --peft_model {peft_model_path} \
  --load_in_4bit \
  --max_length_generation 512 \
  --tasks humaneval \
  --temperature 0.2 \
  --precision bf16 \
  --n_samples 200 \
  --batch_size 32 \
  --allow_code_execution \
  --limit 25 
@sadaisystems (Author)

This seems to occur only when --load_in_4bit is passed.
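
For context, the warning comes from bitsandbytes: Linear4bit layers default their compute dtype to float32, so fp16/bf16 activations hit a slower mixed-dtype path. Below is a minimal sketch, assuming the model is loaded through transformers and peft (the model name and adapter path are placeholders matching the command above), of passing a BitsAndBytesConfig with the compute dtype set explicitly; whether the harness exposes this through a CLI flag depends on how main.py builds its quantization config.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Hypothetical sketch, not code from the harness itself. The key point is that
# bnb_4bit_compute_dtype should match the precision of the inputs (bf16 here)
# instead of the float32 default that triggers the warning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # or torch.float16 to match fp16 inputs
)

base = AutoModelForCausalLM.from_pretrained(
    "{model_name}",                          # placeholder, same as in the command above
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "{peft_model_path}")  # placeholder adapter path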
