I noticed that models/modeling_llama.py is based on the code from here. However, your implementation does not support Flash Attention 2, so I would like to request further modifications. To make the comparison easier, could you please specify the exact version of transformers your implementation is based on?
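For reference, here is a minimal sketch of how Flash Attention 2 is typically enabled on the stock transformers LLaMA implementation (assuming transformers >= 4.36, which accepts attn_implementation="flash_attention_2"; the checkpoint name below is a placeholder, not something from this repo):

```python
# Sketch only: enabling Flash Attention 2 via the stock transformers API.
# Assumes transformers >= 4.36 and the flash-attn package installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # flash-attn kernels require fp16/bf16
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```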
Hi, I tried your suggestion. When I use the code from demo.ipynb to generate text with flash-attn-2 and 8-bit loading (with your low-resource mode on), it raises RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16. Loading the LLaMA model in 8-bit alone works fine. Do you have any suggestions?
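One guess, in case it helps: with 8-bit loading, the non-quantized modules (norms, embeddings, lm_head) can stay in float32 while flash-attn runs in bfloat16, which would explain the float != c10::BFloat16 mismatch. A sketch of loading with a matching dtype for the un-quantized layers (the model id is a placeholder, and I have not verified this against the repo's low-resource mode):

```python
# Speculative fix: keep the un-quantized layers in bfloat16 so their outputs
# match the dtype the flash-attn kernels expect.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,               # dtype for the non-quantized modules
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```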
I searched the Internet, found here, and tried wrapping the call in with torch.cuda.amp.autocast():; it then raises RuntimeError: query and key must have the same dtype. Hope it helps.
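If the model weights are in bfloat16, plain autocast may be the culprit: on CUDA it defaults to float16, so query and key can end up in different dtypes. A small sketch that pins the autocast dtype (model and inputs stand in for whatever demo.ipynb builds):

```python
# Speculative workaround: force bfloat16 in the autocast region so q/k/v all
# reach the flash-attn kernel in the same dtype.
import torch

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    output_ids = model.generate(**inputs, max_new_tokens=128)  # placeholders from demo.ipynb
```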