
Need Option to Disable Flash Attention in VideoLLaMA2.1-7B-AV (SiglipVisionModel) #36819

Open
harshmoothat opened this issue Mar 19, 2025 · 1 comment


@harshmoothat

When using SiglipVisionModel inside VideoLLaMA2.1-7B-AV, I encounter the following error:

ValueError: SiglipVisionModel does not support Flash Attention 2.0 yet.

I do not need Flash Attention for my use case and would like to disable it.
Could you provide an official way to toggle it off?

(Screenshot of the error traceback attached.)
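
A minimal sketch of how the attention backend can usually be selected when loading a Transformers model directly; whether VideoLLaMA2's own loading code forwards this argument to the vision tower is an assumption, and the checkpoint name below is only illustrative:

```python
# Minimal sketch: pick a non-Flash attention backend via the standard
# Transformers `attn_implementation` kwarg of from_pretrained.
# Assumption: VideoLLaMA2's loader forwards this kwarg to the SigLIP vision
# tower; the checkpoint name is illustrative only.
from transformers import SiglipVisionModel

vision_tower = SiglipVisionModel.from_pretrained(
    "google/siglip-so400m-patch14-384",  # illustrative checkpoint
    attn_implementation="eager",         # or "sdpa"; avoids Flash Attention 2
)
```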

@zucchini-nlp
Member

Hmm, SigLIP does support FA2, as per this code block. Can you check your transformers version, and whether FA2 was available for SigLIP in that version? If not, update to the latest.

https://github.com/huggingface/transformers/blob/main/src/transformers/models/siglip/modeling_siglip.py#L439
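
To act on the suggestion above, a quick way to check which Transformers release is installed (the exact release in which SigLIP gained FA2 support is not stated here, so comparing against the latest release is the safe path):

```python
# Print the installed Transformers version to compare against the latest release.
# If it is older, upgrade with: pip install -U transformers
import transformers
print(transformers.__version__)
```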
