[0.0.18] memory_efficient_attention NaNs when seqlen>32768 #719
Comments
I hit this issue too yesterday after upgrading to 0.0.18.
Hi,
It seems to happen due to a cast to int16 at some point in the code, so it happens when the sequence length is larger than 32768.
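The wraparound behind that explanation is easy to demonstrate in plain Python; the sketch below uses `ctypes` to emulate a signed 16-bit integer (the actual cast site inside the xformers kernel is not shown here):

```python
import ctypes

def as_int16(x: int) -> int:
    """Value of x after truncation to a signed 16-bit integer."""
    return ctypes.c_int16(x).value

# Offsets past 32767 (the int16 maximum) wrap negative,
# which poisons any downstream indexing that uses them.
print(as_int16(32767))  # 32767: still fits
print(as_int16(32768))  # -32768: wraps
print(as_int16(40000))  # -25536: wraps
```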
I have a tentative fix - hopefully we can land that soon and release the fix.
It should be fixed as of 68dce69, and will be included in the next release (0.0.19). In the meantime, you can also use a development build.
Thanks a mille for the fix @danthe3rd - very helpful thread here!
🐛 Bug
Command
To Reproduce
Steps to reproduce the behavior:
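The original repro script is not preserved in this capture; a minimal sketch of the kind of call that triggers the bug (the tensor shapes are assumptions, and it requires a CUDA GPU with xformers 0.0.18 installed) might look like:

```python
import torch

SEQLEN = 40000  # anything above 32768 should trigger the bug on 0.0.18

if torch.cuda.is_available():
    import xformers.ops as xops
    # Assumed shape: (batch, seqlen, heads, head_dim)
    q = torch.randn(1, SEQLEN, 1, 64, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)
    out = xops.memory_efficient_attention(q, k, v)
    print("output contains NaN:", out.isnan().any().item())
else:
    print("CUDA GPU required to reproduce")
```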
Expected behavior
out should not contain NaN.
Environment
This test was done on free Google Colab with a T4 GPU, using the 0.0.18 package from pip.
Additional context
It works fine on 0.0.17 but fails on 0.0.18.
People have been reporting that my ComfyUI returns black images during the VAE decoding phase when the resolution is higher than a certain threshold, and I have narrowed it down to this issue.