[Issue]: Memory_efficient error #118
Comments
What does startup show on the console for torch? How did you install xformers? During which operation does this happen?
I had the error myself on the first installation, because I had copied over the venv from the original A1111 fork. In my case, it was fixed after wiping the venv and installing the requirements again.
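The fix described in the comment above (wipe the venv, reinstall the requirements) can be sketched with the standard library. `rebuild_venv` is a hypothetical helper, not part of the repo; the `venv` folder and `requirements.txt` location assume the default repo layout.

```python
# Sketch of the "wipe venv and reinstall" fix, using only the stdlib.
# Assumptions: a "venv" folder and requirements.txt at the repo root.
import shutil
import subprocess
import sys
import venv
from pathlib import Path


def rebuild_venv(root="."):
    root = Path(root)
    env_dir = root / "venv"
    shutil.rmtree(env_dir, ignore_errors=True)   # drop the stale/copied venv
    venv.create(env_dir, with_pip=True)          # create a fresh one
    req = root / "requirements.txt"
    if req.exists():                             # reinstall deps if present
        pip = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
        subprocess.check_call([str(pip), "install", "-r", str(req)])
    return env_dir
```

The `requirements.txt` guard makes the sketch safe to run outside the repo; in the actual repo it reinstalls the dependencies, including xformers, into the fresh environment.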
Is it that the Mac M1 is not supported yet? I have a similar issue here: memory_efficient_attention_forward gradio call: NotImplementedError
The original issue is with CUDA; don't hijack the thread with M1. Search issues/discussions first and open a new issue if needed, as it's not related to this one.
I see, there is already an issue mentioning that M1 is not supported. What is the estimated timeframe for M1 support? Thanks for your effort!
I'm fully willing to support the effort, but I don't have an M1 system available. I'd love to have a contributor suggest what's needed. The same applies to AMD optimizations.
There is no update on the original issue, so I'm closing for now; it can be reopened once an update is provided.
Issue Description
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 4096, 1, 512) (torch.float16)
     key         : shape=(1, 4096, 1, 512) (torch.float16)
     value       : shape=(1, 4096, 1, 512) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    triton is not available
    requires A100 GPU
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 512

It runs but doesn't show any image, just that error.
Platform Description
Win 10. Python 3.10.9. Torch 2 (installed from the wiki guide). No warnings at launch.py start.
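The repeated "xFormers wasn't build with CUDA support" lines in the error above suggest the wheel in the active venv is CPU-only or not importable at all. A minimal, stdlib-only sanity check (`check` is a hypothetical helper, not part of the repo) to confirm which packages even resolve in the venv:

```python
# Environment sanity check using only the stdlib: reports whether
# torch and xformers are importable from the active environment.
# This is a diagnostic sketch, not part of the repo's code.
import importlib.util


def check(modules=("torch", "xformers")):
    """Return {module_name: True/False} for importability."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}


if __name__ == "__main__":
    for name, present in check().items():
        print(f"{name}: {'found' if present else 'MISSING from this venv'}")
```

If both resolve, xformers itself can report which operators its wheel was built with via `python -m xformers.info`; a build without CUDA operators there matches the error in this issue.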