[Bug] No operator for 'memory_efficient_attention_forward' #412
Comments
The GPU I am using is an A100.
I hit the same problem when using the conda cudatoolkit.
Same here.
What's the exact CLI command you ran?
@kohya-ss can you guide me on how to solve this?

```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(200, 9126, 1, 64) (torch.float32)
     key         : shape=(200, 9126, 1, 64) (torch.float32)
     value       : shape=(200, 9126, 1, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
steps:   0%|          | 0/98800 [56:21<?, ?it/s]
```
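For anyone puzzled by the error above: xformers tries each attention kernel in turn and rejects any whose requirements the inputs don't meet; the tensors here are on CPU in float32, so every CUDA-only kernel is rejected. The sketch below is an illustrative approximation of that dispatch logic (the requirements table mirrors the reasons printed in the error, it is not xformers' actual code):

```python
# Illustrative sketch of operator dispatch: each backend declares its
# requirements, and the first backend whose requirements the inputs
# satisfy is chosen. The constraints below approximate the rejection
# reasons printed in the NotImplementedError above (CUDA-only kernels,
# fp16/bf16 required for the flash-attention variants); this is NOT
# the real xformers implementation.

OPERATORS = {
    "flshattF":        {"devices": {"cuda"}, "dtypes": {"float16", "bfloat16"}},
    "tritonflashattF": {"devices": {"cuda"}, "dtypes": {"float16", "bfloat16"}},
    "cutlassF":        {"devices": {"cuda"}, "dtypes": {"float16", "bfloat16", "float32"}},
}

def pick_operator(device: str, dtype: str):
    """Return (chosen operator or None, rejection reasons per operator)."""
    reasons = {}
    for name, req in OPERATORS.items():
        why = []
        if device not in req["devices"]:
            why.append(f"device={device} (supported: {sorted(req['devices'])})")
        if dtype not in req["dtypes"]:
            why.append(f"dtype={dtype} (supported: {sorted(req['dtypes'])})")
        if not why:
            return name, reasons
        reasons[name] = why
    # No backend accepted the inputs: this is the NotImplementedError case.
    return None, reasons

# CPU + float32 rejects every kernel, matching the error in this issue:
op, why = pick_operator("cpu", "float32")
```

Under this model, moving the tensors to CUDA (and, for the flash kernels, casting to fp16/bf16) makes a backend eligible again, which is why the error disappears on a working GPU setup.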
Same problem.
Exactly the same problem, but I'm running kohya_ss CPU-only (no GPU). What setting did I miss?