Hi,
Thanks for opening this issue.
Currently xformers does not work well with autocast (if that's what you are using for mixed precision), so we recommend casting all inputs to the same dtype (float16 in your case). This is also stated in the error log:
ValueError: Query/Key/Value should all have the same dtype
query.dtype: torch.float32
key.dtype : torch.float16
value.dtype: torch.float16
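As a minimal sketch of the suggested fix, cast query, key, and value to one common dtype before the attention call. The shapes below are made up, and torch's scaled_dot_product_attention stands in for xformers.ops.memory_efficient_attention only so the snippet runs on CPU; float32 is used for the same reason (on GPU you would typically cast to float16, as suggested above):

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors reproducing the mismatch from the error log above,
# where autocast left the query in float32 while key/value were float16.
query = torch.randn(1, 8, 64, 32, dtype=torch.float32)
key = torch.randn(1, 8, 64, 32, dtype=torch.float16)
value = torch.randn(1, 8, 64, 32, dtype=torch.float16)

# Fix: cast all three tensors to a single dtype before the attention call.
# With xformers on GPU you would then pass these to
# xformers.ops.memory_efficient_attention instead of torch's SDPA.
common_dtype = torch.float32
query, key, value = (t.to(common_dtype) for t in (query, key, value))

out = F.scaled_dot_product_attention(query, key, value)
print(out.dtype, tuple(out.shape))
```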
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Related issue: #5368
When using PyTorch 2.1 and the latest stable build of xformers, our DreamBooth LoRA script for SDXL doesn't work; #5368 provides more details. But when using SDPA in the same environment (i.e., no xformers), the issue seems to go away.

The dev environment for this can be found here:
https://github.com/huggingface/diffusers/blob/main/docker/diffusers-pytorch-compile-cuda/Dockerfile

When using PyTorch 2.0.1 with xformers==0.0.21, there seem to be no issues with the exact same script. PyTorch was installed with pip install torch==2.0.1+cu117 --index-url https://download.pytorch.org/whl/cu117 inside a Docker image based on nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04.

Cc: @patrickvonplaten @williamberman