Fix TPU (torch_xla) compatibility error: tensor repeat with an empty dim. #12770
What does this PR do?
In src/diffusers/models/transformers/transformer_z_image.py, an `image_padding_len` is computed to pad the image sequence to a multiple of `SEQ_MULTI_OF`. When the image sequence length is already a multiple of `SEQ_MULTI_OF`, the padding length is zero and the code creates a zero-shaped tensor.
On TPU with `torch_xla.device()`, this triggers `INVALID_ARGUMENT: Concatenate expects at least one argument`. The error does not appear when the device is `cpu` or `cuda`, but it breaks `xla` on TPU. Fixes #12742 and #12743. This is the final version of #12743 and is built on top of it.
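Below is a minimal sketch of the failure mode and the guard this PR adds. The constant value, function name, and tensor shapes are illustrative placeholders, not the exact code in transformer_z_image.py:

```python
import torch

SEQ_MULTI_OF = 32  # illustrative value; the real constant lives in transformer_z_image.py

def pad_image_tokens(image_tokens: torch.Tensor, pad_token: torch.Tensor) -> torch.Tensor:
    """Pad the token sequence length up to a multiple of SEQ_MULTI_OF.

    image_tokens: (seq_len, dim), pad_token: (1, dim). Names and shapes are
    illustrative, not the exact ones used in the model file.
    """
    seq_len = image_tokens.shape[0]
    image_padding_len = (-seq_len) % SEQ_MULTI_OF

    # When seq_len is already a multiple of SEQ_MULTI_OF, image_padding_len == 0.
    # pad_token.repeat(0, 1) then produces a zero-shaped tensor; cpu/cuda accept
    # this, but XLA lowers the repeat to a concatenate with no operands and fails
    # with "INVALID_ARGUMENT: Concatenate expects at least one argument".
    if image_padding_len == 0:
        return image_tokens

    padding = pad_token.repeat(image_padding_len, 1)
    return torch.cat([image_tokens, padding], dim=0)
```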
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
This PR:
Minimal Test Cases on TPU
You can try this on Colab after switching the runtime to the TPU backend:
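The following is a minimal reproduction sketch, assuming a Colab TPU runtime with torch_xla installed; the tensor shapes are illustrative and not taken from the model code:

```python
import torch
import torch_xla

device = torch_xla.device()

pad_token = torch.zeros(1, 8, device=device)

# Repeating zero times yields a zero-shaped tensor. On affected torch_xla
# versions, XLA lowers this repeat to a concatenate with no operands, which
# is rejected on TPU.
padding = pad_token.repeat(0, 1)

# Force graph compilation/execution so the error actually surfaces.
print(padding.cpu().shape)
```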
You should then get output like:
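The full traceback varies with the torch_xla version, but on an affected setup it contains the message described above:

```
INVALID_ARGUMENT: Concatenate expects at least one argument
```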