[ONNX] Fix concat with empty tensors #87620
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/87620
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Failure as of commit 000906e: the following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch/onnx/symbolic_opset9.py (Outdated)
# will likely fail due to inputs with different ranks (0 for empty tensor, > 0 for anything else)
nonempty_tensors = []
for t in tensors:
    if symbolic_helper._is_constant(t) and not symbolic_helper._get_tensor_dim_size(t, 0):
        continue  # skip empty constant tensors so all Concat inputs share a rank
    nonempty_tensors.append(t)
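For context, a minimal sketch of the PyTorch-level behavior this filter accounts for; the shapes below are illustrative, not taken from the PR:

import torch

# Per the comment in the diff, torch.cat has historically skipped 1-D empty tensors
# such as torch.tensor([]), so the exporter must drop them too before emitting an
# ONNX Concat, which expects all inputs to have the same rank.
a = torch.ones(2, 3)
b = torch.tensor([])          # shape (0,), rank 1
c = torch.ones(2, 3)

out = torch.cat([a, b, c], dim=0)
print(out.shape)              # expected torch.Size([4, 3]): the empty tensor contributes nothing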
Should tensors with 0 in their dimension size count as "empty tensor" under this context? For example torch.ones(2, 0, 3). (Maybe create a helper function is_empty_tensor.)
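A minimal sketch of what such a helper might look like, assuming it only formalizes the existing check rather than also treating tensors like torch.ones(2, 0, 3) as empty; the name is_empty_tensor is the reviewer's suggestion, not an existing symbolic_helper API:

from torch.onnx import symbolic_helper

def is_empty_tensor(t) -> bool:
    # Mirrors the inline check above: a constant whose first dimension has size 0,
    # e.g. torch.tensor([]); torch.ones(2, 0, 3) would NOT count as empty here.
    return bool(
        symbolic_helper._is_constant(t)
        and not symbolic_helper._get_tensor_dim_size(t, 0)
    )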
That is a good question. How would we detect one scenario or the other? Both would have value 0.
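For illustration only (not part of the PR), a small script showing that both cases have zero elements but differ in rank, which is one way they could be told apart:

import torch

a = torch.tensor([])      # the "empty tensor" case: shape (0,), rank 1
b = torch.ones(2, 0, 3)   # zero-sized middle dimension, but rank 3

print(a.numel(), b.numel())   # 0 0   -- both have zero elements
print(a.shape, b.shape)       # torch.Size([0]) torch.Size([2, 0, 3])
print(a.dim(), b.dim())       # 1 3   -- rank distinguishes the two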
b32c981 to 9409ef5
9409ef5 to 000906e
@pytorchbot merge -f "unrelated pipeline failure"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes pytorch#54410
Pull Request resolved: pytorch#87620
Approved by: https://github.com/BowenBao