fix aminmax output resize issue when input is a zero dimension tensor #96171
Conversation
You'd also need to remove the special handling in the decomp now that CPU no longer has inconsistent behavior.
Updated "fix aminmax output resize issue when input is a zero dimension tensor":

Fix #96042

### before

```
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
__main__:1: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [1]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:24.)
torch.return_types.aminmax(
min=tensor([1]),
max=tensor([1]))
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
```

### after

```
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
```

cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10
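The shape rule the fix enforces can be sketched in pure Python (`reduced_shape` is a hypothetical helper for illustration, not PyTorch code): a single-dim reduction like `aminmax` over a 0-d input yields a 0-d result whether or not `keepdim` is set, so the `out` tensor never needs the deprecated resize.

```python
def reduced_shape(shape, dim, keepdim):
    """Expected output shape of a single-dim reduction (sketch).

    A zero-dimension input stays zero-dimension regardless of keepdim,
    which is the post-fix aminmax behavior; otherwise the reduced dim
    is kept as size 1 (keepdim=True) or dropped (keepdim=False).
    """
    if len(shape) == 0:
        # 0-d tensor: there is no axis to keep or drop, output is 0-d.
        return ()
    out = list(shape)
    if keepdim:
        out[dim] = 1
    else:
        del out[dim]
    return tuple(out)

print(reduced_shape((), 0, True))       # () -- the fixed behavior, was (1,)
print(reduced_shape((), 0, False))      # ()
print(reduced_shape((3, 4), 1, True))   # (3, 1)
print(reduced_shape((3, 4), 1, False))  # (3,)
```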
cc @zou3519 for the vmap errors. If a 0-d tensor is vmapped, what's the expected result? Here vmap produces shape (2, 1) whereas the loop correctly produces (2,).
The expected result is whatever the loop produces (so it sounds like (2,)). From the comment over here, it sounds like we can just add a REDUCTION_WITH_KEEPDIM_ARG(aminmax) like this and delete this. That is not too big a change, if you're interested in handling it in this PR @mingfeima. Otherwise I can handle it later after this PR lands.
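The proposal above can be illustrated at the shape level in plain Python (a hypothetical sketch of what a keepdim-aware reduction batch rule must compute; the real one is functorch's C++ REDUCTION_WITH_KEEPDIM_ARG machinery): the batched input gets the batch axis prepended and the logical dim shifted by one, and a 0-d per-example input must come back out 0-d so the batched result matches stacking the loop outputs.

```python
def batch_rule_out_shape(per_example_shape, batch_size, dim, keepdim):
    """Shape-level sketch of a keepdim-aware reduction batch rule.

    Assumed semantics for illustration only. A 0-d per-example input is
    padded to a size-1 axis before batching; the rule must then drop
    that padded axis entirely (ignoring keepdim) so the result agrees
    with stacking 0-d loop outputs.
    """
    logical_ndim = len(per_example_shape)
    # Batched physical input: batch axis in front; 0-d examples padded to 1-d.
    batched = (batch_size,) + (per_example_shape or (1,))
    phys_dim = dim + 1  # shift the logical dim past the batch axis
    out = list(batched)
    if keepdim and logical_ndim > 0:
        out[phys_dim] = 1
    else:
        del out[phys_dim]  # keepdim=False, or 0-d example: axis disappears
    return tuple(out)

print(batch_rule_out_shape((), 2, 0, True))    # (2,)   -- matches the loop
print(batch_rule_out_shape((), 2, 0, False))   # (2,)
print(batch_rule_out_shape((3,), 2, 0, True))  # (2, 1)
print(batch_rule_out_shape((3,), 2, 0, False)) # (2,)
```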
Updated "fix aminmax output resize issue when input is a zero dimension tensor": same before/after behavior as in the earlier commit message, and marked the following test as expected_fail: `test_vmap.py TestVmapOperatorsOpInfoCPU.test_op_has_batch_rule_aminmax_cpu_float32`. Given an input of shape (2,), the loop output has shape (2,) while the batched vmap output has shape (2, 1), which mismatches. The loop computes twice on a tensor of shape (): without this patch each output is (1,), which stacks into (2, 1); with this patch each output is (), which stacks into (2,). cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10
@zou3519 Thanks for the input! Updated this patch according to your proposal.
Thank you!
@pytorchbot merge -f "jit failure unrelated"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).
Merged: fix aminmax output resize issue when input is a zero dimension tensor (#96171). Fix pytorch/pytorch#96042; before/after behavior and the expected_fail for `test_vmap.py TestVmapOperatorsOpInfoCPU.test_op_has_batch_rule_aminmax_cpu_float32` are as described in the commit messages above. Pull Request resolved: pytorch/pytorch#96171. Approved by: https://github.com/jgong5, https://github.com/ngimel, https://github.com/zou3519
Stack from ghstack (oldest at bottom):
Fix #96042
before

```
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
__main__:1: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [1]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:24.)
torch.return_types.aminmax(
min=tensor([1]),
max=tensor([1]))
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
```

after

```
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
>>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
torch.return_types.aminmax(
min=tensor(1),
max=tensor(1))
```
Marked the following test as expected_fail:
test_vmap.py TestVmapOperatorsOpInfoCPU.test_op_has_batch_rule_aminmax_cpu_float32
Given an input of shape (2,), the loop output has shape (2,) while the batched vmap output has shape (2, 1), which mismatches.
The loop computes twice on a tensor of shape (): without this patch, each output is (1,), and they stack into (2, 1); with this patch, each output is (), and they stack into (2,).
cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10
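The stacking argument above can be reproduced with plain Python lists (min over a single 0-d value stands in for aminmax's min half; `amin_0d_old` and `amin_0d` are hypothetical names for the pre- and post-fix behavior, not PyTorch APIs):

```python
def amin_0d_old(x, keepdim):
    """Pre-fix behavior: keepdim=True wrongly promotes a 0-d input
    to a 1-element result of shape (1,)."""
    return [x] if keepdim else x

def amin_0d(x, keepdim):
    """Post-fix behavior: a 0-d input yields a 0-d result regardless
    of keepdim, so keepdim is intentionally ignored."""
    return x

batch = [1, 2]  # a (2,) tensor vmapped over dim 0 -> two 0-d examples

# Loop semantics: reduce each 0-d example, then stack the results.
stacked_old = [amin_0d_old(x, keepdim=True) for x in batch]  # [[1], [2]] -> shape (2, 1)
stacked_new = [amin_0d(x, keepdim=True) for x in batch]      # [1, 2]    -> shape (2,)

print(stacked_old)  # [[1], [2]]
print(stacked_new)  # [1, 2]
```

This is exactly the (2, 1) versus (2,) mismatch the expected_fail documents.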