Specialize BroadcastIndexesRange for the case where there is only 1 contiguous input #12023

Open · wants to merge 13 commits into main

Conversation

swolchok (Contributor)
In this case, broadcasting is not possible, if I understand correctly.

NOTE TO REVIEWERS: I deleted a failing test because I believe it was testing functionality that does not actually exist in PyTorch. Please let me know if I've made a mistake. I tried to exercise the behavior the test implied like so:

```
>>> t = torch.tensor([1, 2, 3])
>>> t2 = torch.tensor(4)
>>> torch.abs(t2, out=t)
<stdin>:1: UserWarning: An output with one or more elements was resized since it had shape [3], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/Resize.cpp:38.)
tensor(4)
```

If the test were correct, the result would have been torch.tensor([1, 2, 3]) with no warning. Also, none of our operator tests seem to be failing. Have I missed anything?
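As a rough illustration (not code from this PR) of why the specialization is safe: when there is a single input whose shape matches the output shape, the generic broadcast index mapping reduces to the identity, so a contiguous kernel can walk linear offsets instead of doing per-element index arithmetic. A minimal Python sketch, with illustrative names that are not ExecuTorch APIs:

```python
from itertools import product

def broadcast_index(out_index, in_shape):
    # Generic broadcasting rule: a size-1 input dim is broadcast, so its
    # coordinate pins to 0; otherwise the output coordinate passes through.
    return tuple(0 if s == 1 else i for i, s in zip(out_index, in_shape))

out_shape = (2, 3)

# With a single input whose shape equals the output shape, every output
# coordinate maps to itself...
assert all(broadcast_index(ix, out_shape) == ix
           for ix in product(*map(range, out_shape)))

# ...so a contiguous fast path can iterate linear offsets 0..numel-1
# with no index computation at all.
print("identity mapping confirmed")
```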

swolchok added 4 commits June 26, 2025 13:00
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
swolchok (Contributor, Author) commented Jun 26, 2025

@swolchok swolchok requested a review from manuelcandales as a code owner June 26, 2025 21:04
pytorch-bot bot commented Jun 26, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12023

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 7 Unrelated Failures

As of commit abef683 with merge base 3e19e67:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

swolchok added a commit that referenced this pull request Jun 26, 2025
…ontiguous input

(commit message duplicates the PR description above)
ghstack-source-id: ad2d09d
ghstack-comment-id: 3010027375
Pull-Request-resolved: #12023
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jun 26, 2025
swolchok added 5 commits June 26, 2025 14:52
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
swolchok added 4 commits June 26, 2025 23:28
[ghstack-poisoned]
[ghstack-poisoned]
swolchok added a commit that referenced this pull request Jun 28, 2025
…ontiguous input

(commit message duplicates the PR description above)
ghstack-source-id: 945dd3f
ghstack-comment-id: 3010027375
Pull-Request-resolved: #12023
@swolchok swolchok changed the base branch from gh/swolchok/479/head to gh/swolchok/484/head June 28, 2025 00:34
@swolchok swolchok added the release notes: none Do not include this in the release notes label Jun 28, 2025
swolchok (Contributor, Author)

This is a size win. Size script results below, cases with no change edited out for brevity.

test/build_size_test.sh

before:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1377360 Jun 27 17:24 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295278592	4296409088	100160000
```

after:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1360464 Jun 27 17:26 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1048576	65536	0	4295278592	4296392704	10015c000
```

test/build_optimized_size_test.sh

before:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1506384 Jun 27 17:17 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295393280	4296523776	10017c000
ExecuTorch with optimized ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  4958792 Jun 27 17:17 cmake-out/test/size_test_all_optimized_ops
__TEXT	__DATA	__OBJC	others	dec	hex
3702784	65536	0	4296212480	4299980800	1004c8000
```

after:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1505872 Jun 27 17:28 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295393280	4296523776	10017c000
ExecuTorch with optimized ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  4941448 Jun 27 17:28 cmake-out/test/size_test_all_optimized_ops
__TEXT	__DATA	__OBJC	others	dec	hex
3686400	65536	0	4296212480	4299964416	1004c4000
```
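For convenience, the deltas implied by the numbers above can be checked with a few lines of Python (sizes copied from the `ls` and segment output; the page-size interpretation at the end is an assumption, not stated in the PR):

```python
# Unstripped binary sizes in bytes, copied from the listings above.
portable_before, portable_after = 1377360, 1360464
optimized_before, optimized_after = 4958792, 4941448

print(portable_before - portable_after)    # bytes saved in the portable-ops build
print(optimized_before - optimized_after)  # bytes saved in the optimized-ops build

# The __TEXT segment shrinks by exactly 16 KiB in both builds
# (1064960 -> 1048576 and 3702784 -> 3686400), consistent with segment
# sizes being rounded to a 16 KiB page boundary.
assert 1064960 - 1048576 == 3702784 - 3686400 == 16 * 1024
```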

@mergennachin mergennachin force-pushed the gh/swolchok/484/head branch from a3f0bf7 to 665c8f0 Compare June 28, 2025 04:39
Base automatically changed from gh/swolchok/484/head to main June 28, 2025 05:03