
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) #95620

Closed
wants to merge 2 commits

Conversation

@jbschlosser (Contributor) commented Feb 27, 2023

Stack from ghstack (oldest at bottom):

Small hack to reuse the 3D custom kernel from #88289 for elementwise add / mul between a [B, *] nested tensor and a [B, 1] dense tensor: simply treat the inputs as [B, *, 1] and [B, 1, 1]. This was added to satisfy an internal ask.

Future work: full general broadcasting support between mixed nested / dense.

cc @cpuhrsch @bhosmer @drisspg @mikaylagawarecki
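
For context, here is a minimal, hypothetical sketch of the pattern this PR enables; the shapes and values are illustrative (not taken from the PR's tests), and it assumes a CUDA device since the support is CUDA only:

```python
import torch

# Illustrative only: a [B, *] nested tensor (ragged last dim) and a [B, 1] dense tensor.
B = 3
nt = torch.nested.nested_tensor(
    [torch.randn(5), torch.randn(2), torch.randn(7)],  # B = 3 ragged rows
    device="cuda",
)
dense = torch.randn(B, 1, device="cuda")  # one scalar per batch element

# The kernel treats the operands as [B, *, 1] and [B, 1, 1], so each ragged row
# is shifted / scaled by its batch element's scalar.
out_add = torch.add(nt, dense)
out_mul = torch.mul(nt, dense)
```

Anything beyond the [B, 1] dense shape (e.g. broadcasting along the ragged dimension) is out of scope here and falls under the future work noted above.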

@pytorch-bot bot commented Feb 27, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95620

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 92091bf:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

jbschlosser added a commit that referenced this pull request Feb 27, 2023
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only)

ghstack-source-id: 0d3907b0ccb3f18c6f56c8d85d737f3964d97f83
Pull Request resolved: #95620
@jbschlosser added the topic: not user facing (topic category) and module: nestedtensor (NestedTensor tag, see issue #25032) labels and removed the topic: not user facing (topic category) label on Feb 27, 2023
@jbschlosser added the release notes: nested tensor (Changes that have a direct impact on nested tensors) label on Feb 27, 2023
@drisspg (Contributor) left a comment


How does autograd work with the broadcasting support? I don't remember if we added support for the esuhm case (maybe we should remove this name from the code, idk). But if so, we could add an autograd test that covers it.

@mikaylagawarecki (Contributor)

There's no autograd support for this case, and yeah, the name should probably be removed from the code. Good catch.

…nse (CUDA only)"


Small hack to reuse the ESUHM kernel from #88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1], [B, 1, 1]. This is added to satisfy an ask from the Ads team.

Future work: full general broadcasting support between mixed nested / dense.

cc cpuhrsch bhosmer drisspg mikaylagawarecki

[ghstack-poisoned]
jbschlosser added a commit that referenced this pull request Feb 27, 2023
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only)

ghstack-source-id: dbe6d7e23ebf514f6f1845ec56a04b365334e096
Pull Request resolved: #95620
@jbschlosser added the ciflow/trunk (Trigger trunk jobs on your pull request) label on Feb 27, 2023
@drisspg (Contributor) left a comment


🎸

@jbschlosser (Contributor, Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 1, 2023
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (#95620)

Small hack to reuse the 3D custom kernel from #88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1], [B, 1, 1]. This is added to satisfy an internal ask.

Future work: full general broadcasting support between mixed nested / dense.

Pull Request resolved: pytorch/pytorch#95620
Approved by: https://github.com/cpuhrsch, https://github.com/drisspg
cyyever pushed the same commit to cyyever/pytorch_private again on Mar 2, Mar 5 (twice), and Mar 27, 2023.
pruthvistony added a commit to ROCm/pytorch that referenced this pull request May 2, 2023
@facebook-github-bot facebook-github-bot deleted the gh/jbschlosser/71/head branch June 8, 2023 17:33
jhavukainen pushed a commit to kulinseth/pytorch that referenced this pull request Mar 15, 2024
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (pytorch#95620)

Labels
ciflow/trunk (Trigger trunk jobs on your pull request) · Merged · module: nestedtensor (NestedTensor tag, see issue #25032) · release notes: nested tensor (Changes that have a direct impact on nested tensors)

5 participants