Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) #95620
Conversation
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95620
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit 92091bf. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) ghstack-source-id: 0d3907b0ccb3f18c6f56c8d85d737f3964d97f83 Pull Request resolved: #95620
How does autograd work with the broadcasting support? I don't remember if we added support for the esuhm (maybe we should remove this name from the code, idk) case. But if so, we could add an autograd test that covers it.
There's no autograd support for this case, and yeah, the name should probably be removed from the code, good catch.
…nse (CUDA only)" Small hack to reuse the ESUHM kernel from #88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1], [B, 1, 1]. This is added to satisfy an ask from the Ads team. Future work: full general broadcasting support between mixed nested / dense. cc cpuhrsch bhosmer drisspg mikaylagawarecki [ghstack-poisoned]
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) ghstack-source-id: dbe6d7e23ebf514f6f1845ec56a04b365334e096 Pull Request resolved: #95620
🎸
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (#95620) Small hack to reuse the 3D custom kernel from #88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1], [B, 1, 1]. This is added to satisfy an internal ask. Future work: full general broadcasting support between mixed nested / dense. Pull Request resolved: pytorch/pytorch#95620 Approved by: https://github.com/cpuhrsch, https://github.com/drisspg
Revert "Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (pytorch#95620)" This reverts commit 68eec90.
Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (pytorch#95620) Small hack to reuse the 3D custom kernel from pytorch#88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1], [B, 1, 1]. This is added to satisfy an internal ask. Future work: full general broadcasting support between mixed nested / dense. Pull Request resolved: pytorch#95620 Approved by: https://github.com/cpuhrsch, https://github.com/drisspg
Stack from ghstack (oldest at bottom):
Small hack to reuse the 3D custom kernel from #88289 for [B, *] nested, [B, 1] dense elementwise add / mul. Simply treat the inputs as [B, *, 1] and [B, 1, 1], respectively. This is added to satisfy an internal ask.
Future work: full general broadcasting support between mixed nested / dense.
cc @cpuhrsch @bhosmer @drisspg @mikaylagawarecki
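For reference, a minimal usage sketch of the op surface this PR adds, assuming a CUDA device is available; the shapes and values are illustrative, and the [B, *, 1] / [B, 1, 1] reinterpretation happens inside the kernel dispatch, not in user code:

```python
import torch

# Hedged sketch (not from the PR itself): a [B, *] nested tensor combined
# elementwise with a [B, 1] dense tensor, the case this PR routes to the
# 3D custom kernel from #88289.
B = 3
nt = torch.nested.nested_tensor(
    [torch.randn(2), torch.randn(4), torch.randn(3)],  # ragged second dim -> [B, *]
    device="cuda",
)
dense = torch.randn(B, 1, device="cuda")  # one scalar per batch element

out_add = nt + dense  # per-batch scalar broadcast across each ragged row
out_mul = nt * dense  # same broadcasting for elementwise mul
```

Per the review exchange above, there is no autograd support for this path yet, so requires_grad inputs are not covered by this sketch.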