[Relanding] Implemented torch.linalg.multi_dot #52859
Conversation
This reverts commit 92a4ee1. [ghstack-poisoned]
This is really cool; nice work!
This reverts commit 92a4ee1. Added support for bfloat16 for CUDA 11 and removed fast-path for empty input tensors that was affecting autograd graph. Differential Revision: [D26672922](https://our.internmc.facebook.com/intern/diff/D26672922) [ghstack-poisoned]
It does not currently support broadcasting, which is in line with NumPy. However, we could consider adding broadcasting in a future PR.
I am not even talking of broadcasting, just regular batching...
My mistake, I meant to say batching. It does not support batching.
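To illustrate the point above, here is a minimal sketch of what `torch.linalg.multi_dot` does and does not accept, assuming the API as documented (a sequence of 2-D matrices, with no batched 3-D inputs):

```python
import torch

# multi_dot chains a sequence of matrix products, choosing the
# multiplication order that minimizes the total scalar operations.
A = torch.randn(10, 100)
B = torch.randn(100, 5)
C = torch.randn(5, 50)

out = torch.linalg.multi_dot([A, B, C])  # shape (10, 50)

# Batched (3-D) operands are not supported; each input must be a
# matrix (the first and last operands may also be 1-D vectors).
batched = torch.randn(4, 10, 100)
try:
    torch.linalg.multi_dot([batched, batched])
except RuntimeError as e:
    print("batching rejected:", e)
```

For batched chains, applying `torch.matmul` pairwise (which does broadcast over batch dimensions) remains the workaround.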
This reverts commit 92a4ee1. Added support for bfloat16 for CUDA 11 and removed fast-path for empty input tensors that was affecting autograd graph. Differential Revision: [D27402390](https://our.internmc.facebook.com/intern/diff/D27402390) [ghstack-poisoned]
@heitorschueroff merged this pull request in 5d68b36.
Stack from ghstack:
This reverts commit 92a4ee1.
Added support for bfloat16 for CUDA 11 and removed fast-path for empty input tensors that was affecting autograd graph.
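The removed fast path matters for autograd: short-circuiting on empty inputs returned a result that was not connected to the operands, so gradients did not flow. A minimal sketch of the behavior this reland is meant to preserve (the zero-dimension shapes here are illustrative, not from the PR):

```python
import torch

# An operand with a zero-sized dimension still participates in the
# autograd graph, so backward() produces a (possibly empty) gradient
# rather than silently skipping the input.
a = torch.randn(3, 0, requires_grad=True)
b = torch.randn(0, 4)
c = torch.randn(4, 2)

out = torch.linalg.multi_dot([a, b, c])  # a (3, 2) result of zeros
out.sum().backward()
print(a.grad.shape)  # gradient has the same (3, 0) shape as a
```

The bfloat16 support is gated on CUDA 11 because cuBLAS only provides the required bf16 GEMM kernels there; it is not exercised in this CPU sketch.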
Differential Revision: D27402390