Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)

PyTorch can't take `fbgemm_gpu` as a dependency because `fbgemm_gpu` already depends on PyTorch. So this PR copies the kernels from `fbgemm_gpu`:

* `dense_to_jagged_forward()` as the CUDA registration for the new ATen op `_padded_dense_to_jagged_forward()`
* `jagged_to_padded_dense_forward()` as the CUDA registration for the new ATen op `_jagged_to_padded_dense_forward()`

CPU impls for these new ATen ops will be added in a follow-up PR.

Pull Request resolved: #125946
Approved by: https://github.com/davidberard98
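For context, a plain-Python sketch of the jagged <-> padded dense semantics these kernels implement. This illustrates only the data layout (flat values plus row offsets, packed into a `[batch, max_length]` grid); the actual ATen ops operate on CUDA tensors, and the parameter names `max_length` and `padding_value` here are assumptions, not the ops' real signatures.

```python
def jagged_to_padded_dense(values, offsets, max_length, padding_value=0):
    """Pack a jagged batch (flat values + row offsets) into a dense
    [batch, max_length] grid, truncating long rows and padding short ones.
    Illustrative only; not the real ATen op signature."""
    out = []
    for start, end in zip(offsets, offsets[1:]):
        row = list(values[start:end])[:max_length]
        row += [padding_value] * (max_length - len(row))
        out.append(row)
    return out

def padded_dense_to_jagged(dense, offsets):
    """Inverse direction: gather back only the valid prefix of each row."""
    values = []
    for row, (start, end) in zip(dense, zip(offsets, offsets[1:])):
        values.extend(row[:end - start])
    return values

# Example: a batch of 3 rows with lengths 2, 0, 3.
values = [1, 2, 3, 4, 5]
offsets = [0, 2, 2, 5]
dense = jagged_to_padded_dense(values, offsets, max_length=3)
# dense == [[1, 2, 0], [0, 0, 0], [3, 4, 5]]
assert padded_dense_to_jagged(dense, offsets) == values
```

Round-tripping is lossless as long as no row exceeds `max_length`; rows longer than that are truncated by the forward conversion.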
Commit: e2f0818
Reverted #125946 on behalf of https://github.com/clee2000: "I think Dr. CI is wrong and the Windows build failure is real" https://hud.pytorch.org/pytorch/pytorch/commit/e2f081837f4276c1a6a37739bd28157f62004a06 https://github.com/pytorch/pytorch/actions/runs/9216826622/job/25357819877