
Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu #125946

Closed · jbschlosser wants to merge 22 commits

Commits
f3b64f7  Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu  (jbschlosser, May 10, 2024)
92c3c12  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 10, 2024)
6f8f1cf  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 14, 2024)
c67a25f  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 14, 2024)
2a1bf0a  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 17, 2024)
8f00eca  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 17, 2024)
a948720  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 17, 2024)
d9d2f99  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 17, 2024)
b5bed58  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 20, 2024)
4d6b24f  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 20, 2024)
bc94bde  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 20, 2024)
39752cf  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 21, 2024)
995442a  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 21, 2024)
794aa8b  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 21, 2024)
5485e1b  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 22, 2024)
7887b40  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 22, 2024)
490d2e8  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 23, 2024)
de56815  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, May 24, 2024)
1d4016c  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, Jun 3, 2024)
a2162fd  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, Jun 3, 2024)
710b5e2  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, Jun 3, 2024)
96929ac  Update on "Lift jagged -> padded dense forward / backward kernels fro…  (jbschlosser, Jun 3, 2024)
Changes: aten/src/ATen/native/native_functions.yaml (10 additions, 0 deletions)
```diff
@@ -14644,6 +14644,16 @@
     NestedTensorCUDA: NestedTensor_to_padded_tensor_cuda
   autogen: to_padded_tensor.out
 
+- func: _jagged_to_padded_dense_forward(Tensor values, Tensor[] offsets, SymInt[] max_lengths, float padding_value=0.0) -> Tensor
+  variants: function
+  dispatch:
+    CUDA: _fbgemm_jagged_to_padded_dense_forward
+
+- func: _padded_dense_to_jagged_forward(Tensor dense, Tensor[] offsets, SymInt? total_L=None) -> Tensor
+  variants: function
+  dispatch:
+    CUDA: _fbgemm_dense_to_jagged_forward_symint
```
Review comment from a Contributor on _padded_dense_to_jagged_forward:
IIRC, some variants of padded_dense_to_jagged_forward also take a padding value, in case any of your sequences are longer than the sequence dimension of the dense tensor. I guess we probably don't care too much about this right now though?

Reply from the author (jbschlosser):
This is a good point - I think for our initial op coverage purposes, we don't care too much. We may need to revisit this for more general usages, though.
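
As the thread notes, the op registered above takes total_L but no padding value. For illustration only, here is a minimal Python sketch of the variant being discussed, where a padding value fills jagged positions that fall beyond the dense tensor's sequence dimension; the function name and loop-based implementation are assumptions for clarity, not the fbgemm_gpu kernel:

```python
import torch

# Hypothetical reference for a padding-aware padded-dense -> jagged conversion.
# dense: (B, N, D) padded batch; offsets: (B + 1,) prefix sums of sequence lengths.
def padded_dense_to_jagged_ref(dense: torch.Tensor,
                               offsets: torch.Tensor,
                               padding_value: float = 0.0) -> torch.Tensor:
    B, N, D = dense.shape
    total_L = int(offsets[-1])
    values = dense.new_full((total_L, D), padding_value)
    for b in range(B):
        start, end = int(offsets[b]), int(offsets[b + 1])
        copy_len = min(end - start, N)  # a sequence longer than N can only copy N rows
        values[start:start + copy_len] = dense[b, :copy_len]
    return values  # rows past N in an overlong sequence keep padding_value
```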


```diff
+
 - func: _nested_tensor_softmax_with_shape(Tensor self, Tensor query) -> Tensor
   dispatch:
     NestedTensorCPU: NestedTensor_softmax_dropout
```
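
For context, a minimal sketch of the semantics _jagged_to_padded_dense_forward provides in the common single-jagged-dimension case. It assumes one offsets tensor of shape (B + 1,) and packed values of shape (total_L, D); the registered op accepts a list of offsets tensors and per-dimension max lengths, with the work done by the CUDA kernel lifted from fbgemm_gpu:

```python
import torch

# Reference semantics for jagged -> padded dense (single jagged dim assumed).
def jagged_to_padded_dense_ref(values: torch.Tensor,
                               offsets: torch.Tensor,
                               max_length: int,
                               padding_value: float = 0.0) -> torch.Tensor:
    B = offsets.numel() - 1
    D = values.size(1)
    out = values.new_full((B, max_length, D), padding_value)
    for b in range(B):
        start, end = int(offsets[b]), int(offsets[b + 1])
        copy_len = min(end - start, max_length)  # overlong rows are truncated
        out[b, :copy_len] = values[start:start + copy_len]
    return out

# Example: sequences of lengths 2 and 3, padded out to length 3.
values = torch.arange(10.0).reshape(5, 2)
offsets = torch.tensor([0, 2, 5])
padded = jagged_to_padded_dense_ref(values, offsets, max_length=3)  # shape (2, 3, 2)
```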