Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu #125946
Commits on May 10, 2024
- f3b64f7  Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu
  [ghstack-poisoned]
Update on "Lift jagged -> padded dense forward / backward kernels fro…
…m fbgemm_gpu" PyTorch can't depend on `fbgemm_gpu` as a dependency because `fbgemm_gpu` already has a dependency on PyTorch. So this PR copy / pastes kernels from `fbgemm_gpu`: * `jagged_to_padded_dense_forward()` * `jagged_to_padded_dense_backward()` [ghstack-poisoned]
Configuration menu - View commit details
-
Copy full SHA for 92c3c12 - Browse repository at this point
Copy the full SHA 92c3c12View commit details
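To make the conversion described above concrete, here is a minimal pure-PyTorch sketch of the semantics that `jagged_to_padded_dense_forward()` computes, assuming the usual packed `values` + `offsets` layout for jagged tensors. The function name, shapes, and truncation behavior below are illustrative assumptions for this sketch, not the lifted CUDA kernel itself.

```python
import torch

def jagged_to_padded_dense_reference(values: torch.Tensor,
                                      offsets: torch.Tensor,
                                      max_length: int,
                                      padding_value: float = 0.0) -> torch.Tensor:
    """Reference semantics: scatter each variable-length row of `values`
    (delimited by `offsets`) into a [batch, max_length, ...] dense tensor."""
    batch_size = offsets.numel() - 1
    out_shape = (batch_size, max_length) + tuple(values.shape[1:])
    out = values.new_full(out_shape, padding_value)
    for b in range(batch_size):
        start, end = offsets[b].item(), offsets[b + 1].item()
        length = min(end - start, max_length)  # truncate rows longer than max_length
        out[b, :length] = values[start:start + length]
    return out

# Example: a jagged batch with row lengths 2, 0, 3 and feature dim 4.
values = torch.randn(5, 4)
offsets = torch.tensor([0, 2, 2, 5])
padded = jagged_to_padded_dense_reference(values, offsets, max_length=3)
print(padded.shape)  # torch.Size([3, 3, 4])
```

The backward, `jagged_to_padded_dense_backward()`, is essentially the inverse: gather the gradient of the padded output back into the packed `values` layout.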
Commits on May 14, 2024
- 6f8f1cf  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu"
  PyTorch can't depend on `fbgemm_gpu` as a dependency because `fbgemm_gpu` already has a dependency on PyTorch. So this PR copies / pastes kernels from `fbgemm_gpu`:
  * `dense_to_jagged_forward()`
  * `jagged_to_padded_dense_forward()`
  * `jagged_to_padded_dense_backward()`
  [ghstack-poisoned]
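For the newly added `dense_to_jagged_forward()` direction, a similarly hedged pure-PyTorch sketch of the semantics, under the same assumed `values`/`offsets` layout (the real kernel's exact arguments come from `fbgemm_gpu` and may differ):

```python
import torch

def dense_to_jagged_reference(dense: torch.Tensor,
                              offsets: torch.Tensor) -> torch.Tensor:
    """Reference semantics: gather the first (offsets[b+1] - offsets[b]) rows
    of each padded batch entry back into a packed `values` tensor."""
    batch_size = offsets.numel() - 1
    rows = []
    for b in range(batch_size):
        length = (offsets[b + 1] - offsets[b]).item()
        rows.append(dense[b, :length])
    if not rows:
        return dense.new_empty((0,) + tuple(dense.shape[2:]))
    return torch.cat(rows, dim=0)

# Round trip against the padded layout above (row lengths 2, 0, 3).
dense = torch.randn(3, 3, 4)
offsets = torch.tensor([0, 2, 2, 5])
values = dense_to_jagged_reference(dense, offsets)
print(values.shape)  # torch.Size([5, 4])
```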
Update on "Lift jagged -> padded dense forward / backward kernels fro…
…m fbgemm_gpu" PyTorch can't depend on `fbgemm_gpu` as a dependency because `fbgemm_gpu` already has a dependency on PyTorch. So this PR copy / pastes kernels from `fbgemm_gpu`: * `dense_to_jagged_forward()` * `jagged_to_padded_dense_forward()` * `jagged_to_padded_dense_backward()` [ghstack-poisoned]
Configuration menu - View commit details
-
Copy full SHA for c67a25f - Browse repository at this point
Copy the full SHA c67a25fView commit details
Commits on May 17, 2024
- 2a1bf0a  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 8f00eca  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- a948720  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- d9d2f99  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
Commits on May 20, 2024
- b5bed58  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 4d6b24f  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- bc94bde  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
Commits on May 21, 2024
- 39752cf  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 995442a  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 794aa8b  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
Commits on May 22, 2024
- 5485e1b  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu"
  PyTorch can't depend on `fbgemm_gpu` as a dependency because `fbgemm_gpu` already has a dependency on PyTorch. So this PR copies / pastes kernels from `fbgemm_gpu`:
  * `dense_to_jagged_forward()` as the CUDA registration for the new ATen op `_padded_dense_to_jagged_forward()`
  * `jagged_to_padded_dense_forward()` as the CUDA registration for the new ATen op `_jagged_to_padded_dense_forward()`
  CPU impls for these new ATen ops will be added in a follow-up PR.
  [ghstack-poisoned]
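Once the CUDA registrations land, the new ATen ops could be exercised from Python roughly as below. The schemas shown in the comments are assumptions made for illustration (the authoritative signatures are the ones this PR adds to `native_functions.yaml`), so treat the calls as a sketch rather than the final API.

```python
import torch

# Assumed schemas (illustrative only; see the PR's native_functions.yaml changes):
#   _jagged_to_padded_dense_forward(Tensor values, Tensor[] offsets,
#                                   SymInt[] max_lengths, float padding_value=0.0) -> Tensor
#   _padded_dense_to_jagged_forward(Tensor dense, Tensor[] offsets, SymInt? total_L=None) -> Tensor
if torch.cuda.is_available():
    values = torch.randn(5, 4, device="cuda")
    offsets = torch.tensor([0, 2, 2, 5], device="cuda")

    # Pack -> pad: [5, 4] values with row lengths (2, 0, 3) -> [3, 3, 4] padded dense.
    padded = torch.ops.aten._jagged_to_padded_dense_forward(values, [offsets], [3], 0.0)

    # Pad -> pack: round-trip back to the [5, 4] packed values tensor.
    jagged = torch.ops.aten._padded_dense_to_jagged_forward(padded, [offsets], values.shape[0])

    print(padded.shape, jagged.shape)
```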
Update on "Lift jagged -> padded dense forward / backward kernels fro…
…m fbgemm_gpu" PyTorch can't depend on `fbgemm_gpu` as a dependency because `fbgemm_gpu` already has a dependency on PyTorch. So this PR copy / pastes kernels from `fbgemm_gpu`: * `dense_to_jagged_forward()` as CUDA registration for new ATen op `_padded_dense_to_jagged_forward()` * `jagged_to_padded_dense_forward()` as CUDA registration for new ATen op `_jagged_to_padded_dense_forward()` CPU impls for these new ATen ops will be added in a follow-up PR. [ghstack-poisoned]
Configuration menu - View commit details
-
Copy full SHA for 7887b40 - Browse repository at this point
Copy the full SHA 7887b40View commit details
Commits on May 23, 2024
- 490d2e8  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
Commits on May 24, 2024
- de56815  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
Commits on Jun 3, 2024
- 1d4016c  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- a2162fd  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 710b5e2  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]
- 96929ac  Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" [ghstack-poisoned]