
Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu #125946

Closed
wants to merge 22 commits

Commits on May 10, 2024

  1. f3b64f7

  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu"

     PyTorch can't take `fbgemm_gpu` as a dependency because `fbgemm_gpu` already depends on PyTorch, so this PR copies the following kernels from `fbgemm_gpu` (see the sketch below for what they compute):
     * `jagged_to_padded_dense_forward()`
     * `jagged_to_padded_dense_backward()`

     [ghstack-poisoned]

     jbschlosser committed May 10, 2024 (92c3c12)
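A minimal sketch of what the lifted jagged -> padded dense forward computes, written with plain PyTorch ops rather than the copied CUDA kernel. The function name, argument names, and `offsets` layout below are assumptions for illustration only, not the ATen signature added by this PR; the backward pass amounts to gathering the gradient back out of the non-padding positions.

```python
import torch

# Hypothetical reference implementation (illustration only, not the fbgemm_gpu kernel):
# pack each variable-length sequence into a fixed-size (B, max_len, D) buffer.
def jagged_to_padded_dense_reference(values, offsets, max_len, padding_value=0.0):
    """values: (total, D) jagged values; offsets: (B + 1,) prefix sums of lengths."""
    B = offsets.numel() - 1
    D = values.size(1)
    out = values.new_full((B, max_len, D), padding_value)
    for b in range(B):
        start, end = offsets[b].item(), offsets[b + 1].item()
        seq = values[start:end][:max_len]  # truncate sequences longer than max_len
        out[b, :seq.size(0)] = seq
    return out

# Three sequences of lengths 2, 0, and 3, each with 4 features.
values = torch.randn(5, 4)
offsets = torch.tensor([0, 2, 2, 5])
padded = jagged_to_padded_dense_reference(values, offsets, max_len=3)
print(padded.shape)  # torch.Size([3, 3, 4])
```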

Commits on May 14, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu"

     PyTorch can't take `fbgemm_gpu` as a dependency because `fbgemm_gpu` already depends on PyTorch, so this PR copies the following kernels from `fbgemm_gpu`:
     * `dense_to_jagged_forward()`
     * `jagged_to_padded_dense_forward()`
     * `jagged_to_padded_dense_backward()`

     [ghstack-poisoned]

     jbschlosser committed May 14, 2024 (6f8f1cf)

  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 14, 2024, c67a25f)

Commits on May 17, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 17, 2024, 2a1bf0a)
  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 17, 2024, 8f00eca)
  3. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 17, 2024, a948720)
  4. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 17, 2024, d9d2f99)

Commits on May 20, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 20, 2024, b5bed58)
  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 20, 2024, 4d6b24f)
  3. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 20, 2024, bc94bde)

Commits on May 21, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 21, 2024, 39752cf)
  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 21, 2024, 995442a)
  3. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 21, 2024, 794aa8b)

Commits on May 22, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu"

     PyTorch can't take `fbgemm_gpu` as a dependency because `fbgemm_gpu` already depends on PyTorch, so this PR copies kernels from `fbgemm_gpu` (see the sketch below for the padded dense -> jagged direction):
     * `dense_to_jagged_forward()`, registered as the CUDA implementation of the new ATen op `_padded_dense_to_jagged_forward()`
     * `jagged_to_padded_dense_forward()`, registered as the CUDA implementation of the new ATen op `_jagged_to_padded_dense_forward()`

     CPU implementations for these new ATen ops will be added in a follow-up PR.

     [ghstack-poisoned]

     jbschlosser committed May 22, 2024 (5485e1b)

  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 22, 2024, 7887b40)
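For the opposite direction that `_padded_dense_to_jagged_forward()` covers, here is a similarly hedged reference in plain PyTorch; the function and argument names are illustrative assumptions rather than the actual op signature, and the real op is backed by a CUDA kernel rather than a Python loop.

```python
import torch

# Hypothetical reference (illustration only): slice the valid rows back out of the
# padded (B, max_len, D) buffer and concatenate them into (total, D) jagged values.
def padded_dense_to_jagged_reference(dense, offsets):
    """dense: (B, max_len, D) padded batch; offsets: (B + 1,) prefix sums of lengths."""
    B, max_len, _ = dense.shape
    lengths = (offsets[1:] - offsets[:-1]).clamp(max=max_len)
    pieces = [dense[b, : lengths[b].item()] for b in range(B)]
    return torch.cat(pieces, dim=0)

dense = torch.randn(3, 3, 4)
offsets = torch.tensor([0, 2, 2, 5])
values = padded_dense_to_jagged_reference(dense, offsets)
print(values.shape)  # torch.Size([5, 4])
```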

Commits on May 23, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 23, 2024, 490d2e8)

Commits on May 24, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, May 24, 2024, de56815)

Commits on Jun 3, 2024

  1. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, Jun 3, 2024, 1d4016c)
  2. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, Jun 3, 2024, a2162fd)
  3. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, Jun 3, 2024, 710b5e2)
  4. Update on "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu" (jbschlosser, Jun 3, 2024, 96929ac)