
Prepare transpose_embedding_input for VLE #1634

Closed
wants to merge 1 commit

Conversation

@sryap (Contributor) commented Mar 9, 2023

Summary:
Prepare `transpose_embedding_input` for variable length TBE (VLE).

  • Update the frontend API to accept VLE args
  • Change `linearize_index_kernel` to generate a new `info` value for the
    pooled TBE backward pass. The bag ID (`b`) and table ID (`t`) are
    packed into a single 32-bit `info` variable.
    • The lower `info_B_num_bits` bits store `b` (`b` < `max_B`), so the
      supported `max_B` is `2^info_B_num_bits`.
    • The upper `32 - info_B_num_bits` bits store `t` (`t` < `T`), so the
      supported `T` is `2^(32 - info_B_num_bits)`.
    • Although this change is mainly intended to avoid a binary search in
      the backward pass, it is also applied to the non-VLE cases for
      easier code maintenance.
  • Update all backward kernels (including Triton's) to process the new
    `info`

Differential Revision: D43256880
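The bit layout described above can be sketched as follows. This is an illustrative Python model of the packing, not the actual CUDA kernel; `INFO_B_NUM_BITS = 26` is an example value chosen only for this sketch, not necessarily what FBGEMM uses.

```python
# Illustrative model of the 32-bit `info` packing described in the summary.
# INFO_B_NUM_BITS is an example value; the real info_B_num_bits is a
# runtime parameter of the kernel.
INFO_B_NUM_BITS = 26                      # supports max_B = 2**26 bags
INFO_B_MASK = (1 << INFO_B_NUM_BITS) - 1  # mask for the lower (bag ID) bits

def pack_info(b: int, t: int) -> int:
    """Pack bag ID `b` into the lower bits and table ID `t` into the upper bits."""
    assert 0 <= b < (1 << INFO_B_NUM_BITS), "b must fit in info_B_num_bits bits"
    assert 0 <= t < (1 << (32 - INFO_B_NUM_BITS)), "t must fit in the upper bits"
    return (t << INFO_B_NUM_BITS) | b

def unpack_info(info: int) -> tuple[int, int]:
    """Recover (b, t) from a packed 32-bit `info` value."""
    return info & INFO_B_MASK, info >> INFO_B_NUM_BITS
```

With 26 bits for `b`, up to 2^26 bags and 2^6 tables fit in one 32-bit word; shrinking `info_B_num_bits` trades bag capacity for table capacity.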

@netlify netlify bot commented Mar 9, 2023

Deploy Preview for pytorch-fbgemm-docs canceled.

Latest commit: ceecab6
Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/64594860d68d9900076aaaf9

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D43256880


sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 14, 2023
sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 14, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 21, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 21, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 27, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 27, 2023
sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 27, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 27, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request Mar 29, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 2, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 2, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 3, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 3, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 5, 2023
sryap added a commit to sryap/FBGEMM that referenced this pull request May 5, 2023

sryap added a commit to sryap/FBGEMM that referenced this pull request May 8, 2023

Summary:
Pull Request resolved: pytorch#1634

Prepare `transpose_embedding_input` for variable batch size TBE (VBE).

- Update the frontend API to accept VBE args
- Change `linearize_index_kernel` to generate a new `info` value for the
  pooled TBE backward pass. The bag ID (`b`) and table ID (`t`) are
  packed into a single 32-bit `info` variable.
  - The lower `info_B_num_bits` bits store `b` (`b` < `max_B`), so the
    supported `max_B` is `2^info_B_num_bits`.
  - The upper `32 - info_B_num_bits` bits store `t` (`t` < `T`), so the
    supported `T` is `2^(32 - info_B_num_bits)`.
  - Although this change is mainly intended to avoid a binary search in
    the backward pass, it is also applied to the non-VBE cases for
    easier code maintenance.
- Update all backward kernels (including Triton's) to process the new
  `info`

Reviewed By: jianyuh

Differential Revision: D43256880

fbshipit-source-id: 4ec4ef7e6c922e5b6c13ed735d9264851e79419c
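As context for the "avoiding binary search" point in the summary above: without a packed `info`, mapping a sorted linear index back to its table requires searching a cumulative-offset array; with the packing, the table ID is recovered by a single shift. The sketch below is illustrative only: the offsets and `INFO_B_NUM_BITS` value are made up for the example, and the real kernels are CUDA, not Python.

```python
import bisect

# Hypothetical cumulative row offsets for T = 3 tables (illustration only):
# table t owns linear rows [table_offsets[t], table_offsets[t + 1]).
table_offsets = [0, 100, 250, 700]

def table_id_by_search(linear_index: int) -> int:
    # Old-style lookup: O(log T) binary search per index in the backward pass.
    return bisect.bisect_right(table_offsets, linear_index) - 1

INFO_B_NUM_BITS = 26  # example value, matching the bit layout in the summary

def table_id_from_info(info: int) -> int:
    # New-style lookup: O(1), the table ID is just the upper bits of `info`.
    return info >> INFO_B_NUM_BITS
```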

@facebook-github-bot (Contributor)

This pull request has been merged in f4c83b4.
