Add permute_duplicate_pooled_embeddings op #1912
Conversation
This pull request was exported from Phabricator. Differential Revision: D48090591
This pull request has been merged in 9d221b7.
Summary:
Background
Currently, permute_pooled_embs_gpu does not support duplicates in a permutation, which is a problem when the same embeddings must be passed to multiple modules. This PR adds support for duplicate subsets in the resultant permutation.
Details
permute_duplicate_pooled_embs_gpu supports repeating a subset, represented by duplicate entries in the permute list. As a result, the output list can be larger than the input list.
Input: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Offset_dims: [0, 2, 5, 6, 10]
Permute: [3, 0, 2, 1, 3]
Output: [6, 7, 8, 9, 0, 1, 5, 2, 3, 4, 6, 7, 8, 9]
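The example above can be reproduced with a reference sketch in plain Python (this is illustrative only, not the FBGEMM CUDA kernel or its actual API; the helper name is hypothetical). Offset_dims marks the subset boundaries in the flattened input, and the permute list may repeat a subset index:

```python
def permute_duplicate_pooled_embs(values, offset_dims, permute):
    """Gather subsets of `values` in `permute` order; indices may repeat."""
    out = []
    for p in permute:
        # Subset p spans [offset_dims[p], offset_dims[p + 1]) in the input.
        out.extend(values[offset_dims[p]:offset_dims[p + 1]])
    return out

# Worked example from the summary: subset 3 ([6, 7, 8, 9]) appears twice.
result = permute_duplicate_pooled_embs(
    list(range(10)), [0, 2, 5, 6, 10], [3, 0, 2, 1, 3]
)
print(result)  # → [6, 7, 8, 9, 0, 1, 5, 2, 3, 4, 6, 7, 8, 9]
```

Because subset 3 is listed twice, the 14-element output is longer than the 10-element input.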
Backwards Compatibility
permute_duplicate_pooled_embs_gpu is backwards compatible with the permute_pooled_embs_gpu (duplicate-free permutation) use case, so any use of permute_pooled_embs_gpu in the inference path can be replaced with permute_duplicate_pooled_embs_gpu.
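The backwards-compatibility claim can be checked with the same kind of reference sketch (again a hypothetical plain-Python helper, not the FBGEMM API): when the permute list contains no duplicates, the op degenerates to an ordinary permutation and the output has exactly the input's length and contents.

```python
def permute_duplicate_pooled_embs(values, offset_dims, permute):
    """Gather subsets of `values` in `permute` order; indices may repeat."""
    out = []
    for p in permute:
        out.extend(values[offset_dims[p]:offset_dims[p + 1]])
    return out

values = list(range(10))
# Duplicate-free permutation of the four subsets: plain reordering,
# i.e. the original permute_pooled_embs behavior.
out = permute_duplicate_pooled_embs(values, [0, 2, 5, 6, 10], [1, 0, 3, 2])
print(out)              # → [2, 3, 4, 0, 1, 6, 7, 8, 9, 5]
print(len(out) == len(values))  # → True: no growth without duplicates
```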
Documentation
[Link](https://docs.google.com/document/d/1rp31nhcRuVmh8RVbWcktL3Hq-0ccc84AYL4nVegDywI)
Reviewed By: sryap
Differential Revision: D48090591