Swap IntNBit TBE Kernel with SSD Embedding DB TBE Kernel for SSD Inference Enablement #3134
Conversation
This pull request was exported from Phabricator. Differential Revision: D76953960
Force-pushed from 63aadbb to 76fce37
Force-pushed from 76fce37 to bf3b57a
@@ -224,6 +224,7 @@ def __init__(
    self._is_weighted: bool = module.is_weighted()
    self._lookups: List[nn.Module] = []
    self._create_lookups(fused_params, device)
+   self._fused_params = fused_params
Suggested change:
- self._fused_params = fused_params
+ self.fused_params = fused_params
looks like this attr is used externally, remove the _
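The point of dropping the leading underscore is that fused_params becomes part of the sharded module's public surface, so external code (for example a rewrite pass) can read it without reaching into private state. A minimal, hypothetical illustration of such an external read (read_fused_params is a made-up helper, not from this PR):

```python
from typing import Any, Dict, Optional

import torch.nn as nn


def read_fused_params(sharded_module: nn.Module) -> Optional[Dict[str, Any]]:
    # Public attribute: an external pass can read it directly instead of
    # depending on a private `_fused_params` name that may change.
    return getattr(sharded_module, "fused_params", None)
```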
Force-pushed from bf3b57a to dfe6abf
Force-pushed from dfe6abf to 415e11e
Force-pushed from 415e11e to 68a5c89
Swap IntNBit TBE Kernel with SSD Embedding DB TBE Kernel for SSD Inference Enablement (pytorch#3134)

Summary: Pull Request resolved: pytorch#3134

For SSD inference, we have added EmbeddingDB as a custom in-house storage that is not exposed to OSS. We leverage the TGIF stack to rewrite the IntNBit TBE kernel into the SSD EmbeddingDB TBE kernel, since the SSD TBE embedding kernel can't be exposed within the TorchRec code base. Additionally, SSD support is only provided in di_sharding_pass, and SSD can be enabled without adding extra DI shards. In that case, the tables assigned to the CPU host can simply be TW-sharded, so the TW sharding logic is added accordingly.

Reviewed By: gyllstromk

Differential Revision: D76953960
Force-pushed from 68a5c89 to 1dcf8b2
Summary:
For SSD inference, we have added EmbeddingDB as a custom in-house storage that is not exposed to OSS. We leverage the TGIF stack to rewrite the IntNBit TBE kernel into the SSD EmbeddingDB TBE kernel, since the SSD TBE embedding kernel can't be exposed within the TorchRec code base.
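As a rough illustration of the rewrite idea only (not the actual TGIF pass; EmbeddingDB is internal, so SSDEmbeddingDBTBE and swap_intnbit_with_ssd_tbe below are hypothetical placeholder names), a module-swap pass could walk the inference model and replace each IntNBit TBE module with an SSD-backed one:

```python
from typing import List, Tuple

import torch
import torch.nn as nn


class SSDEmbeddingDBTBE(nn.Module):
    """Hypothetical stand-in for the internal SSD EmbeddingDB TBE kernel."""

    def __init__(self, intnbit_module: nn.Module) -> None:
        super().__init__()
        # The real kernel would carry over table specs, quantization config,
        # and fused_params from the IntNBit module here.
        self.source_type = type(intnbit_module).__name__

    def forward(self, indices: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError("lookups would be served from SSD-backed EmbeddingDB storage")


def swap_intnbit_with_ssd_tbe(model: nn.Module) -> nn.Module:
    """Recursively replace IntNBitTableBatchedEmbeddingBagsCodegen children
    with the SSD-backed placeholder; all other modules are left untouched."""
    replacements: List[Tuple[str, nn.Module]] = []
    for name, child in model.named_children():
        # Match by class name to avoid depending on a specific fbgemm_gpu import path.
        if type(child).__name__ == "IntNBitTableBatchedEmbeddingBagsCodegen":
            replacements.append((name, SSDEmbeddingDBTBE(child)))
        else:
            swap_intnbit_with_ssd_tbe(child)
    for name, new_child in replacements:
        setattr(model, name, new_child)
    return model
```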
Additionally, SSD support is only provided in di_sharding_pass, and SSD can be enabled without adding extra DI shards. In that case, the tables assigned to the CPU host can simply be TW-sharded, so the TW sharding logic is added accordingly.
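A minimal sketch of that TW fallback, under stated assumptions: TablePlacement and decide_sharding_type are illustrative names, not the actual di_sharding_pass code, and ShardingType is assumed to come from torchrec.distributed.types.

```python
from dataclasses import dataclass

from torchrec.distributed.types import ShardingType  # assumed import path


@dataclass
class TablePlacement:
    table_name: str
    device_type: str  # e.g. "cpu" or "cuda"


def decide_sharding_type(placement: TablePlacement, ssd_enabled: bool) -> str:
    # Under SSD inference with no extra DI shards, a table that stays on the
    # CPU host is kept whole on that host via table-wise sharding.
    if ssd_enabled and placement.device_type == "cpu":
        return ShardingType.TABLE_WISE.value
    # Otherwise defer to whatever the regular DI sharding pass would pick
    # (row-wise shown here purely as a placeholder default).
    return ShardingType.ROW_WISE.value


# Example: a CPU-resident table under SSD inference gets TW sharding.
print(decide_sharding_type(TablePlacement("user_id_table", "cpu"), ssd_enabled=True))  # "table_wise"
```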
Differential Revision: D76953960