
Conversation

emlin (Contributor) commented on Oct 18, 2025

Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/2040

For embedding cache mode, we do not expect random values when there is a cache miss.
This diff passes the embedding cache mode to the inference operator and uses it to disable the backend's random initialization.

Differential Revision: D84367061
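
Conceptually, the change gates the backend's random row initialization on the cache mode: when cache mode is on, a lookup miss must not return randomly initialized data. A minimal Python sketch of that behavior, using hypothetical names (`InferenceEmbeddingOp`, `embedding_cache_mode`, `enable_random_init`) rather than the actual FBGEMM/TorchRec API:

```python
# Hypothetical sketch only: the class and flag names here are illustrative,
# not the real FBGEMM/TorchRec inference operator interface.
import torch


class InferenceEmbeddingOp:
    def __init__(self, dim: int, embedding_cache_mode: bool = False) -> None:
        self.dim = dim
        # In embedding cache mode, a miss must not produce random values,
        # so the backend-style random initialization is disabled.
        self.enable_random_init = not embedding_cache_mode
        self.cache: dict[int, torch.Tensor] = {}

    def lookup(self, ids: torch.Tensor) -> torch.Tensor:
        rows = []
        for idx in ids.tolist():
            row = self.cache.get(idx)
            if row is None:
                # Cache miss: random init only when allowed; otherwise zeros,
                # so callers can tell the row was never populated.
                row = (
                    torch.randn(self.dim)
                    if self.enable_random_init
                    else torch.zeros(self.dim)
                )
            rows.append(row)
        return torch.stack(rows)


# Usage: with embedding_cache_mode=True, misses come back as zero rows.
op = InferenceEmbeddingOp(dim=4, embedding_cache_mode=True)
print(op.lookup(torch.tensor([1, 2])))
```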

meta-cla bot added the CLA Signed label on Oct 18, 2025
meta-codesync bot (Contributor) commented on Oct 18, 2025

@emlin has exported this pull request. If you are a Meta employee, you can view the originating Diff in D84367061.

emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 18, 2025
emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 19, 2025
emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 20, 2025
meta-codesync bot pushed a commit to pytorch/FBGEMM that referenced this pull request Oct 20, 2025
Summary:
Pull Request resolved: #5026

X-link: meta-pytorch/torchrec#3466

X-link: https://github.com/facebookresearch/FBGEMM/pull/2040

For embedding cache mode, we do not expect random values when there is a cache miss.
This diff passes the embedding cache mode to the inference operator and uses it to disable the backend's random initialization.

Differential Revision: D84367061

fbshipit-source-id: 83687bcb7c097f60b583c00bf80956efcdcd3a9d
meta-codesync bot closed this in af25076 on Oct 20, 2025

Labels

CLA Signed, fb-exported, meta-exported