
Conversation

@emlin
Contributor

@emlin emlin commented Oct 18, 2025

Summary:
X-link: meta-pytorch/torchrec#3466

X-link: https://github.com/facebookresearch/FBGEMM/pull/2040

In embedding cache mode, a cache miss should not produce random values.
This diff passes the embedding cache mode to the inference operator and uses it to disable the backend's random initialization.

Differential Revision: D84367061
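
To make the intent concrete, here is a minimal, hypothetical sketch of the behavior described above (plain Python, not the actual FBGEMM operator API): when the operator runs in embedding cache mode, rows that miss the cache come back as zeros instead of being filled by the backend's random initializer. The names `CacheBackedLookup`, `lookup_rows`, and `embedding_cache_mode` are illustrative assumptions, not existing identifiers.

```python
import torch

# Hypothetical sketch: an embedding lookup backed by a row cache.
# In embedding cache mode, a cache miss must NOT be filled with random values.
class CacheBackedLookup:
    def __init__(self, dim: int, embedding_cache_mode: bool = False) -> None:
        self.dim = dim
        self.embedding_cache_mode = embedding_cache_mode
        self.cache: dict[int, torch.Tensor] = {}

    def lookup_rows(self, ids: torch.Tensor) -> torch.Tensor:
        out = torch.empty(len(ids), self.dim)
        for i, idx in enumerate(ids.tolist()):
            row = self.cache.get(idx)
            if row is not None:
                out[i] = row
            elif self.embedding_cache_mode:
                # Cache mode: a missing row is returned as zeros,
                # i.e. backend random initialization is disabled.
                out[i] = torch.zeros(self.dim)
            else:
                # Default backend behavior: random-initialize missing rows.
                out[i] = torch.randn(self.dim)
        return out

# Usage: with embedding_cache_mode=True, misses come back as zeros.
op = CacheBackedLookup(dim=4, embedding_cache_mode=True)
print(op.lookup_rows(torch.tensor([0, 1])))
```

With `embedding_cache_mode=False` the sketch falls back to random-initializing missing rows, which is the backend behavior this diff disables for the cache mode.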

@netlify

netlify bot commented Oct 18, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

Name Link
🔨 Latest commit 0830111
🔍 Latest deploy log https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/68f56c68a4ba800008a97bde
😎 Deploy Preview https://deploy-preview-5026--pytorch-fbgemm-docs.netlify.app

@meta-codesync
Contributor

meta-codesync bot commented Oct 18, 2025

@emlin has exported this pull request. If you are a Meta employee, you can view the originating Diff in D84367061.

@meta-cla meta-cla bot added the cla signed label Oct 18, 2025
emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 19, 2025
emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 19, 2025
emlin added a commit to emlin/torchrec that referenced this pull request Oct 19, 2025
emlin added a commit to emlin/FBGEMM that referenced this pull request Oct 20, 2025
@meta-codesync meta-codesync bot closed this in 98ee828 Oct 20, 2025
@meta-codesync
Contributor

meta-codesync bot commented Oct 20, 2025

This pull request has been merged in 98ee828.

meta-codesync bot pushed a commit to meta-pytorch/torchrec that referenced this pull request Oct 20, 2025
Summary:
X-link: pytorch/FBGEMM#5026

Pull Request resolved: #3466

X-link: https://github.com/facebookresearch/FBGEMM/pull/2040

In embedding cache mode, a cache miss should not produce random values.
This diff passes the embedding cache mode to the inference operator and uses it to disable the backend's random initialization.

Differential Revision: D84367061

fbshipit-source-id: 83687bcb7c097f60b583c00bf80956efcdcd3a9d
