
Support mixed weight precision setup for SSD TBE #1517

Closed
jianyuh wants to merge 1 commit

Conversation

@jianyuh (Member) commented Dec 18, 2022

Summary: Support a mixed weight precision setup for the SSD TBE, so that different embedding tables can use different weight types (e.g., INT8 + INT2 + FP16).

Differential Revision: D41717627
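
For context on what a per-table mixed-precision setup looks like, below is a minimal sketch using the fbgemm_gpu table-batched embedding (TBE) inference front end, where each entry in `embedding_specs` carries its own `SparseType`. It uses the non-SSD `IntNBitTableBatchedEmbeddingBagsCodegen` class purely for illustration; the exact constructor of the SSD TBE touched by this PR is not shown on this page, and the table names, sizes, and locations below are made up.

```python
import torch
from fbgemm_gpu.split_embedding_configs import SparseType
from fbgemm_gpu.split_table_batched_embeddings_ops import (
    EmbeddingLocation,
    IntNBitTableBatchedEmbeddingBagsCodegen,
)

# Each spec is (feature_name, num_embeddings, embedding_dim, weight precision,
# location). The per-table SparseType is what makes the setup "mixed": here
# one table is INT8, one INT2, one FP16, matching the PR summary.
tbe = IntNBitTableBatchedEmbeddingBagsCodegen(
    embedding_specs=[
        ("t0", 10_000, 128, SparseType.INT8, EmbeddingLocation.HOST),
        ("t1", 10_000, 128, SparseType.INT2, EmbeddingLocation.HOST),
        ("t2", 10_000, 128, SparseType.FP16, EmbeddingLocation.HOST),
    ],
    device="cpu",  # keep the smoke test CPU-only
)
tbe.fill_random_weights()  # populate the packed weight buffers

# Standard TBE lookup: one flat indices tensor plus
# (num_tables * batch_size + 1) offsets. With 3 tables and batch size 1,
# the pooled output has shape [1, 3 * 128].
indices = torch.tensor([1, 2, 3], dtype=torch.int32)
offsets = torch.tensor([0, 1, 2, 3], dtype=torch.int32)
out = tbe(indices, offsets)
```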


@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D41717627

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 18, 2022
Summary:
Pull Request resolved: pytorch#1517

Support mixed weight precision (e.g., INT8 + INT2 + FP16).

Differential Revision: D41717627

fbshipit-source-id: 180cb9a810e86fd9e46df0c38df659b4ea83b08f
jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 19, 2022
Summary:
Pull Request resolved: pytorch#1517

Support mixed weight precision (e.g., INT8 + INT2 + FP16).

Reviewed By: jspark1105

Differential Revision: D41717627

fbshipit-source-id: dc9c1b223e9089b3d9f45dbf5e252feb3d9762e5

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 19, 2022
Summary:
Pull Request resolved: pytorch#1517

Support mixed weight precision (e.g., INT8 + INT2 + FP16).

Reviewed By: jspark1105

Differential Revision: D41717627

fbshipit-source-id: e0b75714527e00a87d7105d5e203f02510559ca0

Summary:
Pull Request resolved: pytorch#1517

Support mixed weight precision (e.g., INT8 + INT2 + FP16).

Reviewed By: jspark1105

Differential Revision: D41717627

fbshipit-source-id: 09805996be515f0aa521a99761731d4954496c73

@facebook-github-bot (Contributor) commented

This pull request has been merged in c19cba8.
