Also hipify the fp8 related cuda functions #2834
Conversation
✅ Deploy Preview for pytorch-fbgemm-docs ready!
This pull request was exported from Phabricator. Differential Revision: D59665687

Summary: Pull Request resolved: pytorch#2834. Enable the FP8 quantization routine build on AMD, since it now supports the hip_fp8 type. Differential Revision: D59665687
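The summary refers to FP8 quantization routines. As a rough illustration of what such a routine does, here is a minimal sketch of tensor-wise FP8 scaling: pick a scale so the largest magnitude fits the fp8 range, then clamp. The constant 448.0 (the OCP e4m3 maximum) and the pure-Python round trip are assumptions for illustration only; FBGEMM's actual routines are GPU kernels and this does not reproduce their behavior.

```python
# Hypothetical sketch of tensor-wise FP8 (e4m3) scaled quantization.
# 448.0 is the OCP e4m3 max finite value (an assumption here, not
# taken from the PR); the round trip below only simulates scaling
# and clamping, not the actual 8-bit rounding.
FP8_E4M3_MAX = 448.0

def quantize_fp8(values):
    """Scale values so the largest magnitude maps into the fp8 range."""
    amax = max(abs(v) for v in values)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    # Clamp to the representable range after scaling.
    q = [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / scale)) for v in values]
    return q, scale

def dequantize_fp8(q, scale):
    """Undo the scaling to recover approximate original values."""
    return [v * scale for v in q]

q, s = quantize_fp8([0.5, -2.0, 896.0])
print(q, s)                      # scaled values and the scale factor
print(dequantize_fp8(q, s))      # approximately the original values
```

With inputs whose amax is 896.0, the scale is 2.0 and 896.0 maps exactly to the e4m3 maximum of 448.0.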
The pull request was force-pushed through the following commits:

60e0025 to 563837d
563837d to 3453597
3453597 to 610a5a3
610a5a3 to 73520d3
73520d3 to 76dd993
76dd993 to 7b37877
7b37877 to 8f51df1
8f51df1 to e9e1268
e9e1268 to 2bcb5b4
2bcb5b4 to e796269
e796269 to 1115195
1115195 to 52628d7
52628d7 to c053657

Each push was re-exported from Phabricator (Differential Revision: D59665687) with the same summary, later updated to note "Reviewed By: jwfromm".
This pull request has been merged in da410c0.
Summary: Enable the FP8 quantization routine build on AMD, since it now supports the hip_fp8 type.
Differential Revision: D59665687