
Fix AMD FP8 Test and use native rowwise quantization in benchmark #2849

Closed

Conversation

jwfromm (Contributor) commented Jul 15, 2024

Summary:
Fix a minor test issue where Triton blockwise quantization was running on AMD despite not being supported there.
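A minimal sketch of such a guard. PyTorch sets `torch.version.hip` to a version string on ROCm/AMD builds and to `None` on CUDA builds, which is a common way to skip unsupported tests; the helper and stand-in objects below are illustrative, not FBGEMM's actual test code:

```python
import types


def is_rocm(version) -> bool:
    """True when a torch-like version object reports a HIP (AMD) build.

    PyTorch sets torch.version.hip to a version string on ROCm builds
    and to None on CUDA builds, so tests for unsupported kernels
    (e.g. Triton blockwise quantization) can be skipped on AMD.
    """
    return getattr(version, "hip", None) is not None


# Stand-in version objects so the sketch runs without torch installed:
rocm_build = types.SimpleNamespace(hip="6.0.2", cuda=None)
cuda_build = types.SimpleNamespace(hip=None, cuda="12.1")

print(is_rocm(rocm_build))  # True: skip the Triton blockwise test
print(is_rocm(cuda_build))  # False: run it
```

In a real test this predicate would feed `unittest.skipIf` (or `pytest.mark.skipif`) so the Triton blockwise case never executes on AMD.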

Also switch rowwise quantization in our FP8 benchmarks to the native HIP implementation.
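The benchmark-side change can be sketched as a simple backend dispatch (the function and backend names here are hypothetical; the real benchmark invokes FBGEMM's quantization ops directly):

```python
def pick_rowwise_fp8_backend(on_rocm: bool) -> str:
    """Choose the rowwise FP8 quantization path for the benchmark.

    On AMD (ROCm/HIP) prefer the native HIP implementation; elsewhere
    keep the Triton kernel. Backend names are illustrative only.
    """
    return "native_hip_rowwise" if on_rocm else "triton_rowwise"


print(pick_rowwise_fp8_backend(True))   # native_hip_rowwise
print(pick_rowwise_fp8_backend(False))  # triton_rowwise
```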

Reviewed By: jianyuh

Differential Revision: D59771162

netlify bot commented Jul 15, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: 75ff00a
🔍 Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/66969f4e3113b40008751659
😎 Deploy Preview: https://deploy-preview-2849--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D59771162


jwfromm added a commit to jwfromm/FBGEMM that referenced this pull request Jul 16, 2024
…torch#2849)

jwfromm added a commit to jwfromm/FBGEMM that referenced this pull request Jul 16, 2024
…torch#2849)

@facebook-github-bot (Contributor)

This pull request has been merged in 57a5969.
