
Fix Torch Compile with FP8 Quantization #2637

Closed
wants to merge 1 commit

Conversation

@jwfromm (Contributor) commented May 28, 2024

Summary: Fixes a few incompatibilities between torch.compile and FP8 quantization. Outputs now look correct, but unfortunately we still don't see any speedup; at least this unblocks further performance analysis.

Reviewed By: jianyuh

Differential Revision: D57885347

@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D57885347


netlify bot commented May 28, 2024

Deploy Preview for pytorch-fbgemm-docs failed.

🔨 Latest commit: a05b6df
🔍 Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/6656695ff68924000835ca2d

@facebook-github-bot (Contributor)

This pull request has been merged in 35fa7be.
