
[Quant] make x86 the default quantization backend (qengine) #91235

Closed

Conversation

Xia-Weiwen
Collaborator

@Xia-Weiwen Xia-Weiwen commented Dec 21, 2022

Summary
Make x86 the default quantization backend (qengine) for X86 CPU platforms.
X86 is a unified quantization backend that combines the strengths of fbgemm and onednn. For more details, please see #83888
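As a rough sketch of what "default backend" means here: the qengine is chosen by a priority order, with the unified x86 backend preferred on x86_64 CPUs. The engine strings below match PyTorch's engine names, but the helper function itself is illustrative, not the actual implementation.

```python
def default_qengine(supported_engines):
    """Illustrative sketch: pick the default quantization engine.

    Prefers the unified 'x86' backend (this PR's change), then 'fbgemm',
    then the portable 'qnnpack' backend as a last resort.
    """
    for engine in ("x86", "fbgemm", "qnnpack"):
        if engine in supported_engines:
            return engine
    return "none"
```

For example, on an x86_64 build where all three engines are compiled in, `default_qengine({"x86", "fbgemm", "qnnpack"})` returns `"x86"`.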

Test plan
python test/test_quantization.py

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @leslie-fang-intel @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10

@pytorch-bot

pytorch-bot bot commented Dec 21, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91235

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 1bdd152:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@Xia-Weiwen Xia-Weiwen added oncall: quantization Quantization support in PyTorch ciflow/trunk Trigger trunk jobs on your pull request intel This tag is for PR from Intel labels Dec 21, 2022
@XiaobingSuper XiaobingSuper marked this pull request as ready for review January 3, 2023 08:58
@atalman atalman added this to the 2.0.0 milestone Jan 11, 2023
Contributor

@malfet malfet left a comment


The PR description does not match the actual implementation - i.e. it makes x86 the default backend for all platforms (not just x86, but ARM as well).

If this quantization backend is supposed to work on all CPUs, let's rename it to something else.
If not, please guard the check with #if defined(__x86_64__) || defined(WHATEVER_IT_SHOULD_BE_FOR_WINDOWS)

@Xia-Weiwen
Collaborator Author

> The PR description does not match the actual implementation - i.e. it makes x86 the default backend for all platforms (not just x86, but ARM as well).
>
> If this quantization backend is supposed to work on all CPUs, let's rename it to something else. If not, please guard the check with #if defined(__x86_64__) || defined(WHATEVER_IT_SHOULD_BE_FOR_WINDOWS)

Thanks @malfet. The x86 backend is only enabled when fbgemm is enabled. It looks like fbgemm only supports x86_64 platforms, right? In that case, the x86 backend is enabled on x86_64 platforms only, and an additional guard is not needed.
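The gating argument above can be sketched as a small predicate. This is illustrative only (the function names are hypothetical, not PyTorch APIs): the x86 qengine is available only when fbgemm is enabled, and fbgemm itself is built only for x86_64, so the platform guard is implied transitively.

```python
import platform

def fbgemm_supported(machine=None):
    """Illustrative: fbgemm targets 64-bit x86 only.

    'AMD64' is what platform.machine() reports on Windows x86_64.
    """
    machine = machine or platform.machine()
    return machine in ("x86_64", "AMD64")

def x86_qengine_enabled(fbgemm_enabled, machine=None):
    """Illustrative: the x86 backend is enabled only when fbgemm is,
    which in turn requires an x86_64 machine - so no separate
    #if defined(__x86_64__) guard is needed."""
    return fbgemm_enabled and fbgemm_supported(machine)
```

On an ARM machine, `fbgemm_supported("aarch64")` is False, so `x86_qengine_enabled` is False regardless of build flags, matching the conclusion in the thread.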

@malfet
Contributor

malfet commented Jan 12, 2023

> Thanks @malfet. The x86 backend is only enabled when fbgemm is enabled. It looks like fbgemm only supports x86_64 platforms, right? In that case, the x86 backend is enabled on x86_64 platforms only, and an additional guard is not needed.

Good point, looks good to me then.

@malfet
Contributor

malfet commented Jan 12, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.
