
quantization: make x86 as default backend (part 1) #88799

Closed

Conversation

@pytorch-bot bot commented Nov 10, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88799

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4c6c326:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions github-actions bot added the oncall: quantization (Quantization support in PyTorch) label Nov 10, 2022
XiaobingSuper added a commit that referenced this pull request Nov 10, 2022
ghstack-source-id: 19bf73976b8339edd9ac659e6943b29abfc17410
Pull Request resolved: #88799
Comment on lines -335 to 337:

```cpp
    // The X86 qengine is available if and only if FBGEMM is available
    engines.push_back(at::kX86);
    // The X86 qengine is available if and only if FBGEMM is available
    engines.push_back(at::kFBGEMM);
```
Collaborator

The order is not important now?

Collaborator Author

Maybe; I just reverted @jerryzh168's PR changes.

Contributor

The order is still important: whatever comes last is the default, and we want to fix that in a separate PR.
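
To make the "last one wins" behavior concrete, here is a minimal standalone sketch. The enum and `defaultQEngine` helper below are hypothetical stand-ins for illustration only; the real list is built by `Context::supportedQEngines()` in aten/src/ATen/Context.cpp:

```cpp
#include <vector>

// Hypothetical illustration of the behavior described above,
// not the actual PyTorch implementation.
enum class QEngine { NoQEngine, FBGEMM, QNNPACK, X86 };

QEngine defaultQEngine(const std::vector<QEngine>& supported) {
  if (supported.empty()) {
    return QEngine::NoQEngine;  // no quantized backend compiled in
  }
  // Whatever was pushed last wins, which is why the push_back order
  // of kX86 and kFBGEMM in the hunk above determines the default.
  return supported.back();
}
```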

@XiaobingSuper XiaobingSuper added the intel priority (matters to intel architecture from performance wise) label Nov 15, 2022
@kit1980 (Member) commented Nov 16, 2022

@XiaobingSuper feel free to merge when ready

XiaobingSuper added a commit that referenced this pull request Nov 21, 2022
ghstack-source-id: 79ef0f2d4b935bed48cb5b3c78c16751c4d8e826
Pull Request resolved: #88799
XiaobingSuper added a commit that referenced this pull request Nov 22, 2022
ghstack-source-id: 401edd8d74f126a372abb910671c8ebaf1eec3c5
Pull Request resolved: #88799
XiaobingSuper added a commit that referenced this pull request Nov 29, 2022
ghstack-source-id: 18811cac5104f788cef4183322c2987926cf38b1
Pull Request resolved: #88799
@XiaobingSuper XiaobingSuper added the ciflow/trunk (Trigger trunk jobs on your pull request) label Nov 29, 2022
@XiaobingSuper (Collaborator, Author) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

```cpp
@@ -332,8 +332,8 @@ const std::vector<at::QEngine>& Context::supportedQEngines() {
 #ifdef USE_FBGEMM
   if (fbgemm::fbgemmSupportedCPU()) {
     // The X86 qengine is available if and only if FBGEMM is available
     engines.push_back(at::kX86);
```
Contributor

Actually, this PR does not make x86 the default quantized engine yet; I believe with the current code you would need to switch the order. I have #89804, though, so perhaps you can change the default more explicitly after my PR lands.
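
Until the default flips, a caller can select the engine explicitly instead of relying on the list order. A minimal sketch, assuming a C++ build where FBGEMM (and therefore the kX86 engine) is compiled in:

```cpp
#include <ATen/ATen.h>

int main() {
  // Opt into the x86 quantized engine explicitly rather than relying on
  // whichever entry supportedQEngines() happens to push last.
  at::globalContext().setQEngine(at::kX86);
  return 0;
}
```

In Python, the corresponding knob is torch.backends.quantized.engine.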

Collaborator

Since this PR has already landed, I changed the PR title to avoid misunderstanding. @XiaobingSuper, could you please follow up with the remaining changes after #89804 lands? Thanks!

@jgong5 jgong5 changed the title from "quantization: make x86 as default backend" to "quantization: make x86 as default backend (part 1)" Dec 1, 2022
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Dec 10, 2022
@facebook-github-bot facebook-github-bot deleted the gh/XiaobingSuper/31/head branch June 8, 2023 15:01
Labels
ciflow/trunk: Trigger trunk jobs on your pull request
intel priority: matters to intel architecture from performance wise
Merged
oncall: quantization: Quantization support in PyTorch
open source
release notes: AO frontend
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

None yet

6 participants