quantization: make x86 as default backend (part 1) #88799
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88799
Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 4c6c326. This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 19bf73976b8339edd9ac659e6943b29abfc17410 Pull Request resolved: #88799
// The X86 qengine is available if and only if FBGEMM is available
engines.push_back(at::kX86);
engines.push_back(at::kFBGEMM);
The order is not important now?
Maybe; I just reverted @jerryzh168's PR changes.
The order is still important: whatever comes last is the default, and we want to fix it in a separate PR.
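The "last one wins" rule can be sketched in a few lines. This is a hypothetical pure-Python illustration of the selection logic in `Context::supportedQEngines()`, not the actual PyTorch implementation; the engine names mirror `at::QEngine` values.

```python
# Hypothetical sketch of supportedQEngines(); not the real PyTorch code.
def supported_qengines(has_fbgemm: bool) -> list:
    engines = ["none"]  # NoQEngine is always available
    if has_fbgemm:
        # Order from the diff above: X86 first, FBGEMM last.
        engines.append("x86")
        engines.append("fbgemm")
    return engines

def default_qengine(engines: list) -> str:
    # Whatever was registered last is picked as the default.
    return engines[-1]

print(default_qengine(supported_qengines(True)))   # fbgemm
print(default_qengine(supported_qengines(False)))  # none
```

With FBGEMM available, `fbgemm` is still registered last, so it remains the default engine in this PR.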
@XiaobingSuper feel free to merge when ready
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
ghstack-source-id: 79ef0f2d4b935bed48cb5b3c78c16751c4d8e826 Pull Request resolved: #88799
ghstack-source-id: 401edd8d74f126a372abb910671c8ebaf1eec3c5 Pull Request resolved: #88799
ghstack-source-id: 18811cac5104f788cef4183322c2987926cf38b1 Pull Request resolved: #88799
@pytorchbot merge
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@@ -332,8 +332,8 @@ const std::vector<at::QEngine>& Context::supportedQEngines() {
#ifdef USE_FBGEMM
  if (fbgemm::fbgemmSupportedCPU()) {
    // The X86 qengine is available if and only if FBGEMM is available
    engines.push_back(at::kX86);
Actually, this PR does not make x86 the default quantized engine yet; I believe with the current code you'd need to switch the order. But I have #89804, and maybe you can change the default more explicitly after my PR lands.
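To make the "switch the order" point concrete, here is a hypothetical sketch (again pure Python, not the real PyTorch code): since the default engine is whichever one is registered last, swapping the two `push_back` calls is what would flip the default from fbgemm to x86.

```python
# Hypothetical sketch; not the real PyTorch code.
def build_engines(x86_last: bool) -> list:
    engines = ["none"]
    if x86_last:
        engines += ["fbgemm", "x86"]  # swapped order: x86 becomes the default
    else:
        engines += ["x86", "fbgemm"]  # order in this PR: fbgemm stays default
    return engines

print(build_engines(x86_last=False)[-1])  # fbgemm
print(build_engines(x86_last=True)[-1])   # x86
```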
Since this PR has already landed, I changed the PR title to avoid misunderstanding. @XiaobingSuper, could you please follow up with the changes after #89804 lands? Thanks!
Pull Request resolved: pytorch#88799 Approved by: https://github.com/kit1980
Stack from ghstack (oldest at bottom):
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel