[aarch64] Fix aarch64 build so that quantize_val_arm is defined #84564
Conversation
✅ No Failures (1 Pending) as of commit ca4696f (more details on the Dr. CI page). Commit ca4696f was recently pushed; waiting for builds. This comment was automatically generated by Dr. CI.
This pull request was exported from Phabricator. Differential Revision: D39272746
In aten/src/ATen/native/quantized/AffineQuantizerBase.cpp:
@@ -33,7 +33,7 @@ void checkZeroPoint(const std::string& fn_name, int64_t zero_point) {
 } // anonymous namespace
-#ifdef USE_FBGEMM
+#if defined(USE_FBGEMM) && !defined(__aarch64__)
Why is this qualification needed?
Otherwise, in aarch64 builds quantize_val_arm doesn't get defined, because it is only defined in the non-FBGEMM case. If there's a better way to deal with this, I'm all for it.
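For context, here is a minimal, self-contained sketch of the guard logic in question. It is not the real AffineQuantizerBase.cpp; the quantize_val_arm signature and body below are simplified placeholders, shown only to illustrate why the symbol disappears when USE_FBGEMM is defined for an aarch64 build unless the guard also checks !defined(__aarch64__).

// Illustrative stand-in for the guard structure; not the real PyTorch sources.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

#if defined(USE_FBGEMM) && !defined(__aarch64__)
// FBGEMM branch: quantization is routed through fbgemm; quantize_val_arm is not defined here.
#else
// Non-FBGEMM branch: placeholder quantize_val_arm (simplified, not the real implementation).
std::uint8_t quantize_val_arm(float scale, std::int32_t zero_point, float value) {
  const std::int32_t qmin = 0;
  const std::int32_t qmax = 255;
  const std::int32_t q =
      zero_point + static_cast<std::int32_t>(std::nearbyint(value / scale));
  return static_cast<std::uint8_t>(std::min(std::max(q, qmin), qmax));
}
#endif

int main() {
#if !defined(USE_FBGEMM) || defined(__aarch64__)
  // With the amended guard, this branch (and quantize_val_arm) is compiled on aarch64
  // even when USE_FBGEMM is defined, so ARM kernel code that calls it still builds.
  std::cout << static_cast<int>(quantize_val_arm(0.5f, 10, 3.0f)) << "\n";  // prints 16
#endif
  return 0;
}

Under the old #ifdef USE_FBGEMM guard, an aarch64 build that also sets USE_FBGEMM takes the FBGEMM branch and loses the definition; the added !defined(__aarch64__) keeps the non-FBGEMM branch, and with it quantize_val_arm, alive on aarch64.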
I don't follow. quantize_val_arm is defined "if you are not USE_FBGEMM". Your changes say define quantize_val_arm "if you are not USE_FBGEMM OR you are aarch64". But if you are building for aarch64 you shouldn't really have USE_FBGEMM=1, so I'm not sure why the build would fail. quantize_val_arm is actually used here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L3373. So if you want to use appropriate macros to define it, a better approach would be to refactor so that quantize_val_arm uses this: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L3274. I do agree, though, that this part of the code could be refactored for better readability; it is a bit of a mess.
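To illustrate the direction being suggested here, a hedged sketch follows: a single scalar helper that is always compiled, with quantize_val_arm as a thin wrapper over it, so its availability no longer depends on the FBGEMM macro. The helper name (detail::quantize_scalar) and the arithmetic are hypothetical and not taken from QuantizedOpKernels.cpp.

// Hypothetical refactor sketch; names and math are placeholders, not PyTorch code.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

namespace detail {
// One shared scalar quantization helper, compiled unconditionally.
inline std::uint8_t quantize_scalar(float scale, std::int32_t zero_point, float value) {
  const std::int32_t qmin = 0;
  const std::int32_t qmax = 255;
  const std::int32_t q =
      zero_point + static_cast<std::int32_t>(std::nearbyint(value / scale));
  return static_cast<std::uint8_t>(std::min(std::max(q, qmin), qmax));
}
}  // namespace detail

// Thin wrapper kept for the existing call sites; no FBGEMM macro is needed
// for the symbol to exist.
inline std::uint8_t quantize_val_arm(float scale, std::int32_t zero_point, float value) {
  return detail::quantize_scalar(scale, zero_point, value);
}

int main() {
  // The wrapper is available in every build configuration.
  std::cout << static_cast<int>(quantize_val_arm(0.5f, 10, 3.0f)) << "\n";  // prints 16
  return 0;
}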
I took the easy way out to get things working. I'll refactor. Thx
Sure, but were you building for aarch64 with USE_FBGEMM=1?
Well, we're trying to use simde to wrap a lot of the AVX/SSE code for aarch64 rather than doing a direct aarch64 port.
I guess ideally we would have QNNPACK for arm on the mobile side for PyTorch, but @psaab is also exploring the option of using arm on the server side (so checking whether FBGEMM works with the simde conversion, in FBGEMM PRs like pytorch/FBGEMM#1271).
Looks OK for now. Do leave a comment explaining a) why this is done, and b) that a better refactor might be to put the _arm variants behind arm-specific macros.
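As a sketch of suggestion (b), and only a sketch: the _arm variants could be selected by architecture macros while FBGEMM-specific code is selected independently by USE_FBGEMM, so the two choices can no longer conflict. This is a header-shaped declaration sketch; the macro combination and the quantize_val_fbgemm name are hypothetical, not the actual layout of the PyTorch sources.

// Header-shaped sketch of architecture-gated _arm variants; hypothetical layout.
#include <cstdint>

#if defined(__ARM_NEON__) || defined(__aarch64__)
// The _arm variants live behind architecture macros only.
std::uint8_t quantize_val_arm(float scale, std::int32_t zero_point, float value);
#endif

#if defined(USE_FBGEMM)
// FBGEMM-backed paths are gated by their own macro, independent of the above.
// (Placeholder declaration; the real FBGEMM path does not use this name.)
std::uint8_t quantize_val_fbgemm(float scale, std::int32_t zero_point, float value);
#endif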
This pull request was exported from Phabricator. Differential Revision: D39272746
The branch was updated from 6c17574 to 577d59b.
@pytorchbot merge -g
@pytorchbot successfully started a merge job. Check the current status here.
Merge failed. Reason: The following mandatory check(s) failed (Rule …). Dig deeper by viewing the failures on hud. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -g
[aarch64] Fix aarch64 build so that quantize_val_arm is defined (pytorch#84564)
Summary: Pull Request resolved: pytorch#84564. quantize_val_arm is used in the kernels when building under aarch64.
Test Plan: CI
Reviewed By: kimishpatel, pallab-zz
Differential Revision: D39272746
fbshipit-source-id: 611a9a7b7f89ca268cd62a9e72db3f4f4c435fb9
@pytorchbot successfully started a merge job. Check the current status here.
Merge failed. Reason: The following mandatory check(s) failed (Rule …). Dig deeper by viewing the failures on hud. Details for Dev Infra team: raised by workflow job.
This pull request was exported from Phabricator. Differential Revision: D39272746
The branch was updated from 577d59b to ca4696f.
@pytorchbot merge -g
@pytorchbot successfully started a merge job. Check the current status here.
Hey @psaab.
[aarch64] Fix aarch64 build so that quantize_val_arm is defined (#84564)
Summary: quantize_val_arm is used in the kernels when building under aarch64.
Pull Request resolved: #84564
Approved by: https://github.com/kimishpatel
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/6bedb7a75e2c6712ef3a8de3283fe44adab4a659
Test plan from GitHub: CI
Original Phabricator Test Plan: CI
Reviewed By: kimishpatel, pallab-zz
Differential Revision: D39272746
fbshipit-source-id: a46dfacc50e34cabefd607be0059b8474017b330
Summary: quantize_val_arm is used in the kernels when building under aarch64
Test Plan: CI
Differential Revision: D39272746