Fix vec128_half_neon.h compilation with GCC #139235
Conversation
`mask` is already `uint16x8_t`; there is no need to reinterpret it.
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/139235

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures as of commit 6d9c71a with merge base bd369bb.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge -f "Lint + relevant builds have passed"

Merge started: your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
`mask` is already defined as `uint16x8_t`; no need to reinterpret it:
https://github.com/pytorch/pytorch/blob/bd369bb18258fc3be5ee91f8fcaf06a4b6fc41a7/aten/src/ATen/cpu/vec/vec128/vec128_half_neon.h#L220

Fixes
```
/var/lib/jenkins/workspace/aten/src/ATen/cpu/vec/vec128/vec128_half_neon.h: In static member function 'static at::vec::DEFAULT::Vectorized<c10::Half> at::vec::DEFAULT::Vectorized<c10::Half>::set(const at::vec::DEFAULT::Vectorized<c10::Half>&, const at::vec::DEFAULT::Vectorized<c10::Half>&, int64_t)':
/var/lib/jenkins/workspace/aten/src/ATen/cpu/vec/vec128/vec128_half_neon.h:227:39: error: cannot convert 'uint16x8_t' to 'float16x8_t'
  227 |           vreinterpretq_u16_f16(mask),
      |                                 ^~~~
      |                                 |
      |                                 uint16x8_t
In file included from /var/lib/jenkins/workspace/aten/src/ATen/cpu/vec/intrinsics.h:23,
                 from /var/lib/jenkins/workspace/aten/src/ATen/cpu/vec/vec128/vec128.h:4,
                 from /var/lib/jenkins/workspace/aten/src/ATen/cpu/vec/vec.h:6,
                 from /var/lib/jenkins/workspace/aten/src/ATen/test/vec_test_all_types.h:2,
                 from /var/lib/jenkins/workspace/aten/src/ATen/test/vec_test_all_types.cpp:1:
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h:5841:36: note: initializing argument 1 of 'uint16x8_t vreinterpretq_u16_f16(float16x8_t)'
 5841 | vreinterpretq_u16_f16 (float16x8_t __a)
      |                        ~~~~~~~~~~~~^~~
```
introduced by pytorch#137911

Also, guard any use of NEON intrinsics in `ReducedPrecisionFloatGemvFastPathKernel.cpp` with `!defined(CPU_CAPABILITY_SVE)`, otherwise compilation fails with
```
/var/lib/jenkins/workspace/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp: In function 'float at::native::SVE256::reduce(at::vec::SVE256::VectorizedN<c10::Half, 16>&)':
/var/lib/jenkins/workspace/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp:77:24: error: cannot convert 'at::vec::SVE256::Vectorized<float>' to 'float32x4_t'
   77 |     return vaddvq_f32(t0 + t1);
      |                       ~~~^~~~
      |                          |
      |                          at::vec::SVE256::Vectorized<float>
In file included from /var/lib/jenkins/workspace/c10/util/Half.h:51,
                 from /var/lib/jenkins/workspace/c10/util/Float8_e5m2.h:17,
                 from /var/lib/jenkins/workspace/c10/core/ScalarType.h:8,
                 from /var/lib/jenkins/workspace/c10/core/TensorImpl.h:11,
                 from /var/lib/jenkins/workspace/c10/core/GeneratorImpl.h:8,
                 from /var/lib/jenkins/workspace/aten/src/ATen/core/Generator.h:18,
                 from /var/lib/jenkins/workspace/aten/src/ATen/CPUGeneratorImpl.h:3,
                 from /var/lib/jenkins/workspace/aten/src/ATen/Context.h:4,
                 from /var/lib/jenkins/workspace/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp:2,
                 from /var/lib/jenkins/workspace/build/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp.SVE256.cpp:1:
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h:10423:25: note: initializing argument 1 of 'float32_t vaddvq_f32(float32x4_t)'
10423 | vaddvq_f32 (float32x4_t __a)
      |             ~~~~~~~~~~~~^~~
In file included from /var/lib/jenkins/workspace/build/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp.SVE256.cpp:1:
/var/lib/jenkins/workspace/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp: In function 'float at::native::SVE256::reduce(at::vec::SVE256::Vectorized<float>)':
/var/lib/jenkins/workspace/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp:119:21: error: cannot convert 'at::vec::SVE256::Vectorized<float>' to 'float32x4_t'
  119 |   return vaddvq_f32(x);
      |                     ^
      |                     |
      |                     at::vec::SVE256::Vectorized<float>
```
Pull Request resolved: pytorch#139235
Approved by: https://github.com/huydhn
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10