
TensorFlow Lite 2.8 ARM cross-compilation failed when XNNPACK=ON: unknown type name 'float16x8_t' #54337

@distlibs

Description


System information

  • Linux Ubuntu 20.04
  • TensorFlow 2.8
  • CMake 3.16.3
  • gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf (here)

Describe the problem

I am trying to cross-compile TensorFlow Lite 2.8 with XNNPACK=ON for ARM using CMake, but the build fails with the error "unknown type name 'float16x8_t'":

...
[ 60%] Building C object _deps/xnnpack-build/CMakeFiles/XNNPACK.dir/src/qc8-gemm/gen/2x8c2s4-minmax-fp32-neonv8-mlal.c.o
make[2]: Entering directory '/home/pi/tflite_build'
/home/pi/tflite_build/xnnpack/src/f16-f32-vcvt/gen/vcvt-neonfp16-x16.c: In function ‘xnn_f16_f32_vcvt_ukernel__neonfp16_x16’:
cd /home/pi/tflite_build/_deps/xnnpack-build && /home/pi/toolchains/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc -DCPUINFO_SUPPORTED_PLATFORM=1 -DEIGEN_MPL2_ONLY -DFXDIV_USE_INLINE_ASSEMBLY=0 -DNOMINMAX=1 -DPTHREADPOOL_NO_DEPRECATED_API=1 -DXNN_ENABLE_ASSEMBLY=1 -DXNN_ENABLE_MEMOPT=1 -DXNN_ENABLE_SPARSE=1 -DXNN_LOG_LEVEL=0 -I/home/pi/tflite_build/xnnpack/include -I/home/pi/tflite_build/xnnpack/src -I/home/pi/tflite_build/clog/deps/clog/include -I/home/pi/tflite_build/cpuinfo/include -I/home/pi/tflite_build/pthreadpool-source/include -I/home/pi/tflite_build/FXdiv-source/include -I/home/pi/tflite_build/FP16-source/include  -march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations -O3 -DNDEBUG -fPIC   -Wno-psabi -pthread -std=gnu99  -marm  -march=armv8-a -mfpu=neon-fp-armv8  -O2  -o CMakeFiles/XNNPACK.dir/src/qc8-gemm/gen/2x8c2s4-minmax-fp32-neonv8-mlal.c.o   -c /home/pi/tflite_build/xnnpack/src/qc8-gemm/gen/2x8c2s4-minmax-fp32-neonv8-mlal.c
/home/pi/tflite_build/xnnpack/src/f16-f32-vcvt/gen/vcvt-neonfp16-x16.c:31:11: error: unknown type name ‘float16x8_t’
     const float16x8_t vh0 = vreinterpretq_f16_u16(vld1q_u16(i)); i += 8;
           ^~~~~~~~~~~
/home/pi/tflite_build/xnnpack/src/f16-f32-vcvt/gen/vcvt-neonfp16-x16.c:31:29: warning: implicit declaration of function ‘vreinterpretq_f16_u16’; did you mean ‘vreinterpretq_s16_u16’? [-Wimplicit-function-declaration]
     const float16x8_t vh0 = vreinterpretq_f16_u16(vld1q_u16(i)); i += 8;
                             ^~~~~~~~~~~~~~~~~~~~~
                             vreinterpretq_s16_u16
/home/pi/tflite_build/xnnpack/src/f16-f32-vcvt/gen/vcvt-neonfp16-x16.c:32:11: error: unknown type name ‘float16x8_t’
     const float16x8_t vh1 = vreinterpretq_f16_u16(vld1q_u16(i)); i += 8;
           ^~~~~~~~~~~

I am able to compile TensorFlow Lite 2.7 with XNNPACK=ON for ARM using CMake.

I am using the build instructions provided here.
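
The configure step looks roughly like this (reconstructed from the compile line in the log above; the source directory, the CMAKE_SYSTEM_* values and the TFLITE_ENABLE_XNNPACK option are taken from the linked guide as I remember it, so treat them as assumptions rather than my exact invocation):

ARMCC_PREFIX=/home/pi/toolchains/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-
ARMCC_FLAGS="-march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations"
cmake \
  -DCMAKE_C_COMPILER=${ARMCC_PREFIX}gcc \
  -DCMAKE_CXX_COMPILER=${ARMCC_PREFIX}g++ \
  -DCMAKE_C_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_CXX_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=armv7 \
  -DTFLITE_ENABLE_XNNPACK=ON \
  ../tensorflow/lite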
