# XNNPACK

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.

## Supported Architectures

- ARM64 on Android and Linux
- ARMv7 (with NEON) on Android and Linux
- WebAssembly MVP
- WebAssembly SIMD (experimental)
- x86 and x86-64 (up to AVX512) on Android, Linux, and macOS

## Operator Coverage

XNNPACK implements the following neural network operators (a usage sketch in C follows the list):

- 2D Convolution (including grouped and depthwise)
- 2D Deconvolution (AKA Transposed Convolution)
- 2D Average Pooling
- 2D Max Pooling
- 2D ArgMax Pooling (Max Pooling + indices)
- 2D Unpooling
- 2D Bilinear Resize
- Add (including broadcasting, two inputs only)
- Subtract (including broadcasting)
- Divide (including broadcasting)
- Maximum (including broadcasting)
- Minimum (including broadcasting)
- Multiply (including broadcasting)
- Global Average Pooling
- Channel Shuffle
- Fully Connected
- Clamp (includes ReLU and ReLU6)
- HardSwish
- Sigmoid
- PReLU
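
For orientation, here is a minimal sketch of creating, running, and destroying one F32 Sigmoid operator through the C API. It assumes the `xnn_create_sigmoid_nc_f32`/`xnn_setup_sigmoid_nc_f32` entry points declared in `xnnpack.h`; exact signatures (for example, whether `xnn_initialize` accepts an allocator argument) can differ between versions, so treat this as illustrative and consult the header in your checkout.

```c
#include <stdio.h>
#include <xnnpack.h>

int main(void) {
  // One-time library initialization: detects CPU features and selects
  // micro-kernels. NULL requests the default memory allocator.
  if (xnn_initialize(NULL) != xnn_status_success) {
    fprintf(stderr, "failed to initialize XNNPACK\n");
    return 1;
  }

  enum { kBatchSize = 4, kChannels = 16 };
  float input[kBatchSize * kChannels];
  float output[kBatchSize * kChannels];
  for (int i = 0; i < kBatchSize * kChannels; i++) {
    input[i] = 0.01f * (float) (i - 32);
  }

  // Create a Sigmoid operator over NC-layout F32 data. For densely
  // packed tensors, the channel strides equal the channel count.
  xnn_operator_t sigmoid_op = NULL;
  if (xnn_create_sigmoid_nc_f32(
          kChannels, /*input_stride=*/kChannels, /*output_stride=*/kChannels,
          /*flags=*/0, &sigmoid_op) != xnn_status_success) {
    fprintf(stderr, "failed to create Sigmoid operator\n");
    return 1;
  }

  // Bind pointers and batch size, then run. A NULL threadpool executes
  // the operator on the calling thread.
  xnn_setup_sigmoid_nc_f32(sigmoid_op, kBatchSize, input, output,
                           /*threadpool=*/NULL);
  xnn_run_operator(sigmoid_op, /*threadpool=*/NULL);

  printf("sigmoid(%f) = %f\n", input[0], output[0]);
  xnn_delete_operator(sigmoid_op);
  return 0;
}
```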

All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, an operator can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, enabling zero-cost Channel Split and Channel Concatenation operations.
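
As a hypothetical illustration of this stride mechanism, the sketch below splits a 48-channel tensor into two 24-channel outputs without copying: both Sigmoid operators read from the same buffer, and `input_stride` makes each one skip over the channels that belong to the other half. The names and argument order are assumptions carried over from the sketch above.

```c
#include <xnnpack.h>

// Zero-cost Channel Split: two operators view disjoint channel ranges
// of one NHWC tensor in place; no slicing copy is ever made.
enum { kPixels = 8 /* N*H*W */, kChannels = 48, kHalf = kChannels / 2 };

void split_sigmoid(const float* input /* kPixels x kChannels */,
                   float* first_half /* kPixels x kHalf */,
                   float* second_half /* kPixels x kHalf */) {
  xnn_operator_t lo_op = NULL;
  xnn_operator_t hi_op = NULL;

  // Each operator processes kHalf channels per pixel, but advances by
  // the full kChannels when stepping to the next pixel of the input.
  xnn_create_sigmoid_nc_f32(kHalf, /*input_stride=*/kChannels,
                            /*output_stride=*/kHalf, /*flags=*/0, &lo_op);
  xnn_create_sigmoid_nc_f32(kHalf, /*input_stride=*/kChannels,
                            /*output_stride=*/kHalf, /*flags=*/0, &hi_op);

  // The second view simply starts kHalf floats into the same buffer.
  xnn_setup_sigmoid_nc_f32(lo_op, kPixels, input, first_half, NULL);
  xnn_setup_sigmoid_nc_f32(hi_op, kPixels, input + kHalf, second_half, NULL);

  xnn_run_operator(lo_op, NULL);
  xnn_run_operator(hi_op, NULL);

  xnn_delete_operator(lo_op);
  xnn_delete_operator(hi_op);
}
```

Channel Concatenation works the same way in reverse: point two operators' outputs into a shared buffer with `output_stride` set to the combined channel count.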

## Performance

### Mobile phones

The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

| Model              | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|:-------------------|----------:|------------:|-------------:|
| MobileNet v1 1.0X  |        81 |          89 |           88 |
| MobileNet v2 1.0X  |        48 |          55 |           54 |
| MobileNet v3 Large |        40 |          44 |           44 |
| MobileNet v3 Small |        12 |          14 |           14 |

The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

| Model              | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|:-------------------|----------:|------------:|-------------:|
| MobileNet v1 1.0X  |        45 |          27 |           46 |
| MobileNet v2 1.0X  |        28 |          18 |           28 |
| MobileNet v3 Large |        23 |          16 |           24 |
| MobileNet v3 Small |         7 |           6 |            8 |

Benchmarked on January 9, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.

### Raspberry Pi

The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.

| Model              | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
|:-------------------|--------------------:|-----------------------:|--------------------:|
| MobileNet v1 1.0X  |                 380 |                    115 |                  76 |
| MobileNet v2 1.0X  |                 217 |                     80 |                  45 |
| MobileNet v3 Large |                 180 |                     67 |                  41 |
| MobileNet v3 Small |                  57 |                     23 |                  15 |

Benchmarked on January 9, 2020 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.

## Publications

- Marat Dukhan and Artsiom Ablavatski. [The Two-Pass Softmax Algorithm](https://arxiv.org/abs/2001.04438) (arXiv:2001.04438).

## Acknowledgements

XNNPACK is based on the QNNPACK library. Unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.
