# CNN-Inference-Engine-Quick-View

A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices.

## Runtime-speed Comparisons

### FLOAT32-Support

| Framework | Main Platform | Model Compatibility | Detection Support | Speed Benchmarks |
| --- | --- | --- | --- | --- |
| Intel-Caffe | CPU (Intel optimized) | Caffe | Y | Link |
| NCNN | CPU (ARM optimized) | Caffe / PyTorch / MXNet / ONNX | Y | 3rd-party Link / Official Link |
| MNN | CPU (ARM optimized) / Mali GPU | Caffe / TensorFlow / ONNX | Y | Link |
| FeatherCNN | CPU (ARM optimized) | Caffe | N | Link / unofficial Link |
| Tengine | CPU (ARM A72 optimized) | Caffe / MXNet | Y | Link |
| TensorFlow Lite | CPU (Android optimized) | Caffe2 / TensorFlow / ONNX | Y | Link |
| TensorRT | GPU (Volta optimized) | Caffe / TensorFlow / ONNX | Y | Link |
| TVM | CPU (ARM optimized) / Mali GPU / FPGA | ONNX | Y | - |
| SNPE | CPU (Qualcomm optimized) / GPU / DSP | Caffe / Caffe2 / TensorFlow / ONNX | Y | Link |
| MACE | CPU (ARM optimized) / Mali GPU / DSP | Caffe / TensorFlow / ONNX | Y | Link |
| Easy-MACE | CPU (ARM optimized) / CPU (x86 optimized) | Caffe / TensorFlow / ONNX | Y | - |
| In-Prestissimo | CPU (ARM optimized) | Caffe | N | Link |
| Paddle-Mobile | CPU (ARM optimized) / Mali GPU / FPGA | Paddle / Caffe / ONNX | Y | - |
| Anakin | CPU (ARM optimized) / GPU / CPU (x86 optimized) | Caffe / Fluid | Y | Link |
| Pocket-Tensor | CPU (ARM/x86 optimized) | Keras | N | Link |
| ZQCNN | CPU | Caffe / MXNet | Y | Link |
| ARM-NEON-to-x86-SSE | CPU (Intel optimized) | Intrinsics-level | - | - |
| Simd | CPU (all platforms optimized) | Intrinsics-level | - | - |
| clDNN | Intel® Processor Graphics / Iris™ Pro Graphics | Caffe / TensorFlow / MXNet / ONNX | Y | Link |

### FIX16-Support

| Framework | Main Platform | Model Compatibility | Detection Support | Speed Benchmarks |
| --- | --- | --- | --- | --- |
| ARM32-SGEMM-LIB | CPU (ARM optimized) | GEMM library | N | Link |
| Yolov2-Xilinx-PYNQ | FPGA (Xilinx PYNQ) | YOLOv2 only | Y | Link |
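The idea behind FIX16 inference is to store reals as 16-bit integers with an implicit binary point, trading a little precision for cheap integer arithmetic. A minimal sketch in the Q8.8 format (8 integer bits, 8 fractional bits); the helper names are illustrative, not taken from any library above:

```python
# Q8.8 fixed-point: a real x is stored as round(x * 2**8) in a 16-bit integer.
# These helpers are an illustration of the format, not any engine's API.

FRAC_BITS = 8  # number of fractional bits in Q8.8

def to_fix16(x: float) -> int:
    """Encode a real number as a Q8.8 integer."""
    return int(round(x * (1 << FRAC_BITS)))

def from_fix16(q: int) -> float:
    """Decode a Q8.8 integer back to a real number."""
    return q / (1 << FRAC_BITS)

def fix16_mul(a: int, b: int) -> int:
    """Multiply two Q8.8 values; the raw product is Q16.16, so shift back."""
    return (a * b) >> FRAC_BITS

a, b = to_fix16(1.5), to_fix16(2.25)
print(from_fix16(fix16_mul(a, b)))  # 3.375
```

A convolution kernel then runs entirely on integer multiply-accumulate, with one shift per output to renormalize.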

### INT8-Support

| Framework | Main Platform | Model Compatibility | Detection Support | Speed Benchmarks |
| --- | --- | --- | --- | --- |
| Intel-Caffe | CPU (Intel Skylake) | Caffe | Y | Link |
| NCNN | CPU (ARM) | Caffe / PyTorch / MXNet / ONNX | Y | Link |
| TensorFlow Lite | CPU (Android) | Caffe2 / TensorFlow / ONNX | Y | Link |
| TensorRT | GPU (Volta) | Caffe / TensorFlow / ONNX | Y | Link |
| Gemmlowp | CPU (ARM / x86) | GEMM library | - | - |
| SNPE | DSP (quantized DLC) | Caffe / Caffe2 / TensorFlow / ONNX | Y | Link |
| MACE | CPU (ARM optimized) / Mali GPU / DSP | Caffe / TensorFlow / ONNX | Y | Link |
| In-Prestissimo | CPU (ARM optimized) | Caffe | N | Link |
| Paddle-Mobile | CPU (ARM optimized) / Mali GPU / FPGA | Paddle / Caffe / ONNX | Y | - |
| Anakin | CPU (ARM optimized) / GPU / CPU (x86 optimized) | Caffe / Fluid | Y | Link |
| TF2 | FPGA | Caffe / PyTorch / TensorFlow | Y | Link |
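Most INT8 engines in this table use an affine (scale + zero-point) quantization scheme: a real range observed during calibration is mapped linearly onto `[-128, 127]`. A minimal sketch of that mapping with illustrative helper names (not any engine's actual API):

```python
# Affine INT8 quantization: x_real ≈ (q - zero_point) * scale.
# choose_qparams maps a calibrated real range onto the int8 range;
# these helpers illustrate the scheme, not a specific engine's API.

def choose_qparams(lo: float, hi: float):
    """Map the real range [lo, hi] onto the int8 range [-128, 127]."""
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    return scale, zero_point

def quantize(x: float, scale: float, zero_point: int) -> int:
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    return (q - zero_point) * scale

scale, zp = choose_qparams(-1.0, 1.0)
for x in (-1.0, 0.0, 0.5, 1.0):
    q = quantize(x, scale, zp)
    print(f"{x:+.2f} -> q={q:4d} -> {dequantize(q, scale, zp):+.4f}")
```

The round trip loses at most one quantization step (`scale`), which is why calibration on representative inputs matters: a range that is too wide inflates `scale` and with it the rounding error.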

### TERNARY-Support

| Framework | Main Platform | Model Compatibility | Detection Support | Speed Benchmarks |
| --- | --- | --- | --- | --- |
| Gemmbitserial | CPU (ARM / x86) | GEMM library | - | Link |
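Ternary support means weights are constrained to {-1, 0, +1}, so a bit-serial GEMM can replace multiplications with additions and subtractions. A minimal sketch of one common thresholding rule (a fraction of the mean absolute weight, as in Ternary Weight Networks — an assumption here, not a detail from the table):

```python
# Ternarize weights to {-1, 0, +1} by thresholding against a fraction of
# the mean |w| (the TWN heuristic). Illustrative, not gemmbitserial's API.

def ternarize(weights, delta_ratio: float = 0.7):
    """Map each weight to -1, 0, or +1 using threshold delta."""
    mean_abs = sum(abs(w) for w in weights) / len(weights)
    delta = delta_ratio * mean_abs
    return [0 if abs(w) < delta else (1 if w > 0 else -1) for w in weights]

print(ternarize([0.9, -0.8, 0.05, -0.02, 0.4]))  # [1, -1, 0, 0, 1]
```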

### BINARY-Support

| Framework | Main Platform | Model Compatibility | Detection Support | Speed Benchmarks |
| --- | --- | --- | --- | --- |
| BMXNET | CPU (ARM / x86) / GPU | MXNet | Y | Link |
| DABNN | CPU (ARM) | Caffe / TensorFlow / ONNX | N | Link |
| Espresso | GPU | - | N | Link |
| BNN-PYNQ | FPGA (Xilinx PYNQ) | - | N | Link |
| FINN | FPGA (Xilinx) | - | N | Link |
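Binary engines such as BMXNET and DABNN rely on the XNOR-popcount trick: with weights and activations constrained to {-1, +1} and packed as bits (1 ↔ +1, 0 ↔ -1), an n-element dot product collapses to one XNOR plus a popcount. A pure-Python sketch of the arithmetic (not any engine's kernel):

```python
# XNOR-popcount dot product for {-1, +1} vectors packed one bit per element.
# Illustrates the trick binary inference engines use; names are illustrative.

def pack_bits(vec):
    """Pack a {-1, +1} vector into an integer bitmask (1 bit per element)."""
    bits = 0
    for i, v in enumerate(vec):
        if v == 1:
            bits |= 1 << i
    return bits

def xnor_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors from their bitmasks."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # bit is 1 where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus disagreements

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(xnor_dot(pack_bits(a), pack_bits(b), len(a)))  # 0, same as sum(x*y)
```

On hardware the popcount is a single instruction over 32 or 64 packed elements, which is where the large speedups over FLOAT32 come from.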

*: Conv-BN-Scale-fused
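The fusion the footnote refers to exploits the fact that BatchNorm (and Caffe's Scale layer) apply a per-channel affine transform, so they can be folded into the preceding convolution's weights and bias offline, leaving a single conv at inference time. A minimal per-channel sketch with plain floats (real engines fold whole weight tensors; the helper name is illustrative):

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta into (w, b)."""
    s = gamma / math.sqrt(var + eps)   # per-channel rescale factor
    w_fused = [wi * s for wi in w]     # scale every weight of the channel
    b_fused = (b - mean) * s + beta    # fold mean and beta into the bias
    return w_fused, b_fused

# One output channel with a 3-tap kernel:
w_f, b_f = fold_bn(w=[0.5, -1.0, 2.0], b=0.1,
                   gamma=2.0, beta=0.3, mean=0.4, var=1.0)
print(w_f, b_f)
```

The fused conv produces bit-identical outputs to conv-then-BN (up to float rounding), so benchmarks of fused models measure the same network with one fewer memory pass per layer.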
