
Tengine Overview


Tengine, developed by OPEN AI LAB, is an AI application development platform for AIoT scenarios. It is dedicated to solving the fragmentation problem of the AIoT industrial chain and accelerating the industrial adoption of AI. Designed specifically for AIoT, Tengine offers cross-platform support, heterogeneous scheduling, chip-level acceleration, an ultra-lightweight and self-contained footprint, and a complete development and deployment tool chain. Tengine is compatible with a variety of operating systems and deep learning frameworks, simplifying and accelerating both the migration of scene-oriented AI algorithms to embedded edge devices and their deployment in real applications.

Tengine is composed of five modules: core/operator/serializer/executor/driver.

  • core provides the basic components and functionalities of the system.
  • operator defines the schema of operators, such as convolution, relu, and pooling. See the current supported operator list.
  • serializer loads saved models. The serializer framework is extensible to support different formats, including customized ones. Caffe/ONNX/TensorFlow/MXNet and Tengine models can be loaded directly by Tengine.
  • executor implements the code that runs graphs and operators. The current version provides a highly optimized implementation for multiple A72 cores.
  • driver is the adapter for real hardware and provides services to the device executor through the HAL API. A single driver can create multiple devices.
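The serializer's extensibility can be pictured as a registry that maps a model-format name to a loader, so customized formats plug in the same way as the built-in ones. Below is a minimal Python sketch of that pattern; all names (`SerializerRegistry`, `register`, the loader lambdas) are illustrative only, not Tengine's actual API, which is implemented in C++:

```python
# Minimal sketch of an extensible serializer registry (illustrative names only).

class SerializerRegistry:
    """Maps a model-format name to a loader callable."""

    def __init__(self):
        self._loaders = {}

    def register(self, fmt, loader):
        # A customized format registers its loader exactly like a built-in one.
        self._loaders[fmt] = loader

    def load(self, fmt, path):
        if fmt not in self._loaders:
            raise ValueError(f"no serializer registered for format: {fmt}")
        return self._loaders[fmt](path)


registry = SerializerRegistry()

# Built-in formats register themselves...
registry.register("caffe", lambda path: f"graph from caffe model {path}")
registry.register("onnx", lambda path: f"graph from onnx model {path}")

# ...and a customized format is added the same way.
registry.register("myformat", lambda path: f"graph from custom model {path}")

print(registry.load("onnx", "model.onnx"))
```

The point of the design is that the core never needs to know the full set of formats at compile time; loading dispatches through whatever has been registered.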

Build and Install

Please refer to the Wiki.

Tengine examples and model zoo

Please visit examples for demos on classification/detection, and download models from the Tengine model zoo (psw: hhgc).

The tengine applications project shares Android/Linux applications powered by Tengine.

Communication & Tech Support

Benchmark

Tested on RK3399 (single A72 core)

Model              fp32       int8-hybrid   int8-e2e
Squeezenet v1.1    55.3 ms    48.6 ms       44.6 ms
Mobilenet v1       108.7 ms   74.6 ms       64.2 ms

More benchmark data will be added.
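Read as speedups over fp32, the table works out as roughly 1.24× for Squeezenet v1.1 and 1.69× for Mobilenet v1 with end-to-end int8. A quick check, using only the latencies quoted above:

```python
# Latencies in milliseconds, taken from the benchmark table above (RK3399, 1x A72).
latencies = {
    "Squeezenet v1.1": {"fp32": 55.3, "int8-hybrid": 48.6, "int8-e2e": 44.6},
    "Mobilenet v1":    {"fp32": 108.7, "int8-hybrid": 74.6, "int8-e2e": 64.2},
}

for model, t in latencies.items():
    # Speedup = fp32 latency divided by end-to-end int8 latency.
    speedup = t["fp32"] / t["int8-e2e"]
    print(f"{model}: int8-e2e is {speedup:.2f}x faster than fp32")

# Squeezenet v1.1: int8-e2e is 1.24x faster than fp32
# Mobilenet v1: int8-e2e is 1.69x faster than fp32
```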

Roadmap

Updated 2020.4

Feature
  • More examples
  • Netron support for the Tengine model format (.tmfile)
  • New compile configuration file
  • Easy-to-use C++ API
  • Easy-to-use Python API
  • Support for more ONNX (PyTorch) ops
Optimization
  • Ops for the x86 platform

source website: http://ftp.openailab.net.cn/Tengine_android_build/

About

Mobile AI framework based on the Tengine project, with newly added axpy, poolingmask, and upsample operators, plus the refine_output operator for the RefineDet output layer.
