merge master (#1721)
* Fix trt multistream logger (#1521)

* [FIX] fix trt logger

* [FIX] catch std::bad_alloc error for trt8 building

* [FIX] return null while shape_tensor size -1

* Update version.h

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* Update split_utils.cc (#1528)

Compiling with mingw32 reports an error, because the mingw32 compiler still requires the namespace qualification:
[ 99%] Building CXX object CMakeFiles/TNN.dir/source/tnn/utils/split_utils.cc.obj
D:\TNN\source\tnn\utils\split_utils.cc: In static member function 'static tnn::Status tnn::SplitUtils::SplitStr(const char*, tnn::str_arr&, const char*, bool, bool, bool, bool, bool)':
D:\TNN\source\tnn\utils\split_utils.cc:163:23: error: 'min' was not declared in this scope
             int len = min((i - cursor), subs_length - 1);
I think this change is better, as it works with mingw32 while staying compatible with the previous compilers.

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
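
The portable fix described above can be sketched as follows. This is illustrative only: `SplitLen` is a hypothetical helper, not the actual patch applied to `split_utils.cc`; the point is that spelling the comparison out (or qualifying as `std::min`) avoids the unqualified `min` that mingw32 rejects.

```cpp
// Illustrative sketch of the mingw32-portable fix: avoid an unqualified
// min(), which MSVC tolerates but mingw32/GCC rejects without <algorithm>
// and the std:: qualifier.
static int SplitLen(int i, int cursor, int subs_length) {
    int a = i - cursor;
    int b = subs_length - 1;
    return a < b ? a : b;  // equivalent to std::min(a, b)
}
```

Usage mirrors the failing line: `int len = SplitLen(i, cursor, subs_length);`.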

* Update README.md (#1538)

Typos

* [UPD]update QQ group (#1552)

* [BUG]fix YouTu face alignment model

* [UPD]update mean pts file logic

* [UPD]draw face points green

* [UPD]unify example controller list

* [UPD]unify example controller list

* [UPD]move blaze anchor file to resource

* [METAL]update tnn project

* [UPD]update tool onnx2coreml

* [ADD]support ShareCommandQueue between instances

* [ADD]support ShareCommandQueue between instances

* [UPD]add log message

* [UPD]transfer file half.hpp

* [UPD]fix xcode compile error with fp16

* [UPD]fix xcode compile error with fp16

* [UPD]update model type error msg

* [FIX]fix logic error of constofshape

* [UPD]update debug message

* [FIX]support int32 for neg op

* [BUG]fix init error with nil commandbuffer

* [UPD]add mac build xcode project; fix ios mac build script;

* [UPD]add mac build xcode project; fix ios mac build script;

* [ADD]add QQ group 2 of TNN

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* [opencl][fix] try save program cache (#1557)

* Dev roi align (#1511)

* [ARM] fix int32 blob cvt to mat

* [ARM] support roi align

* [ARM] add roi align unit test

* [ARM] add to xcodeproj

Co-authored-by: lucasktian <lucasktian@tencent.com>
Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* Fix arm gather and constant blob (#1564)

* [ARM][BUG] fix gather error for indices < 0

* [ARM][BUG] fix buffer to blob error without converting precision

* [ARM] update type convert in layer_norm fp16

Co-authored-by: quinnrong94 <67782915+quinnrong94@users.noreply.github.com>

* Dev add config layer (#1569)

* add config layer param to set arm conv algorithm for specific layer

Co-authored-by: powerpwang <powerpwang@outlook.com>
Co-authored-by: ealinli <ealinli@tencent.com>

* Fix onnx2tnn compile failures caused by the protobuf version upgrade (#1571)

* [ONNX][BUG]1. fix compile bug;

* [ONNX2TNN][BUG]1. fix compile issues caused by the protobuf version upgrade;

* [ADD][TOOLS] add dynamic range quantization (#1572)

* [ADD][TOOLS] support fake quantization

* [UPD][FAKE_QUANT] fix bug

* [UPD][DOC] add fake quantization in doc

* [UPD] 1.rename fake quant to dynamic range quant 2.move dequant to net_optimizer

* [UPD] remove redundant comment

* [UPD] update comment for DynamicRangeDequant

* [DRQuant][UPD] fix namespace issue

* [DRQuant][UPD] Turn off TNN_SYMBOL_HIDE to fix ci

Co-authored-by: ealinli <ealinli@tencent.com>
Co-authored-by: Dandi Ding <bluaxe@users.noreply.github.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
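
For context on the tool added above: dynamic range quantization typically maps float weights to int8 with a per-tensor scale derived from the weight range, and dequantizes at load time. The sketch below shows that general idea under that assumption; it is not the TNN implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Per-tensor dynamic range quantization sketch:
//   scale = max|w| / 127, q = round(w / scale), w ≈ q * scale.
struct QuantResult {
    std::vector<int8_t> q;  // quantized weights
    float scale;            // per-tensor dequantization scale
};

QuantResult DynamicRangeQuantize(const std::vector<float>& w) {
    float max_abs = 0.f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    float scale = max_abs > 0.f ? max_abs / 127.f : 1.f;
    QuantResult r{{}, scale};
    r.q.reserve(w.size());
    for (float v : w)
        r.q.push_back(static_cast<int8_t>(std::lround(v / scale)));
    return r;
}
```

At inference time the runtime multiplies `q` back by `scale`, so only the int8 weights and one float per tensor need to be stored.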

* [UPD][OPENCL] opencl support using unoptimized conv (#1581)

Co-authored-by: ealinli <ealinli@tencent.com>

* [UPD][CONVERTER] lstm support sequence_lens (#1585)

Co-authored-by: ealinli <ealinli@tencent.com>

* [MODEL_CHECK][BUG]1. fix bug for dump layer(fp16); (#1567)

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* Bugfix from train branch (#1592)

* [BUG] fix get dims value bug when input is 1D or 2D in arm_reduce_layer_acc.cc.
* [BUG] fix Convert from NCHW to NHWC error when input is on arm device.
* [BUG] fix convert mat to blob bug when input is NC_INT32 on arm device.
* [BUG] fix tflite_converter bug when transforming an activation layer.
* add nchw format condition when copy int32 mat to blob
* rollback changes on tflite_op_converter.cc

Co-authored-by: sanerzheng <sanerzheng@tencent.com>
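
The NCHW-to-NHWC conversion fixed above comes down to an index remapping between the two layouts; a minimal sketch of the transform (illustrative, not the TNN code, which also handles device memory and data types):

```cpp
#include <vector>

// NCHW -> NHWC: element (n, c, h, w) moves from source index
// ((n*C + c)*H + h)*W + w to destination index ((n*H + h)*W + w)*C + c.
std::vector<float> NchwToNhwc(const std::vector<float>& src,
                              int N, int C, int H, int W) {
    std::vector<float> dst(src.size());
    for (int n = 0; n < N; ++n)
        for (int c = 0; c < C; ++c)
            for (int h = 0; h < H; ++h)
                for (int w = 0; w < W; ++w)
                    dst[((n * H + h) * W + w) * C + c] =
                        src[((n * C + c) * H + h) * W + w];
    return dst;
}
```

For a 1x2x1x2 tensor `{1, 2, 3, 4}` this interleaves the two channels into `{1, 3, 2, 4}`.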

* [UPD][OPENCL] opencl support x86 mat (#1593)

Co-authored-by: ealinli <ealinli@tencent.com>

* [CONVERTER][BUG]1. fix issue 1595; (#1596)

* [UPD][OPENCL] add ocl version check (#1601)

* [UPD][OPENCL] add ocl version check

* [UPD][OPENCL] update message for version check

Co-authored-by: ealinli <ealinli@tencent.com>

* [UPD][OPENCL] solve the problem that matmul, tile have incorrect results on helio p65 (#1602)

Co-authored-by: ealinli <ealinli@tencent.com>

* [UPD][DYQ] fix dynamic range quant compile error on windows (#1604)

Co-authored-by: ealinli <ealinli@tencent.com>

* [DOC][UPD] modify image links in doc (#1617)

Co-authored-by: ealinli <ealinli@tencent.com>

* remove redundant test cases (#1614)

* Fix typos. (#1626)

* Fix typos.

* Update Readme.

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* Interpreter change from std::map to safe_map; the latter offers a const operator[] function (#1618)

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
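
The motivation for the change above: `std::map::operator[]` is non-const because it inserts a default-constructed value for missing keys. A const-qualified lookup can be sketched like this (an assumed shape for illustration; the real safe_map in TNN may differ):

```cpp
#include <map>

// Minimal safe_map sketch: a const operator[] that returns a
// default-constructed value for missing keys instead of mutating
// the map the way std::map::operator[] does.
template <typename K, typename V>
class safe_map : public std::map<K, V> {
public:
    using std::map<K, V>::map;  // inherit constructors
    V operator[](const K& key) const {
        auto it = this->find(key);
        return it != this->end() ? it->second : V();
    }
};
```

This lets interpreter code index into const-qualified layer/resource maps without either casting away constness or risking accidental insertion.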

* [UPD][OPENCL] get opencl version when GpuType is OTHER (#1636)

* [UPD][OPENCL] get opencl version when GpuType is OTHER

* [UPD][OPENCL] optimize nv gpu judgment logic

Co-authored-by: ealinli <ealinli@tencent.com>

* Patch x86 avx support (#1633)

* merge dev_vc14_m1_debug, support x86 avx

* add option to support x86 avx2 compile

* update win_x86_opencl building script

Co-authored-by: Dandiding <Dandiding@tencent.com>

* fix x86 avx2 options (#1638)

* fix typos in doc (#1634)

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>

* [X86][BUG] fix deconv layer build error (#1641)

* [OPENCL][FIX] fix conv and dwconv on some of the AMD GPUs

* [UPD][OPENCL] add coor check for conv and dwconv

* [OPENCL][FIX] fix compilation issues

* [OPENCL][UPD] optimize AMD GPU judgment logic

Co-authored-by: ealinli <ealinli@tencent.com>

* [OPENCL][UPD] fix deconv, avgpool on AMD GPU (#1646)

* [OPENCL][UPD] fix deconv and avgpool when read image

* [OPENCL][UPD] add header file for pooling

Co-authored-by: ealinli <ealinli@tencent.com>

* [OPENCL][UPD] opencl support cache on windows (#1645)

* [UPD][OPENCL] add coor check for conv and dwconv

* [OPENCL][FIX] fix compilation issues

* [OPENCL][UPD] optimize AMD GPU judgment logic

* [OPENCL][UPD] support cache on windows

* [OPENCL][UPD] fix load cache on windows

Co-authored-by: ealinli <ealinli@tencent.com>

* [DRQ][UPD] dynamic range quant model supports const folding (#1647)

* [DRQ][UPD] dynamic range quant model supports const folding

* [TOOLS][UPD] dynamic range quant updates usage

Co-authored-by: ealinli <ealinli@tencent.com>

* 1. make model_check support dynamic range quantized model; (#1653)

* [ADD][TUTORIAL] add mbv2-ssd conversion and deployment tutorial (#1640)

* [ADD][TUTORIAL] add mbv2-ssd conversion and deployment tutorial

* [TUTORIAL][UPD] update code link

* [TUTORIAL][UPD] fix typo

Co-authored-by: ealinli <ealinli@tencent.com>

* [X86][FIX] binary op support fp16 weights (#1655)

* [X86][FIX] binary op support fp16 weights

* [X86][FIX] matmul support fp16 weights

Co-authored-by: ealinli <ealinli@tencent.com>

* Feature dynamic quant fc (#1660)

* [DYNAMIC_QUANT][UPD]1. dynamic quant support inner_product layer;

* [ARM][UPD]1. arm gemm uses the Kahan summation algorithm in some cases to avoid fp16 accumulation error;
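
Kahan (compensated) summation, mentioned in the arm gemm change above, tracks a running compensation term that recovers the low-order bits lost in each addition; a float sketch of the algorithm itself (illustrative, not the TNN gemm kernel):

```cpp
#include <vector>

// Kahan summation: c accumulates the rounding error of each addition
// so it can be fed back into the next step, keeping the accumulated
// error O(1) instead of O(n) in the number of terms.
float KahanSum(const std::vector<float>& xs) {
    float sum = 0.f, c = 0.f;
    for (float x : xs) {
        float y = x - c;    // subtract the previously lost low-order bits
        float t = sum + y;  // big + small: low bits of y may be lost here
        c = (t - sum) - y;  // algebraically 0; captures the rounding error
        sum = t;
    }
    return sum;
}
```

Note the compensation step only works at the compiler's default floating-point settings; aggressive options like `-ffast-math` may optimize `(t - sum) - y` away.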

* [FIX][CPU][TRT] Fix CPU Not OP bug, Fix TensorRT ShapeTensor Class Bug. (#1663)

* [FIX] Fix CPU Not Operator data type error.

* [FIX] Fix TensorRT ShapeTensor class ConvertTo1D() func bug

* fix _mm256_load_ps segmentation fault (#1682)

* fix _mm256_load_ps segmentation fault

* fix crash on _mm256_load in innerproduct

* use loadu instead of stride-judgement

* remove unused code

Co-authored-by: fishdai <fishdai@tencent.com>

* x86_acc & blob_converter now will consider the BlobHandle.bytes_offset (#1684)

* Dev x86 layer adapter (#1683)

* [X86] add layer acc adapter

* [X86] NULL to nullptr

* [X86][OPENVINO] add openvino adapter layer builder, fallback to cpu naive impl if there is no normal ov layer builder

* [X86][OPENVINO] fix hard code of ov precision

Co-authored-by: anonymous <anonymous@mail.org>

* [ARM] fix arm cross compile error caused by float-abi (#1678)

* avoid nullptr in IsSupport (#1685)

* [UPD][TOOLS] 1.increase subs_length 2.align model support bool and int32 input 3. fix gather and onehot convert 4. gather_nd support indices_shape[-1] < r (#1686)

Co-authored-by: ealinli <ealinli@tencent.com>

* Dev metal ngray (#1693)

* [METAL] metal support ngray input mat

* [METAL]fix bytes_size

* [COREML] fix dynamic quantization model support for coreml

Co-authored-by: jacinhu <jacinhu@tencent.com>
Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>

* [UPD][DRQ] support quantizing matmul's const weight (#1698)

* [UPD][DRQ] support quantizing matmul's const weight

* [UPD][DRQ] add scale check in constant map

Co-authored-by: ealinli <ealinli@tencent.com>

* [FIX] fix compile macos framework (#1687)

Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>

* Optimize dynamic range quantize (#1699)

* [DynamicRangeQuantize][UPD]1. add logic that decides whether to quantize based on the weight distribution;

* [DynamicQuantization][UPD]1. dynamic_range_quantization support TNN fp16 model;

* [DRQ][UPD]1. fix a bug where the reference file specified in the model_check_android.sh script was not actually used during inference; 2. optimize part of the code in dynamic_range_quantization;

* [DRQ][UPD]1.fix conflict with merge master code;

Co-authored-by: ealinli <37806708+1627180283@users.noreply.github.com>

* Fix windows x86 build (#1697)

* [FIX] remove nanodet for windows

* remove ninja compile due to a bug

* fix x86 mat type register macro name

* fix x86 matmul with 2 inputs

Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>

* [METAL] fix stride slice crash when dims is 2 (#1701)

Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>

* [mac] 1. FIX X86 and ARM conflict; 2. ADD ARM arch on intel cpu (you can use ARM if Rosetta-X86 crashes); 3. Use ios project to build/profile M1-Mac. (#1700)

Co-authored-by: gennyxu <gennyxu@tencent.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>

* [iOS][UPD]1. add missing file for xcode project; (#1705)

* [BUG]fix coreml error of slicev2、padv2 and matmul; (#1703)

* [BUG]fix YouTu face alignment model

* [UPD]update mean pts file logic

* [UPD]draw face points green

* [UPD]unify example controller list

* [UPD]unify example controller list

* [UPD]move blaze anchor file to resource

* [METAL]update tnn project

* [UPD]update tool onnx2coreml

* [ADD]support ShareCommandQueue between instances

* [ADD]support ShareCommandQueue between instances

* [UPD]add log message

* [UPD]transfer file half.hpp

* [UPD]fix xcode compile error with fp16

* [UPD]fix xcode compile error with fp16

* [UPD]update model type error msg

* [FIX]fix logic error of constofshape

* [UPD]update debug message

* [FIX]support int32 for neg op

* [BUG]fix init error with nil commandbuffer

* [UPD]add mac build xcode project; fix ios mac build script;

* [UPD]add mac build xcode project; fix ios mac build script;

* [ADD]add QQ group 2 of TNN

* [BUG]fix dynamic dequant error; fix arm pad error;

* [BUG]support coreml padv2

* [BUG]fix coreml matmul error when it has const input blob

* [BUG]fix coreml slicev2

* [UPD]add convert logic of swish

* [BUG]fix cpu error for x86 mac

* [UPD]support fusion for gemm + bn

* [UPD]add convert logic of swish

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>

* [UPD]update merge logic for swish groupnorm deconv (#1708)

* [BUG]fix YouTu face alignment model

* [UPD]update mean pts file logic

* [UPD]draw face points green

* [UPD]unify example controller list

* [UPD]unify example controller list

* [UPD]move blaze anchor file to resource

* [METAL]update tnn project

* [UPD]update tool onnx2coreml

* [ADD]support ShareCommandQueue between instances

* [ADD]support ShareCommandQueue between instances

* [UPD]add log message

* [UPD]transfer file half.hpp

* [UPD]fix xcode compile error with fp16

* [UPD]fix xcode compile error with fp16

* [UPD]update model type error msg

* [FIX]fix logic error of constofshape

* [UPD]update debug message

* [FIX]support int32 for neg op

* [BUG]fix init error with nil commandbuffer

* [UPD]add mac build xcode project; fix ios mac build script;

* [UPD]add mac build xcode project; fix ios mac build script;

* [ADD]add QQ group 2 of TNN

* [BUG]fix dynamic dequant error; fix arm pad error;

* [BUG]support coreml padv2

* [BUG]fix coreml matmul error when it has const input blob

* [BUG]fix coreml slicev2

* [UPD]add convert logic of swish

* [BUG]fix cpu error for x86 mac

* [UPD]support fusion for gemm + bn

* [UPD]add convert logic of swish

* [UPD]support fusion for deconv+add and deconv+add+bn

* [UPD]add aliyun disk link for tnn models

* [UPD]support fusion for group norm

* [UPD]support fusion for swish

Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>

* [DRQ][BUG]1. fix bug for max_values; (#1716)

* Hotfix m1 build (#1715)

* fix apple m1 clang 13.1 compile error

* fix unit test compile error

Co-authored-by: quinnrong <quinnrong@quinnrongs-MacBook-Pro.local>
Co-authored-by: ealinli <37806708+1627180283@users.noreply.github.com>

Co-authored-by: shenpenwang <41420892+Maosquerade@users.noreply.github.com>
Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
Co-authored-by: sxj731533730 <sxj731533730@gmail.com>
Co-authored-by: Yulv-git <34329208+Yulv-git@users.noreply.github.com>
Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>
Co-authored-by: quinnrong94 <67782915+quinnrong94@users.noreply.github.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
Co-authored-by: powerpwang <72859430+powerpwang@users.noreply.github.com>
Co-authored-by: ealinli <37806708+1627180283@users.noreply.github.com>
Co-authored-by: powerpwang <powerpwang@outlook.com>
Co-authored-by: ealinli <ealinli@tencent.com>
Co-authored-by: Dandi Ding <bluaxe@users.noreply.github.com>
Co-authored-by: saner zheng <zqawszqaws@126.com>
Co-authored-by: sanerzheng <sanerzheng@tencent.com>
Co-authored-by: Feng Shijie <j514681085@icloud.com>
Co-authored-by: Dandiding <Dandiding@tencent.com>
Co-authored-by: FeiGeChuanShu <774074168@qq.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
Co-authored-by: doxutx <92915535+doxutx@users.noreply.github.com>
Co-authored-by: kumbayaco <xyu.dai@gmail.com>
Co-authored-by: fishdai <fishdai@tencent.com>
Co-authored-by: anonymous <anonymous@mail.org>
Co-authored-by: jacinhu <jacinhu@tencent.com>
Co-authored-by: XDC <196890111@qq.com>
Co-authored-by: gennyxu <gennyxu@tencent.com>
Co-authored-by: quinnrong <quinnrong@quinnrongs-MacBook-Pro.local>
27 people committed Jul 8, 2022
1 parent 6d62fe3 commit 35f2507
Showing 302 changed files with 7,665 additions and 923 deletions.
23 changes: 19 additions & 4 deletions CMakeLists.txt
@@ -53,6 +53,7 @@ option(TNN_BUILD_BENCHMARK_TEST_LIB_ENABLE "Enable Build Benchmark Test Lib" OFF
option(TNN_GLIBCXX_USE_CXX11_ABI_ENABLE "Enable Use CXX11 ABI" ON)
option(TNN_METAL_FLOAT32 "Enable Metal Float32" OFF)
option(TNN_COREML_FLOAT32 "Enable Float32 CoreML Model" ON)
option(TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE "Enable Dynamic Range Quantization" OFF)

set(TNN_USE_GFLAGS OFF)

@@ -139,6 +140,10 @@ if(TNN_QUANTIZATION_ENABLE OR TNN_MODEL_CHECK_ENABLE)
add_definitions(-DFORWARD_CALLBACK_ENABLE)
endif()

if (TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE)
set(TNN_SYMBOL_HIDE OFF)
endif()

if(TNN_QUANTIZATION_ENABLE OR TNN_UNIT_TEST_ENABLE)
add_definitions(-DGET_INTERP_ENABLE)
endif()
@@ -260,6 +265,7 @@ message(STATUS "\tModel Converter:\t${TNN_CONVERTER_ENABLE}")
message(STATUS "\tONNX2TNN Converter:\t${TNN_ONNX2TNN_ENABLE}")
message(STATUS "\tTNN2MEM:\t${TNN_TNN2MEM_ENABLE}")
message(STATUS "\tBENCHMARK Test Lib:\t${TNN_BUILD_BENCHMARK_TEST_LIB_ENABLE}")
message(STATUS "\tDynamic Range Quantization:\t${TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE}")

include_directories(include)
include_directories(source)
@@ -286,11 +292,19 @@ endif()
if(SYSTEM.Linux AND CMAKE_SYSTEM_PROCESSOR MATCHES "arm" AND ANDROID_API_LEVAL)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_C99_MATH_TR1")
add_definitions(-D__ANDROID_API__=${ANDROID_API_LEVAL})
add_definitions( -mfloat-abi=softfp )
endif()

if(SYSTEM.Windows AND TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE)
set(TNN_BUILD_SHARED OFF)
endif()

if(TNN_X86_ENABLE)
# compile with avx2 by default
option(TNN_X86_AVX2_ENABLE "Enable X86 AVX2" ON)
add_subdirectory(source/tnn/device/x86)
set(TARGET_OBJECTS ${TARGET_OBJECTS} "$<TARGET_OBJECTS:TNNX86>")
set(TARGET_OBJECTS ${TARGET_OBJECTS} "$<TARGET_OBJECTS:TNNX86ACC>")
endif()

if(TNN_CPU_ENABLE)
@@ -302,9 +316,6 @@ if(TNN_ARM_ENABLE)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "arm64")

elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "arm")
if(SYSTEM.Linux)
add_definitions( -mfloat-abi=softfp )
endif()
add_definitions( -mfpu=neon )
endif()
add_subdirectory(source/tnn/device/arm)
@@ -412,7 +423,7 @@ elseif(SYSTEM.Windows)
include(platforms/windows/CMakeLists.txt)
endif()

if (TNN_TEST_ENABLE OR TNN_CONVERTER_ENABLE OR TNN_MODEL_CHECK_ENABLE)
if (TNN_TEST_ENABLE OR TNN_CONVERTER_ENABLE OR TNN_MODEL_CHECK_ENABLE OR TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE)
set(TNN_USE_GFLAGS ON)
endif ()

@@ -451,3 +462,7 @@ endif()
if(TNN_EVALUATION_ENABLE)
add_subdirectory(tools/evaluation)
endif()

if(TNN_DYNAMIC_RANGE_QUANTIZATION_ENABLE)
add_subdirectory(tools/dynamic_range_quantization)
endif()
28 changes: 17 additions & 11 deletions README.md
@@ -1,19 +1,23 @@
[中文版本](README_CH.md)
<div align=left ><img src="https://gitee.com/darren3d/tnn-resource/raw/master/TNN.png"/>
<div align=left ><img src="https://github.com/darrenyao87/tnn-models/raw/master/TNN.png"/>

## Introduction

TNN: A high-performance, lightweight neural network inference framework open sourced by Tencent Youtu Lab. It also has many outstanding advantages such as cross-platform, high performance, model compression, and code tailoring. The TNN framework further strengthens the support and performance optimization of mobile devices on the basis of the original Rapidnet and ncnn frameworks. At the same time, it refers to the high performance and good scalability characteristics of the industry's mainstream open source frameworks, and expands the support for X86 and NV GPUs. On the mobile phone, TNN has been used by many applications such as mobile QQ, weishi, and Pitu. As a basic acceleration framework for Tencent Cloud AI, TNN has provided acceleration support for the implementation of many businesses. Everyone is welcome to participate in the collaborative construction to promote the further improvement of the TNN reasoning framework.
TNN: A high-performance, lightweight neural network inference framework open sourced by Tencent Youtu Lab. It also has many outstanding advantages such as cross-platform, high performance, model compression, and code tailoring. The TNN framework further strengthens the support and performance optimization of mobile devices on the basis of the original Rapidnet and ncnn frameworks. At the same time, it refers to the high performance and good scalability characteristics of the industry's mainstream open source frameworks, and expands the support for X86 and NV GPUs. On the mobile phone, TNN has been used by many applications such as mobile QQ, weishi, and Pitu. As a basic acceleration framework for Tencent Cloud AI, TNN has provided acceleration support for the implementation of many businesses. Everyone is welcome to participate in the collaborative construction to promote the further improvement of the TNN inference framework.

## Effect Example

Face Detection(blazeface) | Object Detection(yolov5s) | Face Alignment<br>(from Tencent Youtu Lab) | Hair Segmentation<br>(from Tencent Guangying Lab)
:-------------------------: | :------: | :------: | :------:
[![face_detection](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/face_detection.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/blazeface) <br> model link: [tflite](https://github.com/google/mediapipe/blob/master/mediapipe/models/face_detection_front.tflite) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/blazeface) | [![yolov5](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/object-detection.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/yolov5) <br> model link: [onnx](https://github.com/ultralytics/yolov5/blob/master/models/export.py) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/yolov5) | [![youtu_face_alignment](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/face_alignment.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/youtu_face_alignment) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/youtu_face_alignment) | [![hair_segmentation](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/hair_seg_red.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/hair_segmentation) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/hair_segmentation)
Face Detection(blazeface) | Face Alignment<br>(from Tencent Youtu Lab) | Hair Segmentation<br>(from Tencent Guangying Lab)
:-------------------------: | :------: | :------:
[![face_detection](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/face_detection.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/blazeface) <br> model link: [tflite](https://github.com/google/mediapipe/blob/master/mediapipe/models/face_detection_front.tflite) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/blazeface) | [![youtu_face_alignment](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/face_alignment.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/youtu_face_alignment) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/youtu_face_alignment) | [![hair_segmentation](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/hair_seg_red.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/hair_segmentation) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/hair_segmentation)

Pose Estimation<br>(from Tencent Guangliu) | Pose Estimation<br>(blazepose) | Chinese OCR | Reading Comprehension
:--------------------------: | :------: | :------: | :------:
[![skeleton](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/skeleton_guangliu.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/skeleton) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/skeleton) | [![blazepose](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/skeleton_blazepose.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/blazepose) <br> model link: [tflite](https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_landmark/pose_landmark_full_body.tflite) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/blazepose) | [![chinese-ocr](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/chinese-ocr.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/chinese-ocr) <br> model link: [onnx](https://github.com/DayBreak-u/chineseocr_lite/tree/onnx/models) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/chinese-ocr) | [![bertsquad10](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/bert_squad.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/bertsquad10) <br> model link: [onnx](https://github.com/onnx/models/blob/master/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/bertsquad10)
Pose Estimation<br>(from Tencent Guangliu) | Pose Estimation<br>(blazepose) | Chinese OCR
:--------------------------: | :------: | :------:
[![skeleton](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/skeleton_guangliu.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/skeleton) <br> model link: [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/skeleton) | [![blazepose](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/skeleton_blazepose.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/blazepose) <br> model link: [tflite](https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_landmark/pose_landmark_full_body.tflite) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/blazepose) | [![chinese-ocr](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/chinese-ocr.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/chinese-ocr) <br> model link: [onnx](https://github.com/DayBreak-u/chineseocr_lite/tree/onnx/models) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/chinese-ocr)

Object Detection(yolov5s) | Object Detection(MobilenetV2-SSD) | Reading Comprehension
:-------------------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| :------:
[![yolov5](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/object-detection.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/yolov5) <br> model link: [onnx](https://github.com/ultralytics/yolov5/blob/master/models/export.py) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/yolov5) | [![mobilenetv2_ssd](tutorial/mobilenet_v2_ssd/imgs/mobilenetv2_ssd_tf_fix_box.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/mobilenet_v2-ssd) <br> model link: [tensorflow](http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/mobilenet_v2-ssd) | [![bertsquad10](https://raw.githubusercontent.com/darrenyao87/tnn-models/master/doc/demo/bert_squad.gif)](https://github.com/darrenyao87/tnn-models/tree/master/model/bertsquad10) <br> model link: [onnx](https://github.com/onnx/models/blob/master/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx) [tnn](https://github.com/darrenyao87/tnn-models/tree/master/model/bertsquad10)

<small>Chinese OCR demo is the TNN implementation of [chineseocr_lite](https://github.com/DayBreak-u/chineseocr_lite) project. It is lightweight and supports tilted, rotated and vertical text recognition.</small>

@@ -67,7 +71,7 @@ At present, TNN has been launched in various major businesses, and its following

* TNN architecture diagram:

<div><img src="https://gitee.com/darren3d/tnn-resource/raw/master/doc/en/imgs/tnn_architect.jpg"/>
<div><img src="https://github.com/darrenyao87/tnn-models/raw/master/doc/en/imgs/tnn_architect.jpg"/>

* TNN supports TensorFlow, Pytorch, MxNet, Caffe, and other training frameworks through ONNX, leveraging the continuous improvement of the ONNX open-source society.
Currently, TNN supports 100+ ONNX operators, consisting of most of the mainstream CNN, NLP operators needed.
@@ -90,6 +94,8 @@ At present, TNN has been launched in various major businesses, and its following
* [Model Visualization Netron](https://lutzroeder.github.io/netron/)
* [Performance Analysis](doc/en/development/profiling_en.md)
* [Model Alignment](doc/en/development/model_check_en.md)
* [Tutorial]()
* [TNN model conversion and deployment for SSD](tutorial/mobilenet_v2_ssd/doc/ssd_conversion_and_deployment_en.md)

## API Document
* [API call](doc/en/user/api_en.md)
@@ -127,7 +133,7 @@ TNN referenced the following projects:

* Everyone is welcome to participate to build the best inference framework in the industry.

* Technical Discussion QQ Group: 913940506 Answer: TNN
* Technical Discussion QQ Group: 704900079 Answer: TNN

* Scan the QR code to join the TNN discussion group:
<div align=left ><img src="https://gitee.com/darren3d/tnn-resource/raw/master/TNN-QQ.png"/>
<div align=left ><img src="TNN-QQ.png"/>
