
🍅🍅Lite.AI.ToolKit: An out-of-the-box C++ AI model toolkit


English | 中文文档 | MacOS | Linux | Windows

🍅🍅Lite.AI.ToolKit: a lightweight C++ AI model toolkit, user friendly (well, mostly) and out-of-the-box, already including 70+ popular open-source models. It is a C++ toolkit curated out of personal interest, covering object detection, face detection, face recognition, semantic segmentation, matting and more. See Model Zoo, ONNX Hub, MNN Hub, TNN Hub and NCNN Hub for details. [If it helps, ❤️ please consider giving it a ⭐️🌟, thanks for the support!]

Core Features & Roadmap 👏👋

  • User friendly, out-of-the-box. Simple and consistent calling syntax, lite::cv::Type::Class; see examples and the minimal sketch below.
  • Few dependencies, easy to build. Currently only OpenCV and ONNXRuntime are required by default; see build.
  • Many algorithm modules, continuously updated. Currently 10+ algorithm modules, 70+ popular open-source models and 500+ weight files.
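
For a first taste of the syntax, here is a minimal sketch (the model path yolov5s.onnx and image path test.jpg are placeholders; see Example 0 below for the complete version):

#include "lite/lite.h"

int main()
{
  // lite::cv::Type::Class: the same construction pattern for every model.
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx"); // placeholder path
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");                       // placeholder path
  yolov5->detect(img_bgr, detected_boxes);
  delete yolov5;
  return 0;
}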

Important Updates !!

| Date | Model | C++ | Paper | Code | Type |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 2022/01/19 | YOLO5Face | [link] | [arXiv 2021] | [code] | face::detect |
| 2022/01/07 | SCRFD | [link] | [CVPR 2021] | [code] | face::detect |
| 2021/12/27 | NanoDetPlus | [link] | [blog] | [code] | detection |
| 2021/12/08 | MGMatting | [link] | [CVPR 2021] | [code] | matting |
| 2021/11/11 | YoloV5_V_6_0 | [link] | [doi] | [code] | detection |
| 2021/10/26 | YoloX_V_0_1_1 | [link] | [arXiv 2021] | [code] | detection |
| 2021/10/02 | NanoDet | [link] | [blog] | [code] | detection |
| 2021/09/20 | RobustVideoMatting | [link] | [WACV 2022] | [code] | matting |
| 2021/09/02 | YOLOP | [link] | [arXiv 2021] | [code] | detection |

Model Support Matrix

  • / = not supported now.
  • ✅ = runs and officially supported.
  • ✔️ = runs but not officially supported.
  • ❔ = planned, but not soon; maybe in a few months.
Class Size Type Demo ONNXRuntime MNN NCNN TNN MacOS Linux Windows Android
YoloV5 28M detection demo ✔️ ✔️
YoloV3 236M detection demo / / / ✔️ ✔️ /
TinyYoloV3 33M detection demo / / / ✔️ ✔️ /
YoloV4 176M detection demo / / / ✔️ ✔️ /
SSD 76M detection demo / / / ✔️ ✔️ /
SSDMobileNetV1 27M detection demo / / / ✔️ ✔️ /
YoloX 3.5M detection demo ✔️ ✔️
TinyYoloV4VOC 22M detection demo / / / ✔️ ✔️ /
TinyYoloV4COCO 22M detection demo / / / ✔️ ✔️ /
YoloR 39M detection demo ✔️ ✔️
ScaledYoloV4 270M detection demo / / / ✔️ ✔️ /
EfficientDet 15M detection demo / / / ✔️ ✔️ /
EfficientDetD7 220M detection demo / / / ✔️ ✔️ /
EfficientDetD8 322M detection demo / / / ✔️ ✔️ /
YOLOP 30M detection demo ✔️ ✔️
NanoDet 1.1M detection demo ✔️ ✔️
NanoDetPlus 4.5M detection demo ✔️ ✔️
NanoDetEffi... 12M detection demo ✔️ ✔️
YoloX_V_0_1_1 3.5M detection demo ✔️ ✔️
YoloV5_V_6_0 7.5M detection demo ✔️ ✔️
GlintArcFace 92M faceid demo ✔️ ✔️
GlintCosFace 92M faceid demo ✔️ ✔️ /
GlintPartialFC 170M faceid demo ✔️ ✔️ /
FaceNet 89M faceid demo ✔️ ✔️ /
FocalArcFace 166M faceid demo ✔️ ✔️ /
FocalAsiaArcFace 166M faceid demo ✔️ ✔️ /
TencentCurricularFace 249M faceid demo ✔️ ✔️ /
TencentCifpFace 130M faceid demo ✔️ ✔️ /
CenterLossFace 280M faceid demo ✔️ ✔️ /
SphereFace 80M faceid demo ✔️ ✔️ /
PoseRobustFace 92M faceid demo / / / ✔️ ✔️ /
NaivePoseRobustFace 43M faceid demo / / / ✔️ ✔️ /
MobileFaceNet 3.8M faceid demo ✔️ ✔️
CavaGhostArcFace 15M faceid demo ✔️ ✔️
CavaCombinedFace 250M faceid demo ✔️ ✔️ /
MobileSEFocalFace 4.5M faceid demo ✔️ ✔️
RobustVideoMatting 14M matting demo / ✔️ ✔️
MGMatting 113M matting demo / ✔️ ✔️ /
UltraFace 1.1M face::detect demo ✔️ ✔️
RetinaFace 1.6M face::detect demo ✔️ ✔️
FaceBoxes 3.8M face::detect demo ✔️ ✔️
SCRFD 2.5M face::detect demo ✔️ ✔️
YOLO5Face 4.8M face::detect demo ✔️ ✔️
PFLD 1.0M face::align demo ✔️ ✔️
PFLD98 4.8M face::align demo ✔️ ✔️
MobileNetV268 9.4M face::align demo ✔️ ✔️
MobileNetV2SE68 11M face::align demo ✔️ ✔️
PFLD68 2.8M face::align demo ✔️ ✔️
FaceLandmark1000 2.0M face::align demo ✔️ ✔️
FSANet 1.2M face::pose demo / ✔️ ✔️
AgeGoogleNet 23M face::attr demo ✔️ ✔️
GenderGoogleNet 23M face::attr demo ✔️ ✔️
EmotionFerPlus 33M face::attr demo ✔️ ✔️
VGG16Age 514M face::attr demo ✔️ ✔️ /
VGG16Gender 512M face::attr demo ✔️ ✔️ /
SSRNet 190K face::attr demo / ✔️ ✔️
EfficientEmotion7 15M face::attr demo ✔️ ✔️
EfficientEmotion8 15M face::attr demo ✔️ ✔️
MobileEmotion7 13M face::attr demo ✔️ ✔️
ReXNetEmotion7 30M face::attr demo / ✔️ ✔️ /
EfficientNetLite4 49M classification demo / ✔️ ✔️ /
ShuffleNetV2 8.7M classification demo ✔️ ✔️
DenseNet121 30.7M classification demo ✔️ ✔️ /
GhostNet 20M classification demo ✔️ ✔️
HdrDNet 13M classification demo ✔️ ✔️
IBNNet 97M classification demo ✔️ ✔️ /
MobileNetV2 13M classification demo ✔️ ✔️
ResNet 44M classification demo ✔️ ✔️ /
ResNeXt 95M classification demo ✔️ ✔️ /
DeepLabV3ResNet101 232M segmentation demo ✔️ ✔️ /
FCNResNet101 207M segmentation demo ✔️ ✔️ /
FastStyleTransfer 6.4M style demo ✔️ ✔️
Colorizer 123M colorization demo / ✔️ ✔️ /
SubPixelCNN 234K resolution demo / ✔️ ✔️

Contents

1. Build

  • MacOS: build the MacOS shared library of Lite.AI.ToolKit from source. Note that Lite.AI.ToolKit uses onnxruntime as the default backend, since onnxruntime supports most native ONNX operators and is easier to use. How to build for Linux and Windows? Click ▶️ to see.
    git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest source
    cd lite.ai.toolkit && sh ./build.sh  # on MacOS you can use the OpenCV, ONNXRuntime, MNN, NCNN and TNN libraries bundled with this repo, no need to rebuild them
💡️ Linux and Windows

⚠️ The released packages of Lite.AI.ToolKit do not directly support Linux and Windows yet; you need to build from the Lite.AI.ToolKit source. First, download (if official prebuilt packages exist) or build OpenCV, ONNXRuntime and any other inference engines you need, such as MNN, NCNN or TNN, then put their headers into the corresponding folders below, or just use the headers provided by this repo. The dependency headers in this repo were copied directly from the official releases, but the shared libraries must be rebuilt or re-downloaded for each OS; MacOS users can use the bundled shared libraries directly.

  • lite.ai.toolkit/opencv2
      cp -r you-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
  • lite.ai.toolkit/onnxruntime
      cp -r you-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
  • lite.ai.toolkit/MNN
      cp -r you-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
  • lite.ai.toolkit/ncnn
      cp -r you-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
  • lite.ai.toolkit/tnn
      cp -r you-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn

Then copy each dependency library into the lite.ai.toolkit/lib folder. Please refer to the build documentation of each dependency.

  • lite.ai.toolkit/lib

      cp you-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib
  • Windows: you can refer to issue #6, which discusses common build problems.

  • Linux: follow the MacOS build and replace the dependencies with their Linux versions. Linux release packages will be added soon ~ issue #2

  • Happy news!!! : 🚀 You can download the latest official prebuilt ONNXRuntime shared libraries directly, for Windows, Linux, MacOS and Arm!!! Both CPU and GPU versions are available. No need to build from source anymore, nice. Get the latest libraries from v1.8.1. I currently use 1.7.0 in Lite.AI.ToolKit, which you can download from v1.7.0, but 1.8.1 should work as well. For OpenCV, try building from source (Linux) or download the official prebuilt package from OpenCV 4.5.3 (Windows). Then put the headers and libraries into the folders described above.

  • Windows GPU compatibility: see issue #10.

  • Linux GPU compatibility: see issue #97.

🔑️ How do I link the Lite.AI.ToolKit shared library?
  • You can refer to the following CMakeLists.txt settings to link the shared libraries.
cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
        lite.ai.toolkit
        onnxruntime
        MNN  # needed if lite.ai.toolkit was built with ENABLE_MNN=ON,  default OFF
        ncnn # needed if lite.ai.toolkit was built with ENABLE_NCNN=ON, default OFF
        TNN  # needed if lite.ai.toolkit was built with ENABLE_TNN=ON,  default OFF
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib 
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
  • Run the prebuilt examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5

To link the lite.ai.toolkit shared library, you must make sure that OpenCV and onnxruntime are also linked correctly. A simple and complete application case showing how to link Lite.AI.ToolKit properly can be found in CMakeLists.txt.
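
A quick way to sanity-check the setup is a tiny program that touches both OpenCV and lite.ai.toolkit symbols; a minimal sketch, assuming the CMakeLists.txt above:

#include <iostream>
#include "lite/lite.h"

// Minimal link check: if this builds and runs, the include and library paths
// above are set up correctly. The empty box vector means nothing is drawn.
int main()
{
  cv::Mat img(64, 64, CV_8UC3, cv::Scalar::all(0));
  std::vector<lite::types::Boxf> boxes;
  lite::utils::draw_boxes_inplace(img, boxes); // forces a lite.ai.toolkit symbol
  std::cout << "linked OK: " << img.cols << "x" << img.rows << std::endl;
  return 0;
}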

2. Model Downloads

Lite.AI.ToolKit currently ships 70+ popular open-source models and 500+ weight files, most of which were converted by myself. You can call them through the lite::cv::Type::Class syntax, e.g. lite::cv::detection::YoloV5. See Examples for Lite.AI.ToolKit for more details. Note: due to the 15G storage limit of Google Drive, I cannot upload all model files there; users in China, please use Baidu Drive.

| File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs) |
| :--- | :--- | :--- | :--- | :--- |
| ONNX | Baidu Drive (code: 8gin) | Google Drive | ONNX Docker v0.1.22.01.08 (28G) | ONNX Hub |
| MNN | Baidu Drive (code: 9v63) | | MNN Docker v0.1.22.01.08 (11G) | MNN Hub |
| NCNN | Baidu Drive (code: sc7f) | | NCNN Docker v0.1.22.01.08 (9G) | NCNN Hub |
| TNN | Baidu Drive (code: 6o6k) | | TNN Docker v0.1.22.01.08 (11G) | TNN Hub |
  docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
  docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
  docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
  docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
❇️ Mapping between namespaces and Lite.AI.ToolKit algorithm modules

| Namespace | Details |
| :--- | :--- |
| lite::cv::detection | Object Detection. One-stage and anchor-free detectors: YoloV5, YoloV4, SSD, etc. ✅ |
| lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅ |
| lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️ |
| lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ❇️ |
| lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️ |
| lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️ |
| lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ❇️ |
| lite::cv::face::attr | Face Attributes. Emotion, Age, Gender; EmotionFerPlus, VGG16Age, etc. ❇️ |
| lite::cv::segmentation | Object Segmentation, such as FCN, DeepLabV3. ❇️ |
| lite::cv::style | Style Transfer. Neural style transfer for now, e.g. FastStyleTransfer. ⚠️ |
| lite::cv::matting | Image Matting. Object and human matting. ❇️ |
| lite::cv::colorization | Colorization. Turn gray images into color. ⚠️ |
| lite::cv::resolution | Super Resolution. ⚠️ |

Mapping between Lite.AI.ToolKit classes and weight files

The mapping between Lite.AI.ToolKit classes and their pretrained weight files can be found in lite.ai.toolkit.hub.onnx.md. For example, the weight files of lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are:

| Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size |
| :--- | :--- | :--- | :--- |
| lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (🔥🔥💥↑) | 188Mb |
| lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (🔥🔥💥↑) | 85Mb |
| lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (🔥🔥💥↑) | 29Mb |
| lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (🔥🔥💥↑) | 351Mb |
| lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (🔥🔥!!↑) | 378Mb |
| lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (🔥🔥!!↑) | 207Mb |
| lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (🔥🔥!!↑) | 97Mb |
| lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (🔥🔥!!↑) | 34Mb |
| lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (🔥🔥!!↑) | 19Mb |
| lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (🔥🔥!!↑) | 3.5Mb |

This means you can load any of the yolov5*.onnx or yolox_*.onnx files through the same class in Lite.AI.ToolKit, depending on your use case, e.g. YoloV5, YoloX, etc.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !
🔑️ How to download the Model Zoo from Docker Hub?
  • Firstly, pull the image from docker hub.
    docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G)
    docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G)
    docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G)
    docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G)
  • Secondly, run the container with a local shared dir using docker run -idt xxx. A minimal example follows.
    • make a shared dir on your local device.
    mkdir share # any name is ok.
    • write a run_mnn_docker_hub.sh script like:
    #!/bin/bash  
    PORT1=6072
    PORT2=6084
    SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share
    CONTAINER_DIR=/home/hub/share
    CONTAINER_NAME=mnn_docker_hub_d
    
    docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONTAINER_DIR} --shm-size=16gb --name ${CONTAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
    
  • Finally, copy the model weights from /home/hub/mnn/cv to your local share dir.
    # activate mnn docker.
    sh ./run_mnn_docker_hub.sh
    docker exec -it mnn_docker_hub_d /bin/bash
    # copy the models to the share dir.
    cd /home/hub 
    cp -rf mnn/cv share/

3. Examples

More examples can be found at examples. Click ▶️ to expand more cases under each topic.

Example 0: Object detection with YOLOv5. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥🔥 YOLO series detectors, YOLOX and YoloR, which achieve similar results.

More general object detectors available (80 classes, COCO):

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path); // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!

Example 1: Video matting with RobustVideoMatting (2021) 🔥🔥🔥. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
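  // Note: the trailing arguments (false, 0.4f) tune the matting call; 0.4f
  // matches the downsample_ratio commonly used with RVM (an assumption here,
  // see the examples for the exact parameter meanings).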
  rvm->detect_video(video_path, output_path, contents, false, 0.4f);
  
  delete rvm;
}

The output is:


More matting models available (image matting, video matting, trimap/mask-free, trimap/mask-based):

auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);  //  WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path); // CVPR 2021

Example 2: 1000 facial landmarks detection with FaceLandmark1000. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More face alignment models available (68, 98, 106, 1000 landmarks):

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!

Example 3: Image colorization with Colorizer. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


More colorization models available (gray image to color):

auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);

Example 4: Face recognition with ArcFace. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
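
On top of these embeddings, face verification typically reduces to thresholding the cosine similarity; a hypothetical helper sketch (the 0.5f default threshold is an assumption, tune it on your own data):

#include "lite/lite.h"

// Hypothetical verification rule built on lite::utils::math::cosine_similarity;
// the threshold is an assumption, not a value shipped by this project.
static bool same_person(const lite::types::FaceContent &a,
                        const lite::types::FaceContent &b,
                        float threshold = 0.5f)
{
  if (!a.flag || !b.flag) return false; // no valid embedding extracted
  float sim = lite::utils::math::cosine_similarity<float>(
      a.embedding, b.embedding);
  return sim >= threshold;
}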

More face recognition models available (face embedding extraction):

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !

Example 5: Face detection with SCRFD (2021). Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
  std::string save_img_path = "../../../logs/test_lite_scrfd.jpg";
  
  auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);
  
  std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  scrfd->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;
  
  delete scrfd;
}

The output is:

More face detectors available (lightweight face detectors):

auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only ! 
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::cv::face::detect::SCRFD(onnx_path);  // 2.5Mb only ! CVPR2021, super fast and accurate!!
auto *detector = new lite::cv::face::detect::YOLO5Face(onnx_path);  // 2021, super fast and accurate!!

Example 6: Semantic segmentation with DeepLabV3ResNet101. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More segmentation models available (portrait segmentation, instance segmentation):

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);

Example 7: Age estimation with SSRNet. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More face attribute models available (gender, age, emotion):

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path); // age estimation, 190kb only!!!

Example 8: 1000-class image classification with DenseNet. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More image classification models available (1000 classes):

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);

Example 9: Head pose estimation with FSANet. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

More head pose estimation models available (Euler angles: yaw, pitch, roll):

auto *pose = new lite::cv::face::pose::FSANet(onnx_path); // 1.2Mb only!

Example 10: Style transfer with FastStyleTransfer. Please download the model files from Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:


More style transfer models available (neural style transfer and others):

auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path); // 6.4Mb only

4. License

The code of Lite.AI.ToolKit is released under the GPL-3.0 License.

5. References

This project references the following open-source projects.

Expand more references

6. Build Options

MNN, NCNN and TNN support for more models will be added in the future, but due to operator compatibility and other reasons, there is no guarantee that every model supported by the ONNXRuntime C++ backend will also run under MNN, NCNN or TNN. So, if you want to use all models supported by this project and don't mind a 1~2ms performance gap, please use the ONNXRuntime implementations. ONNXRuntime is the default inference engine of this repo. However, if you do want to build the Lite.AI.ToolKit 🍅🍅 shared library with MNN, NCNN or TNN support, you can follow the steps below.

  • build.sh中添加DENABLE_MNN=ONDENABLE_NCNN=ONDENABLE_TNN=ON,比如
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \   # whether to bundle OpenCV into lite.ai.toolkit, default ON; otherwise set up OpenCV yourself
  -DENABLE_MNN=ON \       # whether to build the MNN versions of the models, default OFF, only some models supported
  -DENABLE_NCNN=OFF \     # whether to build the NCNN versions of the models, default OFF, only some models supported
  -DENABLE_TNN=OFF \      # whether to build the TNN versions of the models, default OFF, only some models supported
  .. && make -j8
  • Use the MNN, NCNN or TNN interfaces; see the demos for details, e.g.
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);

7. Citation

If you use Lite.AI.ToolKit in your own project, please consider citing it as follows.

@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}

8. Example Projects

| Project | Description | Operating System |
| :--- | :--- | :--- |
| RobustVideoMatting.lite.ai.toolkit | Video/Image Matting | MacOS |
| nanodet.lite.ai.toolkit | Object Detection | MacOS |
| YOLOX.lite.ai.toolkit | Object Detection | MacOS |
| YOLOP.lite.ai.toolkit | Panoptic Perception | MacOS |
| scrfd.lite.ai.toolkit | Face Detection | MacOS |
| YOLO5Face.lite.ai.toolkit | Face Detection | MacOS |
| MGMatting.lite.ai.toolkit | Image Matting | MacOS |
| fsanet.lite.ai.toolkit | Head Pose Estimation | MacOS |
| ssrnet.lite.ai.toolkit | Age Estimation | MacOS |

If it helps, ❤️ please consider giving it a ⭐️🌟, thanks for the support!