improve README format: new line for QR code #2

Status: Closed. Wants to merge 1 commit.
README.md: 18 changes (16 additions, 2 deletions)
@@ -1,35 +1,42 @@
![MNN](resource/banner.png)

[中文版本](README_CN.md)
📖 English README | [📖 中文README](README_CN.md)

## Intro

MNN is a lightweight deep neural network inference engine. It loads models and performs inference on-device. MNN is currently integrated into more than 20 apps within Alibaba Inc., such as Taobao, Tmall, and Youku, covering live streaming, short-video capture, search recommendation, product search by image, interactive marketing, benefits distribution, security risk control, and other scenarios. MNN is also used on embedded devices such as IoT hardware.

## Features

### Lightweight

- Optimized for end devices, with no dependencies; easily deployed to mobile devices and a variety of embedded devices.
- iOS: the static library for armv7+arm64 is about 5MB, linked executables grow by about 620KB, and the metallib file is about 600KB.
- Android: the core `.so` is about 400KB; the OpenCL and Vulkan `.so` files are each about 400KB as well.

### Versatility

- Supports `Tensorflow`, `Caffe`, and `ONNX` model formats, and common networks such as `CNN`, `RNN`, and `GAN`.
- Supports 86 `Tensorflow` ops, 34 `Caffe` ops; MNN ops: 71 for CPU, 55 for Metal, 29 for OpenCL, and 31 for Vulkan.
- Supports iOS 8.0+, Android 4.3+ and embedded devices with POSIX interface.
- Supports hybrid computing on multiple devices. Currently supports CPU and GPU. GPU op plugin can be loaded dynamically to replace default (CPU) op implementation.

### High performance

- Implements core computation with a large amount of hand-optimized assembly code, with no third-party compute library, to make full use of the ARM CPU.
- For iOS, GPU acceleration (Metal) can be turned on, which is faster than Apple's native CoreML.
- For Android, `OpenCL`, `Vulkan`, and `OpenGL` are available and deep tuned for mainstream GPUs (`Adreno` and `Mali`).
- Convolution and transposed convolution algorithms are efficient and stable for convolutions of arbitrary shape. The Winograd convolution algorithm is widely used, with efficient implementations for symmetric kernels from 3x3 up to 7x7.
- Additional optimizations for the new architecture ARM v8.2 with half-precision calculation support.
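To make the Winograd point above concrete, here is a self-contained sketch of the classic 1D F(2,3) case, which produces two outputs of a 3-tap convolution with 4 multiplications instead of the 6 a direct dot product needs. This is illustrative only, not MNN's implementation; the production kernels use tiled 2D variants in hand-written assembly.

```python
# Winograd F(2,3): two outputs of a 3-tap 1D convolution with 4
# multiplications instead of 6. Illustrative sketch only -- real
# kernels use tiled 2D variants such as F(2x2,3x3).

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference sliding dot product for comparison."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

assert winograd_f23([1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0]) == \
       direct([1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0])
```

The saving compounds in 2D: F(2x2,3x3) needs 16 multiplications where the direct method needs 36, which is where the speedup for 3x3 kernels comes from.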

### Easy to use

- Efficient image-processing module that accelerates affine transforms and color-space conversions without depending on libyuv or opencv.
- Provides callbacks throughout the workflow to extract data or control execution precisely.
- Provides options for running only part of the network, or for running parts in parallel on CPU and GPU.
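As a hedged illustration of the two transforms named above (plain Python math, not MNN's actual image-processing API), the sketch below converts one full-range BT.601 YUV pixel to RGB and applies a 2x3 affine matrix to a destination coordinate, the inverse-mapping step a warp performs per output pixel:

```python
# Hedged sketches of a color-space conversion and an affine mapping;
# plain Python math, not MNN's ImageProcess API.

def yuv_to_rgb(y, u, v):
    """Full-range BT.601 YUV -> RGB for one pixel, all channels in 0..255."""
    d, e = u - 128, v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    return (clamp(y + 1.402 * e),
            clamp(y - 0.344136 * d - 0.714136 * e),
            clamp(y + 1.772 * d))

def affine_apply(m, x, y):
    """Apply a 2x3 affine matrix m = [a, b, tx, c, d, ty] to a destination
    pixel (x, y), yielding the source coordinate to sample."""
    return (m[0] * x + m[1] * y + m[2],
            m[3] * x + m[4] * y + m[5])

# Neutral chroma maps to gray; the identity matrix leaves a point unchanged.
assert yuv_to_rgb(128, 128, 128) == (128, 128, 128)
assert affine_apply([1, 0, 0, 0, 1, 0], 7, 3) == (7, 3)
```

Fusing both steps per pixel, as a dedicated image-processing module can, avoids materializing an intermediate RGB buffer.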

## Architecture

![architecture](doc/architecture.png)

MNN can be divided into two parts: Converter and Interpreter.
@@ -39,6 +46,7 @@ Converter consists of Frontends and Graph Optimize. The former is responsible fo
Interpreter consists of Engine and Backends. The former is responsible for loading the model and scheduling the computation graph; the latter covers memory allocation and Op implementations for each computing device. In Engine and Backends, MNN applies a variety of optimizations, including the Winograd algorithm for convolution and deconvolution, the Strassen algorithm for matrix multiplication, low-precision computation, Neon optimization, hand-written assembly, multi-threading, memory reuse, and heterogeneous computing.
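The Strassen scheme mentioned above trades one of the eight block multiplications in a 2x2 partition for extra additions. A minimal recursive Python sketch for square power-of-two matrices follows; the seven-product recursion is the standard algorithm, while the remark about blocking and fallback thresholds describes common practice rather than MNN's exact code:

```python
# Minimal recursive Strassen multiply for square power-of-two matrices
# (lists of lists). Seven block products replace the eight of the naive
# 2x2 partition, giving the O(n^2.81) bound; production implementations
# typically block, thread, and fall back to the direct product on small sizes.

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    quad = lambda M: ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                      [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)          # A = [[a, b], [c, d]]
    e, f, g, k = quad(B)          # B = [[e, f], [g, k]]
    p1 = strassen(a, sub(f, k))   # seven recursive products
    p2 = strassen(add(a, b), k)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, k))
    p6 = strassen(sub(b, d), add(g, k))
    p7 = strassen(sub(a, c), add(e, f))
    c11 = add(sub(add(p5, p4), p2), p6)
    c12 = add(p1, p2)
    c21 = add(p3, p4)
    c22 = sub(sub(add(p1, p5), p3), p7)
    return [r1 + r2 for r1, r2 in zip(c11, c12)] + \
           [r1 + r2 for r1, r2 in zip(c21, c22)]

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```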

## Quick start

- [Install](doc/Install_EN.md)
- [Tutorial](doc/Tutorial_EN.md)
- [API](doc/API/API_index.html)
@@ -49,25 +57,31 @@ Interpreter consists of Engine and Backends. The former is responsible for the l
- [Contributing](doc/Contributing_EN.md)

## Benchmark

- [Benchmark](doc/Benchmark_EN.md)

## How to customize

- [Add custom op](doc/AddOp_EN.md)
- [Add custom backend](doc/AddBackend_EN.md)

## Feedbacks

- [FAQ](doc/FAQ.md)

Scan the QR code to join the DingDing discussion group.
![DingDing Group](doc/QRCodeDingDing.png)

## License

Apache 2.0

## Acknowledgement

MNN contributors include employees from the Taobao Technology Department, the Search Engineering Team, the DAMO Academy team, Youku, and other group teams.

MNN draws on the following projects:

- [Caffe](https://github.com/BVLC/caffe)
- [flatbuffer](https://github.com/google/flatbuffers)
- [gemmlowp](https://github.com/google/gemmlowp)
README_CN.md: 18 changes (16 additions, 2 deletions)
@@ -1,35 +1,42 @@
![MNN](resource/banner.png)

[English Version](README.md)
📖 Chinese README | [📖 English README](README.md)

## Intro

MNN is a lightweight deep neural network inference engine that loads deep neural network models and performs inference on-device. MNN is currently used in more than 20 Alibaba apps, including Mobile Taobao, Mobile Tmall, and Youku, covering live streaming, short-video capture, search recommendation, product search by image, interactive marketing, benefits distribution, security risk control, and other scenarios. There are also several applications in IoT and similar settings.

## Features

### Lightweight

- Deeply customized and trimmed for end-device characteristics, with no dependencies; easily deployed to mobile devices and a variety of embedded devices.
- iOS: the armv7+arm64 static library is about 5MB, linked executables grow by about 620KB, and the metallib file is about 600KB.
- Android: the core `.so` is about 400KB, the OpenCL library about 400KB, and the Vulkan library about 400KB.

### Versatility

- Supports mainstream model formats such as `Tensorflow`, `Caffe`, and `ONNX`, and common networks such as `CNN`, `RNN`, and `GAN`.
- Supports 86 `Tensorflow` Ops and 34 `Caffe` Ops; MNN Ops supported per compute device: 71 for CPU, 55 for Metal, 29 for OpenCL, and 31 for Vulkan.
- Supports iOS 8.0+, Android 4.3+, and embedded devices with a POSIX interface.
- Supports hybrid computing across heterogeneous devices, currently CPU and GPU; GPU Op plugins can be loaded dynamically to replace CPU Op implementations.

### High performance

- Relies on no third-party compute library; core computation is implemented with a large amount of hand-written assembly to make full use of the ARM CPU.
- On iOS, GPU acceleration (Metal) can be enabled, which is faster than Apple's native CoreML on common models.
- On Android, `OpenCL`, `Vulkan`, and `OpenGL` are all provided to cover as many devices as possible, with deep tuning for mainstream GPUs (`Adreno` and `Mali`).
- Convolution and transposed convolution algorithms are efficient and stable and run well for convolutions of arbitrary shape; the Winograd convolution algorithm is widely used, with efficient implementations for symmetric kernels from 3x3 up to 7x7.
- Additional optimizations target the new ARM v8.2 architecture, letting newer devices gain further speedups from half-precision computation.

### Ease of use

- An efficient image-processing module covers common warping and conversion needs, so in most cases libyuv or opencv is not needed.
- Supports callbacks that can be inserted during a network run to extract data or steer execution.
- Supports running only part of a network, or running specified parts in parallel across CPU and GPU.

## Architecture

![architecture](doc/architecture.png)

MNN can be divided into two parts: Converter and Interpreter.
@@ -39,6 +46,7 @@ Converter consists of Frontends and Graph Optimize. The former supports different training
Interpreter consists of Engine and Backends. The former is responsible for loading the model and scheduling the computation graph; the latter covers memory allocation and Op implementations for each compute device. In Engine and Backends, MNN applies a variety of optimizations, including the Winograd algorithm for convolution and deconvolution, the Strassen algorithm for matrix multiplication, low-precision computation, Neon optimization, hand-written assembly, multi-threading, memory reuse, and heterogeneous computing.

## Getting started

- [Build and install](doc/Install_CN.md)
- [Tutorial](doc/Tutorial_CN.md)
- [API documentation](doc/API/API_index.html)
@@ -49,25 +57,31 @@ Interpreter consists of Engine and Backends. The former handles model loading and computation-graph
- [Contributing](doc/Contributing_CN.md)

## Benchmark

- [Benchmark results](doc/Benchmark_CN.md)

## How to extend

- [Add a custom Op](doc/AddOp_CN.md)
- [Add a custom Backend](doc/AddBackend_CN.md)

## Communication and feedback

- [FAQ](doc/FAQ.md)

Scan the QR code to join the DingDing discussion group.
![DingDing Group](doc/QRCodeDingDing.png)

## License

Apache 2.0

## Acknowledgement

MNN contributors include employees from the Taobao Technology Department, the Search Engineering Team, the DAMO Academy team, Youku, and other group teams.

MNN draws on and borrows from the following projects:

- [Caffe](https://github.com/BVLC/caffe)
- [flatbuffer](https://github.com/google/flatbuffers)
- [gemmlowp](https://github.com/google/gemmlowp)