Merge branch 'master' into feature/egraph-extract-constrains
xhuohai committed Mar 11, 2024
2 parents b278302 + 1cdea27 commit c10bd9b
Showing 5 changed files with 372 additions and 79 deletions.
172 changes: 125 additions & 47 deletions README.md
<img src="docs/logo.png" width="400" alt="nncase" />
</div>

[![GitHub repository](https://img.shields.io/badge/github-repository-blue?logo=github&style=plastic)](https://github.com/kendryte/nncase) [![Gitee repository](https://img.shields.io/badge/gitee-repository-blue?logo=gitee&style=plastic)](https://gitee.com/kendryte/nncase) [![GitHub release](https://img.shields.io/github/v/release/kendryte/nncase?color=brightgreen&display_name=tag&logo=github&style=plastic)](https://github.com/kendryte/nncase/releases)

[Switch to Chinese](docs/readme_ZH.md)

`nncase` is a neural network compiler for AI accelerators.

Telegram: [nncase community](https://t.me/joinchat/PPcEPZMLaTViNDI1)

Technical discussion QQ group: 790699378 (join answer: 人工智能)
[TOC]

---

## K230

- [Usage](./docs/USAGE_v2_EN.md)
- [FAQ](./docs/FAQ_EN.md)
- [Example](./examples/user_guide/k230_simulate-EN.ipynb)
- [Colab run](https://colab.research.google.com/drive/1m8TTree096m5VHmq-Uc60gXyltVCgnRb?usp=sharing)
- [ *Version relationship between `nncase` and `K230_SDK`* ](https://developer.canaan-creative.com/k230/dev/zh/03_other/K230_SDK_%E7%89%88%E6%9C%AC%E8%AF%B4%E6%98%8E.html#ai-sdkcanmvnncase)

### Install

- Linux:

```shell
pip install nncase nncase-kpu
```

- Windows:

1. `pip install nncase`
2. Download `nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl` from the link below.
3. `pip install nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl`

All versions of `nncase` and `nncase-kpu` are available in [Release](https://github.com/kendryte/nncase/releases).

### Supported operators

- [TFLite ops](./docs/tflite_ops.md)
- [Caffe ops](./docs/caffe_ops.md)
- [ONNX ops](./docs/onnx_ops.md)


### Benchmark test

<table>
<tr> <th>kind</th> <th> model </th><th> shape </th><th> quant_type(If/W) </th><th> nncase_fps </th><th> tflite_onnx_result </th><th> accuracy </th><th> info </th></tr>
<tr>
<td rowspan='3'>Image Classification</td>
<td>mobilenetv2 </td><td> [1,224,224,3] </td><td> u8/u8 </td><td> 600.24 </td><td> top-1 = 71.3%<br/>top-5 = 90.1% </td><td> top-1 = 71.1%<br/>top-5 = 90.0% </td><td> dataset(ImageNet 2012, 50000 images)<br/> tflite </td></tr>
<tr><td>resnet50V2 </td><td> [1,3,224,224] </td><td> u8/u8 </td><td> 86.17 </td><td> top-1 = 75.44%<br/>top-5 = 92.56% </td><td> top-1 = 75.11% <br/> top-5 = 92.36% </td><td> dataset(ImageNet 2012, 50000 images)<br/> onnx</td></tr>
<tr><td>yolov8s_cls </td><td> [1,3,224,224] </td><td> u8/u8 </td><td> 130.497 </td><td> top-1 = 72.2%<br/>top-5 = 90.9% </td><td> top-1 = 72.2%<br/>top-5 = 90.8% </td><td> dataset(ImageNet 2012, 50000 images)<br/> yolov8s_cls(v8.0.207)</td></tr>
<tr>
<td rowspan='2'>Object Detection</td>
<td>yolov5s_det </td><td> [1,3,640,640] </td><td> u8/u8 </td><td> 23.645 </td><td> bbox<br/>mAP50-90 = 0.374<br/>mAP50 = 0.567 </td><td> bbox<br/>mAP50-90 = 0.369<br/>mAP50 = 0.566</td><td>dataset(coco val2017, 5000 images)<br/>yolov5s_det(v7.0 tag, rect=False, conf=0.001, iou=0.65)</td></tr>
<tr><td>yolov8s_det </td><td> [1,3,640,640] </td><td> u8/u8 </td><td> 9.373 </td><td> bbox<br/>mAP50-90 = 0.446<br/>mAP50 = 0.612<br/>mAP75 = 0.484 </td><td> bbox<br/>mAP50-90 = 0.404<br/>mAP50 = 0.593<br/>mAP75 = 0.45</td><td>dataset(coco val2017, 5000 images)<br/>yolov8s_det(v8.0.207, rect = False)</td></tr>
<tr>
<td rowspan='1'>Image Segmentation</td>
<td>yolov8s_seg </td><td> [1,3,640,640] </td><td> u8/u8 </td><td> 7.845 </td><td> bbox<br/>mAP50-90 = 0.444<br/>mAP50 = 0.606<br/>mAP75 = 0.484<br/>segm<br/>mAP50-90 = 0.371<br/>mAP50 = 0.578<br/>mAP75 = 0.396 </td><td> bbox<br/>mAP50-90 = 0.444<br/>mAP50 = 0.606<br/>mAP75 = 0.484<br/>segm<br/>mAP50-90 = 0.371<br/>mAP50 = 0.579<br/>mAP75 = 0.397</td><td> dataset(coco val2017, 5000 images)<br/>yolov8s_seg(v8.0.207, rect = False, conf_thres = 0.0008)</td></tr>
<tr>
<td rowspan='3'>Pose Estimation</td>
<td>yolov8n_pose_320 </td><td> [1,3,320,320] </td><td> u8/u8 </td><td> 36.066 </td><td> bbox<br/>mAP50-90 = 0.6<br/>mAP50 = 0.843<br/>mAP75 = 0.654<br/>keypoints<br/>mAP50-90 = 0.358<br/>mAP50 = 0.646<br/>mAP75 = 0.353 </td><td> bbox<br/>mAP50-90 = 0.6<br/>mAP50 = 0.841<br/>mAP75 = 0.656<br/>keypoints<br/>mAP50-90 = 0.359<br/>mAP50 = 0.648<br/>mAP75 = 0.357 </td><td> dataset(coco val2017, 2346 images)<br/>yolov8n_pose(v8.0.207, rect = False)</td></tr>
<tr><td>yolov8n_pose_640 </td><td> [1,3,640,640] </td><td> u8/u8 </td><td> 10.88 </td><td> bbox<br/>mAP50-90 = 0.694<br/>mAP50 = 0.909<br/>mAP75 = 0.776<br/>keypoints<br/>mAP50-90 = 0.509<br/>mAP50 = 0.798<br/>mAP75 = 0.544 </td><td> bbox<br/>mAP50-90 = 0.694<br/>mAP50 = 0.909<br/>mAP75 = 0.777<br/>keypoints<br/>mAP50-90 = 0.508<br/>mAP50 = 0.798<br/>mAP75 = 0.54 </td><td> dataset(coco val2017, 2346 images)<br/>yolov8n_pose(v8.0.207, rect = False)</td></tr>
<tr><td>yolov8s_pose </td><td> [1,3,640,640] </td><td> u8/u8 </td><td> 5.568 </td><td> bbox<br/>mAP50-90 = 0.733<br/>mAP50 = 0.925<br/>mAP75 = 0.818<br/>keypoints<br/>mAP50-90 = 0.605<br/>mAP50 = 0.857<br/>mAP75 = 0.666 </td><td> bbox<br/>mAP50-90 = 0.734<br/>mAP50 = 0.925<br/>mAP75 = 0.819<br/>keypoints<br/>mAP50-90 = 0.604<br/>mAP50 = 0.859<br/>mAP75 = 0.669</td><td> dataset(coco val2017, 2346 images)<br/>yolov8s_pose(v8.0.207, rect = False)</td></tr>
</table>
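The `nncase_fps` column above is a throughput figure; for budgeting a pipeline it is often handier as per-frame latency. A small sketch of the conversion (the function name is ours, not part of nncase):

```python
def fps_to_latency_ms(fps: float) -> float:
    """Convert a frames-per-second throughput figure to per-frame latency in ms."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return 1000.0 / fps

# e.g. mobilenetv2 at 600.24 fps from the table above:
print(f"{fps_to_latency_ms(600.24):.2f} ms")  # → 1.67 ms
```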


### Demo

|[eye gaze](https://developer.canaan-creative.com/devAdmin/model/download?mid=be978f1f38b8aa2f2b649185a10c2e9c&filePath=/upload/model/official/k230/yolop_lane_seg/yolop_lane_seg.zip) | [space_resize](https://developer.canaan-creative.com/devAdmin/model/download?mid=7d48cb68a499dd54daf0ced14549b142&filePath=/upload/model/official/k230/space_resize/space_resize.zip) | [face pose](https://developer.canaan-creative.com/devAdmin/model/download?mid=5b87c02b969a9e60d48b08e357c20e31&filePath=/upload/model/official/k230/face_pose/face_pose.zip) |
|---|---|---|
|<img src="https://github.com/kendryte/nncase_docs/blob/master/gif/eye_gaze_result.gif?raw=true" alt="gif"> | <img src="https://github.com/kendryte/nncase_docs/blob/master/gif/space_resize.gif?raw=true" alt="gif">| <img src="https://github.com/kendryte/nncase_docs/blob/master/gif/face_pose_result.gif?raw=true">|

---

## K210/K510

- [Usage](https://github.com/kendryte/nncase/blob/release/1.0/docs/USAGE_EN.md)
- [FAQ](https://github.com/kendryte/nncase/blob/release/1.0/docs/FAQ_EN.md)
- [Usage (Chinese)](https://github.com/kendryte/nncase/blob/release/1.0/docs/USAGE_ZH.md)
- [FAQ (Chinese)](https://github.com/kendryte/nncase/blob/release/1.0/docs/FAQ_ZH.md)
- [Example](https://github.com/kendryte/nncase/blob/release/1.0/examples/user_guide/)


### Supported operators

- [TFLite ops](https://github.com/kendryte/nncase/blob/release/1.0/docs/tflite_ops.md)
- [Caffe ops](https://github.com/kendryte/nncase/blob/release/1.0/docs/caffe_ops.md)
- [ONNX ops](https://github.com/kendryte/nncase/blob/release/1.0/docs/onnx_ops.md)

---


## Features

- Supports multiple inputs and outputs and multi-branch structure
- Static memory allocation, no heap needed
- Operators fusion and optimizations
- Support float and quantized uint8 inference
- Support post quantization from float model with calibration dataset
- Flat model with zero copy loading
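The last point, zero-copy loading, is what the flat model format enables: the file can be mapped into memory and used in place. A rough illustration of the idea (`load_flat_model` is our sketch, not the nncase API):

```python
import mmap

def load_flat_model(path: str) -> memoryview:
    """Map a flat model file into memory and return a zero-copy view of it.

    Illustration only: nncase's real loader is native code, but the idea is
    the same -- no byte of the file is copied into a separate buffer.
    """
    f = open(path, "rb")  # kept open for the lifetime of the mapping
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    return memoryview(mm)
```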

---

## Architecture

<div align="center">
<img src="docs/imgs/arch.jpeg" alt="nncase arch" />
</div>

---

## Build from source

**It is recommended to install `nncase` directly via `pip`. The source code for the K510 and K230 chips is currently not open source, so `nncase-K510` and `nncase-kpu` (K230) cannot be built from source.**


If your model contains operators that `nncase` does not yet support, you can request them in an issue or implement them yourself and submit a PR; they will be integrated in a later release. You can also contact us for a temporary version.
The following are the steps to compile `nncase`:

```shell
git clone https://github.com/kendryte/nncase.git
cd nncase
mkdir build && cd build

# Use Ninja
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
ninja && ninja install

# Use make
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
make && make install
```

---

## Resources

### Kendryte developer community

[Kendryte developer community](https://developer.canaan-creative.com/resource) contains all resources related to K210, K510, and K230.
- 资料下载 (Downloads): pre-compiled images for the development boards of the three chips.
- 文档 (Documents): documentation for the three chips.
- 模型库 (Model zoo): examples and code for industrial, security, educational, and other scenarios that run on the K210 and K230.
- 模型训练 (Model training): a model-training platform for the K210 and K230 that supports training for various scenarios.

### Bilibili
- [Kendryte AI tutorial and application demonstration](https://space.bilibili.com/677429436)

### K210 related repo

- [K210_Yolo_framework](https://github.com/zhen8838/K210_Yolo_framework)
- [Shts!'s Blog (Japanese)](https://www.shtsno24.tokyo/2020/03/nncase-v020.html)
- [Examples](https://github.com/kendryte/canmv_examples/tree/main/01-K210)

### K230 related repo

- C: [K230_SDK](https://github.com/kendryte/k230_sdk)
- [Documents](https://github.com/kendryte/k230_docs)
- [K230 end-to-end tutorial](https://github.com/kendryte/K230_training_scripts)
- MicroPython: [Canmv_k230](https://github.com/kendryte/k230_canmv)
- [Documents](https://github.com/kendryte/k230_canmv_docs)
- [Examples](https://github.com/kendryte/canmv_examples/tree/main/02-K230)
---
59 changes: 42 additions & 17 deletions docs/FAQ_EN.md

## 1. Error installing `whl` package

### 1.1 `xxx.whl is not a supported wheel on this platform`

A: Upgrade pip to version 20.3 or later:

```shell
pip install --upgrade pip
```

---

## 2. Compile-time errors

### 2.1 Compiling a model reports "System.NotSupportedException: Not Supported *** op: XXX"

A: This exception indicates that the model contains operators, `XXX`, that are not yet supported. You can create an issue in [nncase GitHub Issues](https://github.com/kendryte/nncase/issues). The `***_ops.md` documents in the current directory list the operators already supported by each inference framework.

If `XXX` is a quantization-related operator such as `FAKE_QUANT`, `DEQUANTIZE`, or `QUANTIZE`, the model is a quantized model, which `nncase` does not currently support; compile the `kmodel` from a floating-point model instead.
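If you are unsure whether your model is quantized, you can pre-check the operator list you dump from it before compiling. A hypothetical helper (the op-name set matches the operators named above; the function itself is not part of nncase):

```python
# Quantization-related ops that signal a quantized model nncase cannot compile.
QUANT_OPS = {"FAKE_QUANT", "DEQUANTIZE", "QUANTIZE"}

def unsupported_quant_ops(op_names):
    """Return, sorted, the quantization ops present in a model's operator list."""
    return sorted(set(op_names) & QUANT_OPS)

print(unsupported_quant_ops(["CONV_2D", "QUANTIZE", "RELU"]))  # → ['QUANTIZE']
```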

### 2.2 "The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached"

This error can occur when cloning the `nncase` repository, building it yourself, and running the tests.

A: Use `sudo gedit /proc/sys/fs/inotify/max_user_instances` to change 128 to a larger value.

### 2.3 `RuntimeError: Failed to initialize hostfxr`

A: You need to install dotnet-sdk-7.0.

- Linux:

```shell
sudo apt-get update
sudo apt-get install dotnet-sdk-7.0
```

- Windows: Refer to the official Microsoft documentation.

### 2.4 "KeyNotFoundException: The given key 'K230' was not present in the dictionary"

A: Need to install `nncase-kpu`.
- Linux: `pip install nncase-kpu`
- Windows: Download the `whl` package from the [nncase GitHub releases](https://github.com/kendryte/nncase/tags) and install it manually.

> Before installing, please make sure that the version of `nncase` is consistent with the version of `nncase-kpu`.
```shell
> pip show nncase | grep "Version:"
Version: 2.8.0
(Linux) > pip install nncase-kpu==2.8.0
(Windows)> pip install nncase_kpu-2.8.0-py2.py3-none-win_amd64.whl
```
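The same check can be scripted; a sketch using only the standard library (nothing nncase-specific — it compares installed package versions and treats missing packages as not yet installed):

```python
from importlib import metadata

def versions_consistent(*packages):
    """Return (consistent, {package: version}); missing packages map to None
    and are ignored in the consistency check."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    found = {v for v in versions.values() if v is not None}
    return len(found) <= 1, versions

ok, versions = versions_consistent("nncase", "nncase-kpu")
if not ok:
    print("version mismatch:", versions)
```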

---

## 3. Runtime errors

### 3.1 The error `nncase.simulator.k230.sc: not found` occurs during inference

Or these situations:
- `"nncase.simulator.k230.sc: Permission denied."`
- `"Input/output error."`

A: Make sure that the nncase installation path is added to the `PATH` environment variable, and check whether the versions of `nncase` and `nncase-kpu` are the same:

```shell
root@a52f1cacf581:/mnt# pip list | grep nncase
```

If inconsistent, install the same version of the Python package, e.g. `pip install nncase==x.x.x`.

## 4. Runtime error on k230 development board

### 4.1 `data.size_bytes() == size = false (bool)`

A: This is usually caused by an error in the input data file used for app inference: it does not match the model's input shape or input type. This is especially likely when pre-processing is configured, because pre-processing adds nodes to the model and changes the input node. If `input_shape` and `input_type` differ from the original model, generate the input data using the newly configured `shape` and `type`.
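As a sketch, input data could be generated like this (the shape and type here are hypothetical — read the real ones from your compiled model):

```python
import numpy as np

input_shape = (1, 3, 224, 224)  # hypothetical: use your compiled model's input shape
input_dtype = np.uint8          # hypothetical: use your compiled model's input type

data = (np.random.rand(*input_shape) * 255).astype(input_dtype)
data.tofile("input.bin")

# This mirrors the runtime's check: the file size must equal the
# expected input size in bytes.
expected = int(np.prod(input_shape)) * np.dtype(input_dtype).itemsize
assert data.nbytes == expected
```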

### 4.2 `std::bad_alloc`

A: This is usually caused by a memory allocation failure. Troubleshoot as follows:

- Check whether the generated `kmodel` exceeds the currently available system memory.
- Check App for memory leaks.
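For the first point, a quick sanity check before deploying (a sketch; the file size is only a lower bound, since runtime working buffers come on top of it):

```python
import os

def kmodel_fits(kmodel_path: str, available_bytes: int):
    """Check whether the kmodel file itself fits in the available memory.

    The file size is a lower bound on runtime memory use, so False here
    guarantees an allocation failure, while True is still no guarantee.
    """
    size = os.path.getsize(kmodel_path)
    return size <= available_bytes, size
```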
