[Docs] add sdk's build instructions (open-mmlab#324)
* add sdk's build instructions

* update according to review comments
lvhan028 committed Dec 23, 2021
1 parent 98da5e8 commit cdd0cf0
docs/en/build.md
Install cmake (>=3.14.0). You can refer to the [cmake website](https://cmake.org/install) for more detailed installation instructions.

```bash
sudo apt-get install -y libssl-dev
wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0.tar.gz
tar -zxvf cmake-3.20.0.tar.gz
cd cmake-3.20.0
./bootstrap
make -j$(nproc)
sudo make install
```
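If cmake is already installed, a quick way to check it against the version requirement is to compare version strings with `sort -V` (a sketch, assuming GNU coreutils; `version_ge` is a hypothetical helper, not part of MMDeploy):

```bash
# Succeeds when version $1 >= version $2 under version-sort ordering.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check the cmake on PATH against the 3.14.0 requirement.
if command -v cmake >/dev/null 2>&1; then
    current=$(cmake --version | head -n1 | awk '{print $3}')
    if version_ge "$current" 3.14.0; then
        echo "cmake $current satisfies >= 3.14.0"
    else
        echo "cmake $current is too old; build a newer one as shown above" >&2
    fi
fi
```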

- GCC 7+

MMDeploy requires compilers that support C++17.
```bash
# Add the PPA providing newer toolchains if Ubuntu < 18.04
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update

sudo apt-get install -y gcc-7 g++-7
```
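To confirm that the compiler you will later hand to cmake is new enough for C++17, you can inspect its major version (a sketch; `compiler_major` is a hypothetical helper, not part of MMDeploy):

```bash
# Extract the major component from a version string, e.g. "7.5.0" -> "7".
compiler_major() {
    printf '%s\n' "$1" | cut -d. -f1
}

# Depending on the build, -dumpversion prints "7" or "7.x.y"; both work here.
if command -v g++-7 >/dev/null 2>&1; then
    if [ "$(compiler_major "$(g++-7 -dumpversion)")" -ge 7 ]; then
        echo "g++-7 supports C++17"
    fi
fi
```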

### Build backend support
Build the inference engine extension libraries you need.
- [ONNX Runtime](backends/onnxruntime.md)
- [TensorRT](backends/tensorrt.md)
- [ncnn](backends/ncnn.md)
- [pplnn](backends/pplnn.md)
- [OpenVINO](backends/openvino.md)

### Install mmdeploy
```bash
pip install -e .
```
Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements.
To use optional dependencies, install them manually with `pip install -r requirements/optional.txt`, or specify the desired extras when calling `pip` (e.g. `pip install -e .[optional]`).
Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.

### Build SDK

You can skip this chapter if you are only interested in the model converter.

#### Dependencies

Currently, the SDK is only tested on Linux x86-64; more platforms will be supported in the future. The following packages are required to build the MMDeploy SDK.

The installation commands below are given for Ubuntu 18.04.

- OpenCV 3+

```bash
sudo apt-get install libopencv-dev
```

- spdlog 0.16+

```bash
sudo apt-get install libspdlog-dev
```

On Ubuntu 16.04, please use the following commands instead:
```bash
wget http://archive.ubuntu.com/ubuntu/pool/universe/s/spdlog/libspdlog-dev_0.16.3-1_amd64.deb
sudo dpkg -i libspdlog-dev_0.16.3-1_amd64.deb
```

You can also build spdlog from source to get its latest features, but be sure to enable the **`-fPIC`** compilation flag.
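A from-source build might look like this (a sketch; the release tag is an assumption, pick whichever version you need). Setting `CMAKE_POSITION_INDEPENDENT_CODE=ON` is the cmake-native way of adding `-fPIC`:

```bash
# Tag v1.9.2 is an example; choose the release you want.
git clone --depth 1 --branch v1.9.2 https://github.com/gabime/spdlog.git
cmake -S spdlog -B spdlog/build -DCMAKE_POSITION_INDEPENDENT_CODE=ON
cmake --build spdlog/build -j"$(nproc)"
sudo cmake --build spdlog/build --target install
```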

- pplcv

A high-performance image processing library of openPPL, supporting x86 and cuda platforms.<br>
It is **OPTIONAL** and only needed when the `cuda` platform is enabled.
```bash
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
./build.sh cuda
```

- backend engines

The SDK uses the same backends as the model converter. Please follow the [build backend](#build-backend-support) guide to install the backends you are interested in.

#### Set Build Option

- Turn on SDK build switch

`-DMMDEPLOY_BUILD_SDK=ON`


- Enabling Devices

By default, only the CPU device is included in the target devices. You can enable other devices by
passing a semicolon-separated list of device names to the `MMDEPLOY_TARGET_DEVICES` variable, e.g. `-DMMDEPLOY_TARGET_DEVICES="cpu;cuda"`.<br>
Currently, the following devices are supported.

| device | name | path setter |
|--------|-------|-------------|
| Host | cpu | N/A |
| CUDA | cuda | CUDA_TOOLKIT_ROOT_DIR & pplcv_DIR |

If you have multiple CUDA versions installed on your system, you will need to pass `CUDA_TOOLKIT_ROOT_DIR` to cmake to specify the desired version.<br>
Meanwhile, `pplcv_DIR` has to be provided in order to build the image processing operators on the cuda platform.


- Enabling inference engines

**By default, no target inference engines are set**, since it's highly dependent on the use case.
`MMDEPLOY_TARGET_BACKENDS` must be set to a semicolon-separated list of inference engine names,
e.g. `-DMMDEPLOY_TARGET_BACKENDS="trt;ort;pplnn;ncnn;openvino"`.
A path to each inference engine library is also needed. The following backends are currently supported:

| library | name | path setter |
|-------------|----------|-----------------|
| PPL.nn | pplnn | pplnn_DIR |
| ncnn | ncnn | ncnn_DIR |
| ONNXRuntime | ort | ONNXRUNTIME_DIR |
| TensorRT | trt | TENSORRT_DIR & CUDNN_DIR |
| OpenVINO | openvino | InferenceEngine_DIR |

- Enabling codebase's postprocess components

`MMDEPLOY_CODEBASES` MUST be specified as a semicolon-separated list of codebase names.
The currently supported codebases are 'mmcls', 'mmdet', 'mmedit', 'mmseg', and 'mmocr'.
Instead of listing them one by one, you can also pass `all` to enable all of them, i.e.
`-DMMDEPLOY_CODEBASES=all`.


- Put it all together

The following is a recipe for building the MMDeploy SDK with the cpu device and ONNXRuntime support:
```bash
mkdir build && cd build
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DONNXRUNTIME_DIR=/path/to/onnxruntime \
-DMMDEPLOY_TARGET_DEVICES=cpu \
-DMMDEPLOY_TARGET_BACKENDS=ort \
-DMMDEPLOY_CODEBASES=all
cmake --build . -- -j$(nproc) && cmake --install .
```

Here is another example that builds the MMDeploy SDK with the cuda device and the TensorRT backend:

```bash
mkdir build && cd build
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-Dpplcv_DIR=/path/to/ppl.cv/install/lib/cmake/ppl \
-DTENSORRT_DIR=/path/to/tensorrt \
-DCUDNN_DIR=/path/to/cudnn \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS=trt \
-DMMDEPLOY_CODEBASES=all
cmake --build . -- -j$(nproc) && cmake --install .
```
