DRP-AI TVM1 Application Example (RZ/V2H)
This page explains how to use the application provided in this directory, which is an example that runs ResNet18 inference on the target board.
To run inference with AI model data compiled by DRP-AI TVM1, an inference application is necessary.
This application must use the DRP-AI TVM1 Runtime Library API and must be written in C++.
Here, we explain how to compile and deploy the application example for ResNet18, which was already compiled in Compile AI models.
| File/Directory | Details |
|---|---|
| exe | Execution environment required for running the application on the board |
| toolchain | Application compilation toolchain |
| CMakeLists.txt | CMake configuration |
| tutorial_app.cpp | C++ application main source code |
| *.cpp | Other C++ application source code |
| *.h | C++ header files |
| README.md | This file. Instructions to use the application. |
Requirements are the same as in Installation.
Move to the application directory and create a `build` directory.
cd $TVM_ROOT/apps
mkdir build
cd build
Run the `cmake` command.
cmake -DCMAKE_TOOLCHAIN_FILE=./toolchain/runtime.cmake -DV2H=ON ..
In the `build` directory, run the `make` command.
make -j$(nproc)
After running the `make` command, the following file will be generated in the `build` directory.
- tutorial_app
This section assumes that the user has prepared the Boot Environment on the target board.
Copy the following files to the rootfs of Boot Environment.
| Name | Path | Details |
|---|---|---|
| Runtime Library | drp-ai_tvm/obj/build_runtime/${PRODUCT}/libtvm_runtime.so | Binary provided under the obj directory. Use the libtvm_runtime.so in the directory with the corresponding product name. |
| Model Data | drp-ai_tvm/tutorials/resnet* | Model compiled in Compile AI models. DRP-AI Pre-processing Runtime object files (the preprocess directory) are also included. |
| Input Data | drp-ai_tvm/apps/exe/sample.bmp | Windows Bitmap file, which is the input data for image classification. |
| Label List | drp-ai_tvm/apps/exe/synset_words_imagenet.txt, drp-ai_tvm/apps/exe/ImageNetLabels.txt | synset_words_imagenet.txt: label list for ResNet18 post-processing. ImageNetLabels.txt: label list for ResNet50 post-processing when compiling the TensorFlow Hub model. |
| Application | drp-ai_tvm/apps/build/tutorial_app | Application compiled on this page. |
The rootfs should look like below.
Note that if you compiled the model from TensorFlow Hub, rename the label list ImageNetLabels.txt to synset_words_imagenet.txt and use it.
/
└── home
└── root
└── tvm
├── libtvm_runtime.so
├── resnet50_v1_onnx
│ ├── deploy.json
│ ├── deploy.params
│ ├── deploy.so
│ └── preprocess
│ ├── aimac_desc.bin
│ ...
│ └── weight.bin
├── sample.bmp
├── synset_words_imagenet.txt
└── tutorial_app
As a working example, the full series of commands is shown below.
cd $TVM_ROOT/../
mkdir tvm
cp $TVM_ROOT/obj/build_runtime/$PRODUCT/libtvm_runtime.so tvm/
cp $TVM_ROOT/apps/exe/sample.bmp tvm/
cp $TVM_ROOT/apps/exe/ImageNetLabels.txt tvm/
cp $TVM_ROOT/apps/exe/synset_words_imagenet.txt tvm/
cp $TVM_ROOT/apps/build/tutorial_app* tvm/
cp -r $TVM_ROOT/tutorials/resnet50_v1_onnx tvm/
cp -r $TVM_ROOT/tutorials/resnet18_torch/ tvm/
cp -r $TVM_ROOT/tutorials/resnet50_tflite/ tvm/
cp -r $TVM_ROOT/tutorials/resnet50_v1_onnx_cpu/ tvm/
tar cvfz tvm.tar.gz tvm/
After booting up the board, move to the directory where you stored the application and run the `tutorial_app` file.
cd ~
tar xvfz tvm.tar.gz
cd ~/tvm
export LD_LIBRARY_PATH=.
cp -r resnet50_v1_onnx resnet18_onnx
./tutorial_app
# ./tutorial_app 5  # run DRP-AI at 315 MHz
rm -r resnet18_onnx
cp -r resnet18_torch resnet18_onnx
./tutorial_app
rm -r resnet18_onnx
cp -r resnet50_v1_onnx_cpu resnet18_onnx
./tutorial_app
rm -r resnet18_onnx
cp synset_words_imagenet.txt synset_words_imagenet.txt.bak
cp ImageNetLabels.txt synset_words_imagenet.txt
cp -r resnet50_tflite resnet18_onnx
./tutorial_app
cp synset_words_imagenet.txt.bak synset_words_imagenet.txt
The application runs the ResNet inference on sample.bmp.
ResNet18: ONNX Model Zoo
The application uses the DRP-AI Pre-processing Runtime for pre-processing.
For more details on DRP-AI Pre-processing Runtime, please refer to DRP-AI Pre-processing Runtime Documentation.
Processing details are originally defined in the compile script provided in Compile AI models.
In this example, tutorial_app.cpp changes the parameters to run the following pre-processing.
(Changed parameters are shown in bold.)
- Input data
- Shape: 640x480x3
- Format: BGR
- Order: HWC
- Type: uint8
- Output data
- Shape: 224x224x3
- Format: RGB
- Order: CHW
- Type: float
- Preprocessing operations:
- Resize
- Normalize
- cof_add =[-123.6875, -116.2500, -103.5000]
- cof_mul =[0.0171, 0.0175, 0.0174]
DRP-AI TVM1 Runtime Library API
For the list of DRP-AI TVM1 Runtime API functions used in the application, please see the MERA Wrapper API References.
As preparation, you need to set up the Build Environment with the Linux Package and the DRP-AI Support Package.
Follow the instructions in the DRP-AI Support Package Release Note and, before running the `bitbake` command, carry out the following steps.
Add the following statement at the end of the build/conf/local.conf file.
IMAGE_INSTALL_append =" opencv "
Run the `bitbake` command as explained in the DRP-AI Support Package.