- This repository is for building and deploying examples on the FVP simulation environment.
- The examples include:
  - Person detection example without vela:
    - Input image size: 96 x 96 x 1 (Monochrome)
    - Uses the Google person detection example model, without passing it through vela, to run inference on the Cortex-M55.
    - Shows how to use the HIMAX config file to generate a vela model.
  - Person detection example run inference with Ethos-U55 NPU:
    - Input image size: 96 x 96 x 1 (Monochrome)
    - Uses the Google person detection example model, passed through vela, to run inference on the Ethos-U55 NPU.
  - Yolo Fastest Object detection example (we also provide the model training example):
    - Input image size: 256 x 256 x 3 (RGB)
    - We only release the model that passes himax_vela.ini (Ethos-U55 64 MACs configuration).
    - We can run inference using images captured by our own HIMAX 01B0 sensor.
  - Yolo Fastest XL Object detection example (we also provide the model training example):
    - Input image size: 256 x 256 x 3 (RGB)
    - We only release the model that passes himax_vela.ini (Ethos-U55 64 MACs configuration).
    - We can run inference using images captured by our own HIMAX 01B0 sensor.
- Person detection example without vela:
- To run evaluations using this software, we suggest using an Ubuntu 20.04 LTS environment.
- Download the GitHub repository:
git clone https://github.com/HimaxWiseEyePlus/ML_FVP_EVALUATION
cd ML_FVP_EVALUATION
- Install the toolkits listed below:
- Install necessary packages:
sudo apt-get update
sudo apt-get install cmake
sudo apt-get install curl
sudo apt install xterm
sudo apt install python3
sudo apt install python3.8-venv
sudo apt-get install libpython3.8-dev
- Corstone SSE-300 FVP: aligned with the Arm MPS3 development platform, it includes both the Cortex-M55 and the Ethos-U55 processors.
# Fetch Corstone SSE-300 FVP
wget https://developer.arm.com/-/media/Arm%20Developer%20Community/Downloads/OSS/FVP/Corstone-300/MPS3/FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
# Create folder to be extracted
mkdir temp
# Extract the archive
tar -C temp -xvzf FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
# Execute the self-install script
temp/FVP_Corstone_SSE-300_Ethos-U55.sh --i-agree-to-the-contained-eula --no-interactive -d CS300FVP
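- Optional: you can check that the FVP installed correctly by printing its version (the path below matches the CS300FVP install directory used above; --version is a standard FVP command-line option):
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 --version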
- GNU Arm Embedded Toolchain 10-2020-q4-major is the only version that supports the Cortex-M55.
# Fetch the Arm GCC toolchain.
wget https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Extract the archive
tar -xjf gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Add gcc-arm-none-eabi/bin into the PATH environment variable.
export PATH="${PATH}:/[location of your GCC_ARM_NONE_EABI_TOOLCHAIN_ROOT]/bin"
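- You can confirm the toolchain is visible on your PATH with:
arm-none-eabi-gcc --version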
- Arm ML embedded evaluation kit: Machine Learning (ML) applications targeted at the Arm Cortex-M55 and Arm Ethos-U55 NPU.
- We use the Arm ML embedded evaluation kit to run the person detection FVP example.
# Fetch Arm ML embedded evaluation kit
wget https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+archive/refs/tags/22.02.tar.gz
mkdir ml-embedded-evaluation-kit
tar -C ml-embedded-evaluation-kit -xvzf 22.02.tar.gz
cp -r ./source/application/main/include ./ml-embedded-evaluation-kit/source/application/main
cp -r ./source/application/tensorflow-lite-micro/include ./ml-embedded-evaluation-kit/source/application/tensorflow-lite-micro
cp -r ./source/profiler/include ./ml-embedded-evaluation-kit/source/profiler
cp -r ./source/use_case/ad/include ./ml-embedded-evaluation-kit/source/use_case/ad
cp -r ./source/use_case/asr/include ./ml-embedded-evaluation-kit/source/use_case/asr
cp -r ./source/use_case/img_class/include ./ml-embedded-evaluation-kit/source/use_case/img_class
cp -r ./source/use_case/inference_runner/include ./ml-embedded-evaluation-kit/source/use_case/inference_runner
cp -r ./source/use_case/kws/include ./ml-embedded-evaluation-kit/source/use_case/kws
cp -r ./source/use_case/kws_asr/include ./ml-embedded-evaluation-kit/source/use_case/kws_asr
cp -r ./source/use_case/noise_reduction/include ./ml-embedded-evaluation-kit/source/use_case/noise_reduction
cp -r ./source/use_case/object_detection/include ./ml-embedded-evaluation-kit/source/use_case/object_detection
cp -r ./source/use_case/vww/include ./ml-embedded-evaluation-kit/source/use_case/vww
cp -r download_dependencies.py ./ml-embedded-evaluation-kit/
cp -r set_up_default_resources.py ./ml-embedded-evaluation-kit/
cd ml-embedded-evaluation-kit/
rm -rf ./dependencies
python3 ./download_dependencies.py
./build_default.py --npu-config-name ethos-u55-64
# Go out of the ml-embedded-evaluation-kit folder and copy the example resources to the ML embedded evaluation kit
cd ..
cp -r ./resources/img_person_detect ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_person_detect ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_person_detect ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
- Build the person detection example and run it on the FVP:
- Go into the ml-embedded-evaluation-kit folder:
cd ml-embedded-evaluation-kit
- First, create the build output folder and go into it:
mkdir build_img_person_detect && cd build_img_person_detect
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to OFF, so it runs on the Cortex-M55 only:
cmake ../ -DUSE_CASE_BUILD=img_person_detect \
-DETHOS_U_NPU_ENABLED=OFF
- Finally, compile the person detection example:
make -j4
- Go back up to the ML_FVP_EVALUATION folder:
cd ../../
- Run it with the command below:
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ml-embedded-evaluation-kit/build_img_person_detect/bin/ethos-u-img_person_detect.axf
- You will see the FVP telnet terminal result below:
- Start inference:
- Run inference:
- How to use the HIMAX config file to generate the vela model:
- Go into the vela folder:
cd vela
- Install necessary package:
pip install ethos-u-vela
- Run vela with the HIMAX config ini file, MACs=64, and the person detection example tflite model:
vela --accelerator-config ethos-u55-64 --config himax_vela.ini --system-config My_Sys_Cfg --memory-mode My_Mem_Mode_Parent --output-dir ./img_person_detect ./img_person_detect/person_int8_model.tflite
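- Optional: the My_Sys_Cfg and My_Mem_Mode_Parent names passed above must match sections defined in himax_vela.ini (vela config files use [System_Config.<name>] and [Memory_Mode.<name>] sections). As a quick sanity check, you can list the section headers of the config file:
grep -E '^\[' himax_vela.ini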
- You will see the vela report on the terminal. (Note: a Total SRAM used value of less than 900 KB is better.)
- Person detection example run inference with Ethos-U55 NPU:
- Go into the ml-embedded-evaluation-kit folder:
cd ml-embedded-evaluation-kit
- First, create the build output folder and go into it:
mkdir build_img_person_detect_npu && cd build_img_person_detect_npu
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to ON, so it runs on the Cortex-M55 and the Ethos-U55 NPU:
cmake ../ -DUSE_CASE_BUILD=img_person_detect \
-DETHOS_U_NPU_ENABLED=ON
- Compile the person detection example:
make -j4
- Go back up to the ML_FVP_EVALUATION folder:
cd ../../
- Run it with the command below. Be careful of the ethosu.num_macs number in the command: if the MACs number does not match the vela model, the invoke will fail.
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_person_detect_npu/bin/ethos-u-img_person_detect.axf
- You will see the FVP telnet terminal result below:
- Start inference:
- Run inference:
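- Optional: if you want to confirm the ethosu.num_macs parameter name and its allowed values before running, the FVP can list its configuration parameters (--list-params is a generic FVP command-line option; this is just a suggested check):
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 --list-params | grep ethosu.num_macs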
- Yolo Fastest Object detection example:
- Go into the ml-embedded-evaluation-kit folder:
cd ml-embedded-evaluation-kit
- First, create the build output folder and go into it:
mkdir build_img_yolofastest_relu6_256_himax_npu && cd build_img_yolofastest_relu6_256_himax_npu
- Second, configure the Yolo Fastest Object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs on the Ethos-U55 NPU:
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_relu6_256_himax \
-DETHOS_U_NPU_ENABLED=ON
- Compile the Yolo Fastest Object detection example:
make -j4
- Go back up to the ML_FVP_EVALUATION folder:
cd ../../
- Run it with the command below. Be careful of the ethosu.num_macs number in the command: if the MACs number does not match the vela model, the invoke will fail.
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_relu6_256_himax.axf
- You will see the FVP telnet terminal result below:
- Start inference:
- Run inference:
- Yolo Fastest XL Object detection example:
- Go into the ml-embedded-evaluation-kit folder:
cd ml-embedded-evaluation-kit
- First, create the build output folder and go into it:
mkdir build_img_yolofastest_xl_relu6_256_himax_npu && cd build_img_yolofastest_xl_relu6_256_himax_npu
- Second, configure the Yolo Fastest XL Object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs on the Ethos-U55 NPU:
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_xl_relu6_256_himax \
-DETHOS_U_NPU_ENABLED=ON
- Compile the Yolo Fastest XL Object detection example:
make -j4
- Go back up to the ML_FVP_EVALUATION folder:
cd ../../
- Run it with the command below. Be careful of the ethosu.num_macs number in the command: if the MACs number does not match the vela model, the invoke will fail.
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_xl_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_xl_relu6_256_himax.axf
- You will see the FVP telnet terminal result below:
- Start inference:
- Run inference:
- Add more test images
- You can add more test images under ml-embedded-evaluation-kit/resources/img_person_detect/samples, ml-embedded-evaluation-kit/resources/img_yolofastest_relu6_256_himax/samples, and ml-embedded-evaluation-kit/resources/img_yolofastest_xl_relu6_256_himax/samples. Configure and compile the examples again to test the new images (see the sketch below).
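- A minimal sketch of that flow for the person detection NPU build (the image file name here is just a placeholder):
cp my_new_test_image.bmp ml-embedded-evaluation-kit/resources/img_person_detect/samples/
cd ml-embedded-evaluation-kit/build_img_person_detect_npu
cmake ../ -DUSE_CASE_BUILD=img_person_detect \
-DETHOS_U_NPU_ENABLED=ON
make -j4
cd ../../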
- If you want to run the MobileNet image classification example:
- Run inference with the vela MACs=64 model (or a different MACs configuration):
- You should make sure that ml-embedded-evaluation-kit/source/use_case/img_class/usecase.cmake uses the vela MACs 64 or MACs 128 model at line 50.
- Your build commands when using the default MACs 64 model will be:
cmake ../ -DUSE_CASE_BUILD=img_class \
-DETHOS_U_NPU_ENABLED=ON \
-DETHOS_U_NPU_CONFIG_ID=H64
make -j4
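- After the build, running it on the FVP follows the same pattern as the other examples; a sketch (build_img_class is an assumed build folder name, and the .axf name follows the ethos-u-<use_case> pattern used above):
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_class/bin/ethos-u-img_class.axf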