This repository is a tutorial on training a custom machine learning model on the Edge Impulse platform and deploying it on the Grove Vision AI V2 using the Himax AI platform.

HimaxWiseEyePlus/Edge-Impulse-model-on-Himax-AI

Deploy custom trained Edge Impulse models on Himax-AI web toolkit

  • This repository explains how to train a custom machine learning model on Edge Impulse Studio and deploy it on the Grove Vision AI V2 using the Himax AI developer toolkit.
  • To run evaluations using this software, we suggest an Ubuntu 20.04 LTS environment with Google Chrome as your primary browser.

The tutorial follows this process:

(Workflow diagram: Edge Impulse model to Himax AI)

How to train model on Edge Impulse?

This section describes how to collect data, create an impulse, generate features and finally train a model on the Edge Impulse platform. Our public project can be found here.

Log into the Edge Impulse Studio

Note: You can get started here by making a free account.

  • Step 1: Create a project using Create New project button.

Screenshot from 2024-04-01 10-49-45

  • Step 2: Project goal: For the purpose of this tutorial, we'll be building an object detection use case to detect two objects: a cup and a computer mouse.

  • Step 3: Data collection

Here we will be utilizing Edge Impulse's data acquisition feature to collect data for our two classes. Alternatively, custom datasets can also be uploaded onto the platform using the 'Add existing data' option or an organizational data bucket can be connected.

Screenshot from 2024-04-01 11-10-51

In our case, 219 photos across the two classes were captured and labelled. An 82%/18% split was used for training and testing respectively.
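As a sanity check on the split above, the counts work out as follows (a hypothetical helper; Edge Impulse computes the split for you in Studio):

```python
def split_counts(total, train_fraction):
    """Return (train, test) image counts for a given split fraction."""
    train = round(total * train_fraction)
    return train, total - train

# 219 images with an 82%/18% train/test split
train, test = split_counts(219, 0.82)
print(train, test)  # 180 train, 39 test
```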

  • Step 4: Train the object detection model

    • Designing an impulse: An impulse is a pipeline that defines the model training flow. It takes in images, performs feature engineering, and uses a learning block to perform the desired task. Greater detail and other applications can be found here.

      Starting with the dataset we collected, we resize the images to 160x160 pixels. This will be the input to our transfer learning block.

      Screenshot from 2024-04-01 15-40-34
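Edge Impulse performs the resize inside the impulse, but the basic idea can be sketched with a minimal nearest-neighbour resize in pure Python (the helper name is illustrative, not from the Edge Impulse SDK):

```python
def resize_nearest(img, out_w, out_h):
    """Nearest-neighbour resize of a 2D list-of-lists image."""
    in_h, in_w = len(img), len(img[0])
    return [[img[(y * in_h) // out_h][(x * in_w) // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# Upscale a 2x2 image to 4x4; each source pixel becomes a 2x2 block
small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, 4, 4)
```

Studio exposes several resize modes; this sketch only illustrates the sampling step.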

    • Generating features: The feature explorer stage helps a developer understand and analyze their dataset. The graph on the right displays a dimensionally reduced form of the input data (images). This not only provides compact input features for the model but also helps one understand the relationships between different classes.

      Screenshot from 2024-04-01 15-49-23

    • Training the model: Edge Impulse Studio supports training multiple model architectures such as MobileNetV1, YOLOv5 or YOLOX. Alternatively, a user can even 'bring their own model'. For this tutorial, we trained a YOLOv5 model based on Ultralytics YOLOv5, which supports RGB input at any resolution (square images only).

      Hyperparameters are as follows:

      • Number of epochs: 20
      • Model size: Nano
      • Batch size: 32

      Note: There is a 20-minute training time limit on the community version.

      Screenshot from 2024-04-01 16-28-43

      Once the model is trained, it can be downloaded from the dashboard. To be accelerated by the Ethos-U NPU, the network operators must be quantised to either 8-bit (unsigned or signed) or 16-bit (signed).

      Screenshot from 2024-04-08 13-04-12
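The int8 quantisation mentioned above maps float values to 8-bit integers through a scale and a zero point. A minimal sketch of the affine scheme (function names are illustrative, not from any SDK):

```python
def quantize_int8(x, scale, zero_point):
    """Affine quantisation: real value -> int8 code, clamped to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize_int8(q, scale, zero_point):
    """Recover the (approximate) real value from an int8 code."""
    return (q - zero_point) * scale

# With scale 1/128 and zero point 0, 0.5 maps to code 64 and back exactly
code = quantize_int8(0.5, 1 / 128, 0)
```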

Model conversion to vela

Vela is a tool that compiles a TensorFlow Lite for Microcontrollers neural network model into an optimised version that can run on an embedded system containing an Arm Ethos-U NPU. In order to flash the model onto the Grove Vision AI v2, we need to convert the int8 tflite file downloaded from Edge Impulse into a _vela.tflite file.

We'll do this on Google Colab. Once you have a notebook ready, upload your int8 quantised model to Google Colab using the Upload button on the sidebar.

In code cells, run the following lines:

!pip install ethos-u-vela

And then:

!vela [your_model_name].tflite --accelerator-config ethos-u55-64

Download the converted tflite model from the output folder.

Screenshot from 2024-04-01 16-20-27

Note: The model outputs an array of shape [1,1575,7], where 1575 is the number of bounding boxes and 7 follows the format of Edge Impulse's YOLOv5 learn block: (xcenter, ycenter, width, height, score, cls...), where cls... represents the class probabilities. In our case we have two classes.
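The scenario_app performs this post-processing in C; as a hedged illustration only, the decode step described in the note can be sketched in Python (names and threshold are hypothetical, and non-maximum suppression is omitted):

```python
def decode_yolov5(rows, num_classes=2, conf_threshold=0.5):
    """Turn [xc, yc, w, h, score, cls...] rows into (x1, y1, x2, y2, class, conf) boxes."""
    detections = []
    for row in rows:
        xc, yc, w, h, score = row[:5]
        cls_probs = row[5:5 + num_classes]
        best = max(range(num_classes), key=lambda i: cls_probs[i])
        conf = score * cls_probs[best]  # objectness * best class probability
        if conf >= conf_threshold:
            # convert centre form to corner form
            detections.append((xc - w / 2, yc - h / 2,
                               xc + w / 2, yc + h / 2, best, conf))
    return detections

# One confident class-1 box and one low-confidence box that gets filtered out
rows = [[0.5, 0.5, 0.2, 0.2, 0.9, 0.1, 0.8],
        [0.5, 0.5, 0.2, 0.2, 0.3, 0.5, 0.5]]
boxes = decode_yolov5(rows)
```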

scenario_app post processing

Building the firmware and flashing the image for our object detection model onto the Grove Vision AI v2 is heavily inspired by and referenced from Seeed_Grove_Vision_AI_Module_V2.

Clone the repository into a directory of your choice:

git clone https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2.git

Now we'll copy files from this repository into the Seeed_Grove_Vision_AI_Module_V2 folder:

cd EPII_CM55M_APP_S
cp -r [location of Edge-Impulse-model-on-Himax-AI]/updated_files/tflm_yolov5_od_ei ./app/scenario_app/
cp -r [location of Edge-Impulse-model-on-Himax-AI]/updated_files/main.c ./app/
cp -r [location of Edge-Impulse-model-on-Himax-AI]/updated_files/spi_protocol.h ./library/spi_ptl/

You can build your own scenario_app or modify one of our existing applications to build firmware for custom applications. To build the firmware for the Edge Impulse YOLOv5 model, we have added an APP_TYPE called tflm_yolov5_od_ei.

  • To run this scenario_app, change the APP_TYPE to tflm_yolov5_od_ei in the makefile.

    APP_TYPE = tflm_yolov5_od_ei
    
  • Build the firmware by referencing the 'Build the firmware at Linux environment' section.

  • Connect Grove Vision AI v2 to your computer.

  • Make sure Minicom is disconnected.

  • Grant permission to access the device:

    sudo setfacl -m u:[USERNAME]:rw [COM NUMBER]
    

    Where [USERNAME] is your username on the computer and [COM NUMBER] is the COM port of your Seeed Grove Vision AI v2. An example is sudo setfacl -m u:kris:rw /dev/ttyACM0.

    Note: Use Google Chrome browser for best results.

  • Open a terminal and enter the following command:

    • port: the COM number of your Seeed Grove Vision AI Module V2, for example, /dev/ttyACM0
    • baudrate: 921600
    • file: your firmware image [maximum size is 1MB]
    • model: you can burn multiple models [model tflite] [position of model on flash] [offset]
    python3 xmodem/xmodem_send.py --port=[your COM number] --baudrate=921600 --protocol=xmodem --file=we2_image_gen_local/output_case1_sec_wlcsp/output.img --model="model_zoo/tflm_yolov5_od_ei/ei-mouse-vs-cup-object-detection-int8-yolov5-160x160_vela.tflite 0xB7B000 0x00000"
    

    Note: Make sure you create a directory named model_zoo/tflm_yolov5_od_ei and place your vela model inside it.

    • It will start burning the firmware image and model automatically.
    • Press the reset button on the Seeed Grove Vision AI Module V2 and it should successfully run the model.

Running on Himax AI toolkit

The Himax AI toolkit is a developer toolkit for running inference with embedded machine learning (ML) models.

  • Disconnect the UART in your Tera Term or Minicom first.
  • Download the Himax AI web toolkit and extract the contents.
  • Launch the GUI by opening the index.html file.
  • Make sure you select Grove Vision AI (V2) and press the Connect button. Screenshot from 2024-04-02 13-11-40

Note: To display your own classes, you just need to change the class names, for example from ["mouse","cup"] to ["motorcycle","person","bottle"], in index-legacy.77bc29bc.js.
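One way to automate that edit, assuming the class array appears literally as a JSON list inside the minified file (a hypothetical helper; verify against your copy of index-legacy.77bc29bc.js before relying on it):

```python
import json

def replace_class_names(js_text, old_names, new_names):
    """Swap one literal JSON array of class names for another in minified JS."""
    old = json.dumps(old_names, separators=(",", ":"))  # e.g. ["mouse","cup"]
    new = json.dumps(new_names, separators=(",", ":"))
    return js_text.replace(old, new)

# Example on a minified-JS-like snippet
src = 'var labels=["mouse","cup"];'
out = replace_class_names(src, ["mouse", "cup"],
                          ["motorcycle", "person", "bottle"])
```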
