diff --git a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/deployment.png b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/deployment.png
index 31124114e4..dfd935377f 100644
Binary files a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/deployment.png and b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/deployment.png differ
diff --git a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_actions.png b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_actions.png
deleted file mode 100644
index 7b2382db06..0000000000
Binary files a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_actions.png and /dev/null differ
diff --git a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_model_path.png b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_model_path.png
deleted file mode 100644
index 56925b1b15..0000000000
Binary files a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_model_path.png and /dev/null differ
diff --git a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_releases.png b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_releases.png
deleted file mode 100644
index 96c1d7ca21..0000000000
Binary files a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/assets/github_releases.png and /dev/null differ
diff --git a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/content.md b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/content.md
index 36e88aba65..7dc7e8029f 100644
--- a/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/content.md
+++ b/content/hardware/06.nicla/boards/nicla-vision/tutorials/image-classification/content.md
@@ -143,33 +143,18 @@ The ML model is trained and already optimized to be used with microcontrollers.

### Deploy

-Deploying the ML model to your board requires a few steps. The Edge Impulse® Studio provides an export feature for OpenMV. Switch to the deployment section in the menu, select OpenMV under "Build firmware" and click "build". This will create an OpenMV compatible library and download it as a zip file. Unzip it.
+Deploying the ML model to your board requires a few simple steps. The Edge Impulse® Studio provides an export feature for OpenMV. Since the Nicla Vision doesn't have an SD card slot to store the model file on, we need to bake the machine learning model into the firmware and load it from the flash. Switch to the deployment section in the menu, select "OpenMV Firmware" under "Configure your deployment" and click "build". This will create an OpenMV-compatible firmware that includes the machine learning model. The download should start automatically. Once it finishes, unzip the file.

![The Edge Impulse® Studio has a built-in export function for OpenMV](assets/deployment.png)

-Since the Nicla Vision doesn't have any on-board SRAM we need to build the machine learning model into the firmware and load it from the flash. To do so, go to https://github.com/openmv/openmv and [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository.
-In your fork click on "Actions" and enable the workflows by clicking on the green button.
-Rename the machine learning model and the label file to fruit_detection.tflite and fruit_detection.txt respectively. In your fork, replace the built-in machine learning model under `src/lib/libtf/models` with the model you downloaded from Edge Impulse® Studio. Commit the files and push the commit to the repository. It will build a new firmware automatically.
-
-![The model that shall be baked into the firmware needs to be stored under src/lib/libtf/models](assets/github_model_path.png)
-
-You can inspect the build process under "Actions".
-
-![In the actions section you can monitor the build process once it starts.](assets/github_actions.png)
-
-Once the firmware has been built you can download it from the releases section that you can find in the "Code" tab. Put the board in bootloader mode and click on the connect symbol in the OpenMV IDE. In the dialog select "Load a specific firmware". Select `firmware.bin` in the folder that you just created and flash it to the board.
-
-![In the release section you can find the generated firmware ready to download and install.](assets/github_releases.png)
+Put the board in bootloader mode and click on the connect symbol in the OpenMV IDE. In the dialog, select "Load a specific firmware". Select `edge_impulse_firmware_arduino_nicla_vision.bin` from the folder that you created when unzipping the downloaded file and flash it to the board.

### Run the Script

-The final step is to run the **ei_image_classification.py** script. Open it in the OpenMV. As the model is now baked into the firmware you need to adjust the lines where it loads the model and the labels as follows:
+The final step is to run the **ei_image_classification.py** script. Open it in the OpenMV IDE.

-```python
-labels, net = tf.load_builtin_model('fruit_detection')
-```
-
-Also, replace the print statement in the innermost for loop with the following code:
+Replace the print statement in the innermost for loop with the following code:

```python
confidence = predictions_list[i][1]
@@ -199,7 +184,7 @@ sensor.set_hmirror(True)
sensor.set_windowing((240, 240)) # Set 240x240 window.
sensor.skip_frames(time=2000)    # Let the camera adjust.

-labels, net = tf.load_builtin_model('fruit_detection')
+labels, net = tf.load_builtin_model('trained')

clock = time.clock()
while(True):
@@ -231,6 +216,4 @@ You have learned about classification as a machine learning concept which catego

## Troubleshooting

-### GitHub Workflow
-
-If you’re encountering difficulties while using the GitHub workflow in your forked repository, make sure that you chose `MobileNetV2 96x96 0.1` as model type, otherwise the model will likely be too big. Also make sure you are changing the files in your own, forked repository and not in the original OpenMV repository.
+If you encounter difficulties while deploying your machine learning model, make sure that you chose `MobileNetV2 96x96 0.1` as the model type; otherwise, the model will likely be too big to fit on the board.
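
For reference, the hunks above fit together roughly as shown in the following minimal sketch of the modified `ei_image_classification.py` loop. The built-in model name `trained` and the `tf.load_builtin_model()` call come from the diff itself; the `net.classify()`/`obj.output()` calls follow the structure of the Edge Impulse®-generated OpenMV script, and the 0.8 confidence threshold plus the extra "It's a ..." message are illustrative assumptions, not part of the change.

```python
# Minimal sketch of the modified classification loop (OpenMV MicroPython).
import sensor, time, tf

sensor.reset()                        # Reset and initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565)   # Use color images.
sensor.set_framesize(sensor.QVGA)     # 320x240 resolution.
sensor.set_windowing((240, 240))      # Set 240x240 window.
sensor.skip_frames(time=2000)         # Let the camera adjust.

# The model is baked into the firmware, so it is loaded by name, not from a file.
labels, net = tf.load_builtin_model('trained')

clock = time.clock()
while(True):
    clock.tick()
    img = sensor.snapshot()
    for obj in net.classify(img):
        # Pair each label with its confidence score for this frame.
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            confidence = predictions_list[i][1]
            label = predictions_list[i][0]
            print("%s = %f" % (label, confidence))
            # Report only classifications above an assumed 0.8 threshold.
            if confidence > 0.8:
                print("It's a", label, "!")
    print(clock.fps(), "fps")
```

Running a script along these lines in the OpenMV IDE should print the label/confidence pairs in the serial terminal for every frame; adjust the threshold to match how your own model behaves.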