From 8791306f7456411751c3bda49e2f9280a6989356 Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 28 Jan 2025 19:42:03 +0000 Subject: [PATCH] review Introduction to TinyML --- .../Overview-1.md | 8 +++---- .../introduction-to-tinyml-on-arm/_index.md | 4 ++-- .../_next-steps.md | 8 +++++++ .../build-model-8.md | 4 ++-- .../env-setup-5.md | 14 ++++++----- .../env-setup-6-FVP.md | 12 +++++++--- .../setup-7-Grove.md | 23 ++++++++----------- 7 files changed, 42 insertions(+), 31 deletions(-) diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md index f9dcd7cf00..d57c28a609 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md @@ -6,12 +6,12 @@ weight: 2 layout: learningpathall --- -This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be put on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to setup on your host machine and target device to facilitate compilation and ensure smooth integration across all devices. +This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be used on even the smallest devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across devices. In this section, you get an overview of the domain with real-life use-cases and available devices. ## Overview -TinyML represents a significant shift in machine learning deployment. 
Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and less processing capabilities. TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems. +TinyML represents a significant shift in machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and reduced processing capability. TinyML has gained popularity because it enables AI applications to operate in real time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems. ### Benefits and applications @@ -42,7 +42,7 @@ TinyML is being deployed across multiple industries, enhancing everyday experien ### Examples of Arm-based devices -There are many Arm-based off-the-shelf devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive. +There are many Arm-based devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive. #### Raspberry Pi 4 and 5 @@ -64,6 +64,6 @@ The Arduino Nano, equipped with a suite of sensors, supports TinyML and is ideal In addition to hardware, there are software platforms that can help you build TinyML applications. -Edge Impulse platform offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.
+Edge Impulse offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards. Now that you have an overview of the subject, move on to the next section where you will set up an environment on your host machine. \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md index 50be10a4e9..020c254f54 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md @@ -14,13 +14,13 @@ learning_objectives: - Understand the benefits of deploying AI models on Arm-based edge devices. - Select Arm-based devices for TinyML. - Install and configure a TinyML development environment. - - Perform best practices for ensuring optimal performance on constrained edge devices. + - Apply best practices for ensuring optimal performance on constrained edge devices. prerequisites: - Basic knowledge of machine learning concepts. - A Linux host machine or VM running Ubuntu 22.04 or higher. - - A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) **or** an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP). + - A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) or an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP). author_primary: Dominica Abena O. 
Amanfo diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md index bd83caede5..4406277e64 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md @@ -9,6 +9,14 @@ further_reading: title: TinyML Brings AI to Smallest Arm Devices link: https://newsroom.arm.com/blog/tinyml type: blog + - resource: + title: Arm Compiler for Embedded + link: https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded + type: documentation + - resource: + title: Arm GNU Toolchain + link: https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain + type: documentation diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md index 9a04810222..ebd7042ba6 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md @@ -1,6 +1,6 @@ --- # User change -title: "Build a Simple PyTorch Model" +title: "Build a simple PyTorch model" weight: 7 # 1 is first, 2 is second, etc. @@ -8,7 +8,7 @@ weight: 7 # 1 is first, 2 is second, etc. layout: "learningpathall" --- -With our environment ready, you can create a simple program to test the setup. +With the development environment ready, you can create a simple PyTorch model to test the setup. This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between. 
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md index 31af1f637f..b364289cb0 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md @@ -8,7 +8,7 @@ weight: 3 layout: "learningpathall" --- -In this section, you will prepare a development environment to compile the model. These instructions have been tested on Ubuntu 22.04, 24.04 and on Windows Subsystem for Linux (WSL). +In this section, you will prepare a development environment to compile a machine learning model. These instructions have been tested on Ubuntu 22.04, 24.04 and on Windows Subsystem for Linux (WSL). ## Install dependencies @@ -27,7 +27,7 @@ Create a Python virtual environment using `python venv`. python3 -m venv $HOME/executorch-venv source $HOME/executorch-venv/bin/activate ``` -The prompt of your terminal now has (executorch) as a prefix to indicate the virtual environment is active. +The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active. ## Install Executorch @@ -40,11 +40,11 @@ git clone https://github.com/pytorch/executorch.git cd executorch ``` -Run a few commands to set up the ExecuTorch internal dependencies. +Run the commands below to set up the ExecuTorch internal dependencies. + ```bash git submodule sync git submodule update --init - ./install_requirements.sh ``` @@ -59,6 +59,8 @@ pkill -f buck ## Next Steps -If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/) +Your next step depends on the hardware you have.
+ +If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/). -If you have the Grove board proceed to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/) \ No newline at end of file +If you don't have the Grove Vision AI Module, you can use the Corstone-300 FVP instead; proceed to [Set up the Corstone-300 FVP](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/). diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md index 42d2d53d59..95c04d7397 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md @@ -10,20 +10,26 @@ layout: "learningpathall" --- ## Corstone-300 FVP Setup for ExecuTorch -Navigate to the Arm examples directory in the ExecuTorch repository. +Navigate to the Arm examples directory in the ExecuTorch repository and configure the Fixed Virtual Platform (FVP). + ```bash cd $HOME/executorch/examples/arm ./setup.sh --i-agree-to-the-contained-eula ``` +Set the environment variables for the FVP. + ```bash export FVP_PATH=${pwd}/ethos-u-scratch/FVP-corstone300/models/Linux64_GCC-9.3 export PATH=$FVP_PATH:$PATH ``` -Test that the setup was successful by running the `run.sh` script. + +Confirm the installation was successful by running the `run.sh` script. ```bash ./run.sh ``` -You will see a number of examples run on the FVP.
This means you can proceed to the next section [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup. \ No newline at end of file +You will see a number of examples run on the FVP. + +This confirms the installation, and you can proceed to the next section [Build a simple PyTorch model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/). \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md index 9d1fbb4c58..c3dbc2b581 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md @@ -8,18 +8,16 @@ weight: 6 # 1 is first, 2 is second, etc. layout: "learningpathall" --- ## Before you begin -Only follow this part of the tutorial if you have the board. Due to its constrained environment, we'll focus on lightweight, optimized tools and models (which will be introduced in the next learning path). +This section requires the Grove Vision AI Module. Due to its constrained environment, you'll focus on lightweight, optimized tools and models. ### Compilers -The examples can be built with [Arm Compiler for Embedded](https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded) or [Arm GNU Toolchain](https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain). +The examples can be built with Arm Compiler for Embedded or Arm GNU Toolchain.
- -Use the install guides to install the compilers on your **host machine**: +Use the install guides to install each compiler on your host machine: - [Arm Compiler for Embedded](/install-guides/armclang/) -- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu) - +- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu/) ## Board Setup @@ -30,7 +28,6 @@ Hardware overview : [Image credits](https://wiki.seeedstudio.com/grove_vision_ai 1. Download and extract the latest Edge Impulse firmware Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/seeed-grove-vision-ai-module-v2.zip). - 2. Connect the Grove - Vision AI Module V2 to your computer using the USB-C cable. ![Board connection](Connect.png) @@ -39,12 +36,10 @@ Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/see Ensure the board is properly connected and recognized by your computer. {{% /notice %}} -3. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device. - -```console -./flash_linux.sh -``` +3. In the extracted Edge Impulse firmware, locate and run the `flash_linux.sh` script to flash your device. + ```console + ./flash_linux.sh + ``` -## Next Steps -1. Go to [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup. +Continue to the next page to build a simple PyTorch model.
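
The patch above describes the model in build-model-8.md as a small feedforward classifier made of two linear layers with ReLU activation in between. A minimal sketch of such a model is shown below; the layer sizes (10 inputs, 16 hidden units, 3 classes) and the class name `SimpleNet` are illustrative assumptions, not values taken from the Learning Path.

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """A small feedforward classifier: two linear layers with ReLU in between."""
    def __init__(self, input_size=10, hidden_size=16, num_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)  # first linear layer
        self.relu = nn.ReLU()                          # non-linearity between the layers
        self.fc2 = nn.Linear(hidden_size, num_classes) # second linear layer (class logits)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = SimpleNet()
logits = model(torch.randn(4, 10))  # batch of 4 random input vectors
print(logits.shape)  # torch.Size([4, 3])
```

A model of this shape is small enough to export through ExecuTorch for a constrained target such as the Corstone-300 FVP.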