---
title: Overview
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## TinyML

This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across devices.

This section provides an overview of the domain with real-life use cases and available devices.

TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to run on resource-constrained devices with limited memory, power, and processing capability.

TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.

### Benefits and applications

The benefits of TinyML align well with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments.

Here are some of the key benefits of TinyML on Arm:


- **Power Efficiency**: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.

- **Low Latency**: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.

- **Data Privacy**: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.

- **Cost-Effective**: Arm devices, which are cost-effective and scalable, can now handle sophisticated Machine Learning tasks, reducing the need for expensive hardware or cloud services.

- **Scalability**: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.

TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. The table below shows some examples of TinyML applications.

| Area | Device, Arm IP | Description |
| ------ | ------- | ------------ |
| Healthcare | Fitbit Charge 5, Cortex-M | Monitors vital signs such as heart rate, detects arrhythmias, and provides real-time feedback. |
| Agriculture | OpenAg, Cortex-M | Monitors soil moisture and optimizes water usage. |
| Home automation | Arlo, Cortex-A | Detects objects and people, and triggers alerts or actions while saving bandwidth and improving privacy. |
| Industrial IoT | Siemens, Cortex-A | Analyzes vibration patterns in machinery to predict when maintenance is needed and prevent breakdowns. |
| Wildlife conservation | Conservation X, Cortex-M | Identifies animal movements or detects poachers in remote areas without relying on external power sources. |

### Examples of Arm-based devices

There are many Arm-based devices that you can use for TinyML projects. Some of these are detailed below, but the list is not exhaustive.

#### Raspberry Pi 4 and 5

Raspberry Pi single-board computers are excellent for prototyping TinyML projects. They are commonly used for edge Machine Learning applications such as object detection and voice recognition in home automation.

#### NXP i.MX RT microcontrollers

NXP i.MX RT microcontrollers are low-power microcontrollers that can handle complex TinyML tasks while maintaining energy efficiency. This makes them ideal for applications like wearable healthcare devices and environmental sensors.

#### STM32 microcontrollers

STM32 microcontrollers from STMicroelectronics are based on Arm Cortex-M cores and are widely used for low-power TinyML workloads.

In addition to hardware, there are software platforms that can help you build TinyML applications.

Edge Impulse offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.

Now that you have an overview of the subject, you can move on to the next section where you will set up an environment on your host machine.

minutes_to_complete: 40

who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML) who want to explore its potential using PyTorch and ExecuTorch.

learning_objectives:
- Describe what differentiates TinyML from other AI domains.
- Describe the benefits of deploying AI models on Arm-based edge devices.
- Identify suitable Arm-based devices for TinyML applications.
- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 FVP.

prerequisites:
- Basic knowledge of Machine Learning concepts.
- A Linux host machine or VM running Ubuntu 22.04 or higher.
- A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) or an Arm license to run the Corstone-320 Fixed Virtual Platform (FVP).


author_primary: Dominica Abena O. Amanfo

## Define a small neural network using Python

With your development environment set up, you can create a simple PyTorch model to test the setup.

This example defines a small feedforward neural network for a classification task. The model consists of two linear layers with ReLU activation in between.

Use a text editor to create a file named `simple_nn.py` with the following code:
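
The snippet below is a minimal sketch of such a model; the class name, layer sizes, and the test input at the bottom are illustrative assumptions rather than fixed values:

```python
# simple_nn.py -- minimal sketch of a two-layer feedforward classifier (sizes are illustrative)
import torch
import torch.nn as nn


class SimpleNN(nn.Module):
    def __init__(self, input_size: int = 10, hidden_size: int = 32, num_classes: int = 2):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)    # first linear layer
        self.relu = nn.ReLU()                            # ReLU activation between the two layers
        self.fc2 = nn.Linear(hidden_size, num_classes)   # second linear layer producing class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.relu(self.fc1(x)))


if __name__ == "__main__":
    # Quick sanity check: run a forward pass on a random input batch
    model = SimpleNN().eval()
    print(model(torch.randn(1, 10)))
```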

Compile the model with the ExecuTorch Ahead-of-Time (AoT) Arm compiler to produce a `.pte` file, using the Ethos-U85 system configuration:

```bash
python -m examples.arm.aot_arm_compiler --model_name=examples/arm/simple_nn.py \
--system_config=Ethos_U85_SYS_DRAM_Mid --memory_mode=Sram_Only
```
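
For context, the sketch below outlines the kind of export flow the AoT compiler script automates: capture the PyTorch model, lower it to the Edge dialect, and serialize it as a `.pte` program. It omits quantization and Ethos-U delegation, the `SimpleNN` import refers to the illustrative model above, and exact module paths can differ between ExecuTorch releases:

```python
# Simplified sketch of a generic ExecuTorch export flow (APIs can vary between releases)
import torch
from torch.export import export
from executorch.exir import to_edge

from simple_nn import SimpleNN  # the illustrative model defined earlier

model = SimpleNN().eval()
example_inputs = (torch.randn(1, 10),)        # example input shape is an assumption

aten_program = export(model, example_inputs)  # capture the model as an exported graph
edge_program = to_edge(aten_program)          # lower the graph to the Edge dialect
et_program = edge_program.to_executorch()     # convert to the ExecuTorch program format

with open("simple_nn.pte", "wb") as f:
    f.write(et_program.buffer)                # serialized program consumed by the runtime
```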

From the Arm Examples directory, you can build an embedded Arm runner with the `.pte` included. This allows you to optimize the performance of your model and ensures compatibility with the CPU kernels on the FVP. Finally, generate the executable `arm_executor_runner`.

```bash
cd $HOME/executorch/examples/arm/executor_runner
cmake --build $ET_HOME/examples/arm/executor_runner/cmake-out --parallel -- arm_executor_runner

```

Now run the model on the Corstone-320 with the following command:

```bash
FVP_Corstone_SSE-320 \
```

The output includes lines showing the model being loaded:

```
I [executorch:arm_executor_runner.cpp:412] Model in 0x70000000
I [executorch:arm_executor_runner.cpp:414] Model PTE file loaded. Size: 3360 bytes.
```

You have now set up your environment for TinyML development on Arm, and tested a small neural network with PyTorch and ExecuTorch.
---

In this section, you will prepare a development environment to compile a Machine Learning model. These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).

## Install dependencies

Python3 is required and comes installed with Ubuntu, but some additional packages are needed:

```bash
sudo apt update
sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
```

## Create a virtual environment

Create a Python virtual environment using `python venv`:

```console
python3 -m venv $HOME/executorch-venv
```

Clone the ExecuTorch repository:

```bash
git clone https://github.com/pytorch/executorch.git
cd executorch
```

Run the commands below to set up the ExecuTorch internal dependencies:

```bash
git submodule sync
git submodule update --init
```

{{% notice Note %}}
If you run into an issue of `buck` running in a stale environment, reset it by running the following instructions:

```bash
ps aux | grep buck
```
{{% /notice %}}

## Next Steps

Your next steps depend on your hardware.

If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2 Learning Path](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).

If you do not have the Grove Vision AI Module, you can use the Corstone-320 FVP instead. See the Learning Path [Set up the Corstone-320 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).

## Corstone-320 FVP Setup for ExecuTorch

Navigate to the Arm examples directory in the ExecuTorch repository:

```bash
cd $HOME/executorch/examples/arm
./setup.sh --i-agree-to-the-contained-eula
```

After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.

```bash
source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
```

Test that the setup was successful by running the `run.sh` script for Ethos-U85, which is the target device for Corstone-320:

```bash
./examples/arm/run.sh --target=ethos-u85-256
```

You will see a number of examples run on the FVP.

This confirms the installation, so you can now proceed to the Learning Path [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/).
---
## Before you begin

This section requires the Grove Vision AI Module. Due to its constrained environment, we will focus on lightweight, optimized tools and models.

### Compilers

The examples can be built with Arm Compiler for Embedded or Arm GNU Toolchain.

Use the install guides to install each compiler on your host machine:
- [Arm Compiler for Embedded](/install-guides/armclang/).
- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu/).

## Board Setup

![Hardware Overview #center](Overview.png)

Hardware overview: [Image credits](https://wiki.seeedstudio.com/grove_vision_ai_v2/).

1. Download and extract the latest Edge Impulse firmware for the Grove Vision AI Module V2: [Edge Impulse firmware](https://cdn.edgeimpulse.com/firmware/seeed-grove-vision-ai-module-v2.zip).

2. Connect the Grove Vision AI Module V2 to your computer using the USB-C cable.

![Board connection](Connect.png)

Ensure the board is properly connected and recognized by your computer, then flash the firmware:
```console
./flash_linux.sh
```
You have now set up the board successfully. In the next section, you will learn how to use the TinyML functionality in the ExecuTorch repository with a hardware emulator.

{{% notice Note %}}
In the next Learning Path in this series, you will incorporate the board into the workflow, running workloads on real hardware.
{{% /notice %}}

Continue to the next page to build a simple PyTorch model.