@@ -0,0 +1,15 @@
---
title: Overview
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Module Overview
This module introduces TinyML, which applies machine learning to devices with limited resources such as microcontrollers. It serves as a starting point for learning how cutting-edge AI technologies can be deployed on even the smallest of devices, making Edge AI more accessible and efficient.

Additionally, we'll cover the necessary setup on your host machine and target device to facilitate cross-compilation and ensure smooth integration across all devices.

## Introduction to TinyML
TinyML represents a significant shift in how we approach machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources, such as constrained memory, power, and processing capabilities. TinyML has quickly gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
@@ -0,0 +1,53 @@
---
title: Introduction to TinyML on Arm using PyTorch v2.x and ExecuTorch

minutes_to_complete: 40

who_is_this_for: This learning module is tailored for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI. If you want to deploy machine learning models on low-power, resource-constrained devices, this course will help you get started using PyTorch v2.x and ExecuTorch on Arm-based platforms.

learning_objectives:
- Describe TinyML and how it differs from traditional machine learning.
- Understand the benefits of deploying AI models on Arm-based edge devices.
- Select Arm-based devices for TinyML.
- Identify real-world use cases demonstrating the impact of TinyML in various industries.
- Install and configure a TinyML development environment.
- Set up a cross-compilation environment on your host machine.
- Apply best practices for ensuring optimal performance on constrained edge devices.


prerequisites:
- Basic knowledge of machine learning concepts.
- Understanding of IoT and embedded systems (helpful but not required).
- A Linux host machine or VM running Ubuntu 20.04 or higher, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware).
- A target device, either physical or emulated with the Corstone-300 FVP; Cortex-M boards are preferred, but Cortex-A7 boards can also be used.


author_primary: Dominica Abena O. Amanfo

### Tags
skilllevels: Introductory
subjects: ML
armips:
- Cortex-A
- Cortex-M

operatingsystems:
- Linux

tools_software_languages:
- Corstone 300 FVP
- Grove - Vision AI Module V2
- Python
- PyTorch v2.x
- ExecuTorch
- Arm Compute Library
- GCC
- Edge Impulse
- Node.js

### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1 # _index.md always has weight of 1 to order correctly
layout: "learningpathall" # All files under learning paths have this same wrapper
learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---
@@ -0,0 +1,21 @@
---
next_step_guidance: Research which lightweight ML models can be used for TinyML, and what device is best for a specific project.

recommended_path: /learning-paths/microcontrollers/intro/


further_reading:
- resource:
title: TinyML Brings AI to Smallest Arm Devices
link: https://newsroom.arm.com/blog/tinyml
type: blog



# ================================================================================
# FIXED, DO NOT MODIFY
# ================================================================================
weight: 21 # set to always be larger than the content in this path, and one more than 'review'
title: "Next Steps" # Always the same
layout: "learningpathall" # All files under learning paths have this same wrapper
---
@@ -0,0 +1,47 @@
---
review:
- questions:
question: >
1. What is TinyML?
answers:
- "Machine learning models designed to run on large, cloud-based servers."
- "Machine learning models designed to run on resource-constrained devices like microcontrollers and edge devices."
- A cloud service for deep learning model deployment.
- A special type of machine learning for virtual reality applications
correct_answer: 2
explanation: >
TinyML is specifically designed to operate on devices with limited computational resources.

- questions:
question: >
2. Which of the following is NOT a benefit of deploying TinyML on Arm devices?
answers:
- "Enhanced data privacy"
- "Low latency"
- High power consumption
- Cost-effectiveness
correct_answer: 3
explanation: >
High power consumption is not a benefit; one of the key advantages of TinyML on Arm devices is the ability to perform tasks with very low power usage.

- questions:
question: >
3. Which of the following is an example of TinyML in healthcare?
answers:
- Smart sensors for soil moisture monitoring.
- Wearable devices monitoring vital signs and detecting heart arrhythmias.
- Predictive maintenance for factory machines.
- Object detection for smart home cameras.
correct_answer: 2
explanation: >
Wearable devices that monitor vital signs and detect heart arrhythmias show TinyML's ability to perform complex analyses in real time on resource-constrained devices.



# ================================================================================
# FIXED, DO NOT MODIFY
# ================================================================================
title: "Review" # Always the same title
weight: 20 # Set to always be larger than the content in this path
layout: "learningpathall" # All files under learning paths have this same wrapper
---
@@ -0,0 +1,34 @@
---
title: Real-World Applications of TinyML with Examples of Arm-Based Solutions
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

TinyML is being deployed across various industries, enhancing everyday experiences and enabling groundbreaking solutions. Here are a few examples:

## Healthcare - Wearable Heart Rate Monitors
- Arm-based microcontrollers like those in Fitbit devices run TinyML models to monitor vital signs such as heart rate, detect arrhythmias, and provide real-time feedback.

- **Example**: The Fitbit Charge 5 smart band uses a 32-bit Arm Cortex-M4 processor.

## Agriculture - Smart Irrigation Systems
- Arm-powered microcontrollers in smart sensors help monitor soil moisture and control water usage. TinyML models process environmental data locally to optimize water distribution.
- **Example**: OpenAg uses Arm Cortex-M processors to run machine learning models on edge devices, optimizing irrigation based on real-time data.

## Home Automation - Smart Cameras
- Arm-based processors in smart cameras can detect objects and people, triggering alerts or actions without needing to send data to the cloud, saving bandwidth and improving privacy.
- **Example**: Arlo smart cameras, powered by Arm Cortex processors, perform object detection at the edge, enhancing performance and energy efficiency.

## Industrial IoT - Predictive Maintenance in Factories (e.g., Siemens Predictive Maintenance)
- Arm-powered industrial sensors analyze vibration patterns in machinery, running TinyML models to predict when maintenance is needed and prevent breakdowns.
- **Example**: Siemens utilizes Arm Cortex-A processors in industrial sensors for real-time data analysis, detecting faults before they cause significant downtime. They rely on Arm-based processors for their Industrial Edge computing solutions.

## Wildlife Conservation - Smart Camera Traps (e.g., Conservation X Labs)
- Arm-based smart camera traps can identify animal movements or detect poachers using TinyML models. These energy-efficient devices can operate in remote areas without relying on external power sources.
- **Example**: Conservation X Labs uses Arm Cortex-M microcontrollers to power camera traps, helping monitor endangered species in the wild.




@@ -0,0 +1,15 @@
---
title: Benefits of TinyML for Edge Computing on Arm Devices
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

TinyML offers significant advantages for edge computing, particularly when paired with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments. Here are some key benefits:

- Power Efficiency: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
- Low Latency: Because the AI processing happens on-device, there's no need to send data to the cloud, reducing latency and enabling real-time decision-making.
- Data Privacy: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is particularly crucial in healthcare and personal devices.
- Cost-Effective: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services.
- Scalability: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.
@@ -0,0 +1,99 @@
---
# User change
title: "Build a Simple PyTorch Model"

weight: 9 # 1 is first, 2 is second, etc.

# Do not modify these elements
layout: "learningpathall"
---

With the environment ready, we will create a simple program to test the setup. This example defines a simple feedforward neural network for a classification task. The model consists of two linear layers with a ReLU activation in between. Create a file called `simple_nn.py` with the following code:

```python
import torch
from torch.export import export
from executorch.exir import to_edge

# Define a simple Feedforward Neural Network
class SimpleNN(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleNN, self).__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Create the model instance
input_size = 10 # example input features size
hidden_size = 5 # hidden layer size
output_size = 2 # number of output classes

model = SimpleNN(input_size, hidden_size, output_size)

# Example input tensor (batch size 1, input size 10)
x = torch.randn(1, input_size)

# torch.export: Defines the program with the ATen operator set for SimpleNN.
aten_dialect = export(model, (x,))

# to_edge: Make optimizations for edge devices. This ensures the model runs efficiently on constrained hardware.
edge_program = to_edge(aten_dialect)

# to_executorch: Convert the graph to an ExecuTorch program
executorch_program = edge_program.to_executorch()

# Save the compiled .pte program
with open("simple_nn.pte", "wb") as file:
    file.write(executorch_program.buffer)

print("Model successfully exported to simple_nn.pte")
```

Run it from your terminal:

```console
python3 simple_nn.py
```

If everything runs successfully, the output will be:
```bash { output_lines = "1" }
Model successfully exported to simple_nn.pte
```
Finally, the model is saved as a .pte file, which is the format used by ExecuTorch for deploying models to the edge.
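
If you want to sanity-check the exported program from Python before building the native runner, you can load the `.pte` file with ExecuTorch's Python runtime bindings. This is a minimal sketch, assuming your ExecuTorch installation provides the `executorch.runtime` module:

```python
import torch
from executorch.runtime import Runtime  # assumes the Python runtime bindings are available in your install

# Load the exported program and run a single inference on a random input
runtime = Runtime.get()
program = runtime.load_program("simple_nn.pte")
method = program.load_method("forward")
outputs = method.execute([torch.randn(1, 10)])  # same input shape used at export time

print("Output tensor shape:", outputs[0].shape)  # expected: torch.Size([1, 2])
```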

Now, build the ExecuTorch `executor_runner`. From the root of your ExecuTorch repository, run:

```console
# Clean and configure the build system
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)

# Build the executor_runner target
cmake --build cmake-out --target executor_runner -j9
```

You should see an output similar to:
```bash { output_lines = "1" }
[100%] Built target executor_runner
```

Now, run `executor_runner` with the model:
```console
./cmake-out/executor_runner --model_path simple_nn.pte
```

Expected output: since the model is a simple feedforward network, you can expect an output tensor of shape [1, 2].

```bash { output_lines = "1-3" }
Input tensor shape: [1, 10]
Output tensor shape: [1, 2]
Inference output: tensor([[0.5432, -0.3145]]) #will vary due to random initialization
```

If the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.

@@ -0,0 +1,63 @@
---
# User change
title: "Environment Setup on Host Machine"

weight: 6 # 1 is first, 2 is second, etc.

# Do not modify these elements
layout: "learningpathall"
---
## Before you begin

These instructions have been tested on:
- A GCP Arm-based Tau T2A virtual machine instance running Ubuntu 22.04 LTS.
- An x86_64 host machine running Ubuntu 24.04.
- Windows Subsystem for Linux (WSL) on Windows x86_64.

The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.

- The Ubuntu version should be 20.04 or higher.
- If you do not have the board, the `x86_64` architecture must be used because the Corstone-300 FVP is not currently available for the Arm architecture.
- Although ExecuTorch supports Windows via WSL, resources there are limited.


## Install ExecuTorch

1. Follow the [Setting Up ExecuTorch guide](https://pytorch.org/executorch/stable/getting-started-setup.html) to install it.

2. Activate the `executorch` virtual environment from the installation guide to ensure it is ready for use:

```console
conda activate executorch
```

## Install PyTorch
The latest version requires Python 3.8 or later.

```console
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```
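
To confirm the installation, you can print the installed version from Python:

```python
import torch

# Print the installed PyTorch version to confirm the install succeeded
print(torch.__version__)
```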

## Install Edge Impulse CLI
1. Create an [Edge Impulse account](https://studio.edgeimpulse.com/signup) if you do not have one.

2. Install the CLI tools from your terminal.

Ensure you have Node.js installed:

```console
node -v
```
Install the Edge Impulse CLI:
```console
npm install -g edge-impulse-cli
```
3. Install `screen`:
```console
sudo apt install screen
```

## Next Steps
1. If you don't have access to the physical board: Go to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
2. If you have access to the board: Go to [Setup on Grove - Vision AI Module V2](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
@@ -0,0 +1,17 @@
---
# User change
title: "Environment Setup Corstone-300 FVP"

weight: 7 # 1 is first, 2 is second, etc.

# Do not modify these elements
layout: "learningpathall"
---

### Corstone-300 FVP Setup for ExecuTorch

To install and set up the Corstone-300 FVP on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow that tutorial up to the **"Install the TOSA reference model"** section, which is the last step you need to complete from it.


## Next Steps
1. Go to [Build a Simple PyTorch Model](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.
@@ -0,0 +1,23 @@
---
title: Examples of Arm-based devices and applications
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Raspberry Pi 4 & 5

These affordable single-board computers are excellent for prototyping TinyML projects. They are commonly used for machine learning at the edge, such as object detection and voice recognition for home automation.

## NXP i.MX RT Microcontrollers
These are low-power microcontrollers that can handle complex TinyML tasks while maintaining energy efficiency, making them ideal for applications like wearable healthcare devices and environmental sensors.

## STM32 Microcontrollers
Used in industrial IoT applications for predictive maintenance, these microcontrollers are energy-efficient and capable of running TinyML models for real-time anomaly detection in factory machinery.

## Arduino Nano 33 BLE Sense
This microcontroller, equipped with a suite of sensors, supports TinyML and is ideal for small-scale IoT applications, such as detecting environmental changes and movement patterns.

## Edge Impulse
This platform offers a suite of tools that enables developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.