diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Connect.png b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Connect.png
new file mode 100644
index 0000000000..6af713b403
Binary files /dev/null and b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Connect.png differ
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md
new file mode 100644
index 0000000000..1e6831cadb
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md
@@ -0,0 +1,15 @@
+---
+title: Overview
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Module Overview
+This module introduces TinyML, which brings machine learning to resource-constrained devices such as microcontrollers. It serves as a starting point for learning how cutting-edge AI technology can be deployed on even the smallest of devices, making edge AI more accessible and efficient.
+
+We'll also cover the setup required on your host machine and target device to enable cross-compilation and ensure smooth integration between them.
+
+## Introduction to TinyML
+TinyML represents a significant shift in how machine learning is deployed. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to run on devices with constrained memory, power, and processing capabilities. TinyML has quickly gained popularity because it enables AI applications to operate in real time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview.png b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview.png
new file mode 100644
index 0000000000..cbcd944107
Binary files /dev/null and b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview.png differ
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md
new file mode 100644
index 0000000000..7cf7bcd240
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md
@@ -0,0 +1,53 @@
+---
+title: Introduction to TinyML on Arm using PyTorch v2.x and ExecuTorch
+
+minutes_to_complete: 40
+
+who_is_this_for: This learning module is tailored for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI. If you are interested in deploying machine learning models on low-power, resource-constrained devices, this course will help you get started using PyTorch v2.x and ExecuTorch on Arm-based platforms.
+
+learning_objectives:
+    - Define TinyML and explain how it differs from traditional machine learning.
+    - Understand the benefits of deploying AI models on Arm-based edge devices.
+    - Select Arm-based devices for TinyML.
+    - Identify real-world use cases demonstrating the impact of TinyML in various industries.
+    - Install and configure a TinyML development environment.
+    - Set up a cross-compilation environment on your host machine.
+    - Apply best practices for achieving optimal performance on constrained edge devices.
+
+
+prerequisites:
+    - Basic knowledge of machine learning concepts.
+    - Understanding of IoT and embedded systems (helpful but not required).
+    - A Linux host machine or VM running Ubuntu 20.04 or higher, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware).
+    - A target device, either physical or emulated with the Corstone-300 FVP. Cortex-M boards are preferred, but you can also use Cortex-A boards such as those based on the Cortex-A7.
+
+
+author_primary: Dominica Abena O. Amanfo
+
+### Tags
+skilllevels: Introductory
+subjects: ML
+armips:
+    - Cortex-A
+    - Cortex-M
+
+operatingsystems:
+    - Linux
+
+tools_software_languages:
+    - Corstone-300 FVP
+    - Grove - Vision AI Module V2
+    - Python
+    - PyTorch v2.x
+    - ExecuTorch
+    - Arm Compute Library
+    - GCC
+    - Edge Impulse
+    - Node.js
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md
new file mode 100644
index 0000000000..3be5790119
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md
@@ -0,0 +1,21 @@
+---
+next_step_guidance: Research which lightweight ML models are suitable for TinyML, and which device best fits a specific project.
+
+recommended_path: /learning-paths/microcontrollers/intro/
+
+
+further_reading:
+    - resource:
+        title: TinyML Brings AI to Smallest Arm Devices
+        link: https://newsroom.arm.com/blog/tinyml
+        type: blog
+
+
+
+# ================================================================================
+# FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 21 # set to always be larger than the content in this path, and one more than 'review'
+title: "Next Steps" # Always the same
+layout: "learningpathall" # All files under learning paths have this same wrapper
+---
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_review.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_review.md
new file mode 100644
index 0000000000..a86edfaac5
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_review.md
@@ -0,0 +1,47 @@
+---
+review:
+    - questions:
+        question: >
+            1. What is TinyML?
+        answers:
+            - "Machine learning models designed to run on large, cloud-based servers."
+            - "Machine learning models designed to run on resource-constrained devices like microcontrollers and edge devices."
+            - "A cloud service for deep learning model deployment."
+            - "A special type of machine learning for virtual reality applications."
+        correct_answer: 2
+        explanation: >
+            TinyML is specifically designed to operate on devices with limited computational resources.
+
+    - questions:
+        question: >
+            2. Which of the following is NOT a benefit of deploying TinyML on Arm devices?
+        answers:
+            - "Enhanced data privacy"
+            - "Low latency"
+            - "High power consumption"
+            - "Cost-effectiveness"
+        correct_answer: 3
+        explanation: >
+            High power consumption is not a benefit; a key advantage of TinyML on Arm devices is the ability to perform tasks with very low power usage.
+
+    - questions:
+        question: >
+            3. Which of the following is an example of TinyML in healthcare?
+        answers:
+            - "Smart sensors for soil moisture monitoring."
+            - "Wearable devices monitoring vital signs and detecting heart arrhythmias."
+            - "Predictive maintenance for factory machines."
+            - "Object detection for smart home cameras."
+        correct_answer: 2
+        explanation: >
+            Wearable devices that monitor vital signs and detect heart arrhythmias show TinyML's ability to perform complex analysis in real time on resource-constrained devices.
+
+
+
+# ================================================================================
+# FIXED, DO NOT MODIFY
+# ================================================================================
+title: "Review" # Always the same title
+weight: 20 # Set to always be larger than the content in this path
+layout: "learningpathall" # All files under learning paths have this same wrapper
+---
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md
new file mode 100644
index 0000000000..cef33546dc
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md
@@ -0,0 +1,34 @@
+---
+title: Real-World Applications of TinyML with Examples of Arm-Based Solutions
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+TinyML is being deployed across many industries, enhancing everyday experiences and enabling groundbreaking solutions. Here are a few examples:
+
+## Healthcare - Wearable Heart Rate Monitors
+- Arm-based microcontrollers, like those in Fitbit devices, run TinyML models to monitor vital signs such as heart rate, detect arrhythmias, and provide real-time feedback.
+- **Example**: The Fitbit Charge 5 smart band uses a 32-bit Arm Cortex-M4 processor.
+
+## Agriculture - Smart Irrigation Systems
+- Arm-powered microcontrollers in smart sensors monitor soil moisture and control water usage. TinyML models process environmental data locally to optimize water distribution.
+- **Example**: OpenAg uses Arm Cortex-M processors to run machine learning models on edge devices, optimizing irrigation based on real-time data.
+
+## Home Automation - Smart Cameras
+- Arm-based processors in smart cameras can detect objects and people, triggering alerts or actions without sending data to the cloud, which saves bandwidth and improves privacy.
+- **Example**: Arlo smart cameras, powered by Arm Cortex processors, perform object detection at the edge, improving performance and energy efficiency.
+
+## Industrial IoT - Predictive Maintenance in Factories (e.g., Siemens Predictive Maintenance)
+- Arm-powered industrial sensors analyze vibration patterns in machinery, running TinyML models to predict when maintenance is needed and prevent breakdowns (a toy sketch of this idea follows below).
+- **Example**: Siemens uses Arm Cortex-A processors in industrial sensors for real-time data analysis, detecting faults before they cause significant downtime, and relies on Arm-based processors for its Industrial Edge computing solutions.
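+
+To make the predictive-maintenance idea concrete, the sketch below shows the flavor of on-device check such a system performs. It is deliberately simplified and entirely hypothetical: the baseline and threshold values are illustrative assumptions, not any vendor's real pipeline, and in practice a trained TinyML model would replace the fixed threshold.
+
+```python
+import math
+
+# Illustrative values only; a real system would learn these from
+# healthy-machine vibration data rather than hard-coding them.
+BASELINE_RMS = 0.12   # assumed RMS amplitude of a healthy machine
+ALERT_FACTOR = 3.0    # flag vibration 3x above the healthy baseline
+
+def rms(window):
+    """Root-mean-square amplitude of one window of accelerometer samples."""
+    return math.sqrt(sum(s * s for s in window) / len(window))
+
+def needs_maintenance(window):
+    """Return True when vibration energy is far above the healthy baseline."""
+    return rms(window) > ALERT_FACTOR * BASELINE_RMS
+
+print(needs_maintenance([0.05, -0.07, 0.06, -0.04]))  # False: normal hum
+print(needs_maintenance([0.90, -1.10, 0.80, -1.00]))  # True: heavy vibration
+```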
+
+## Wildlife Conservation - Smart Camera Traps (e.g., Conservation X Labs)
+- Arm-based smart camera traps can identify animal movements or detect poachers using TinyML models. These energy-efficient devices can operate in remote areas without relying on external power sources.
+- **Example**: Conservation X Labs uses Arm Cortex-M microcontrollers to power camera traps, helping monitor endangered species in the wild.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md
new file mode 100644
index 0000000000..7d4ef0c90f
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md
@@ -0,0 +1,15 @@
+---
+title: Benefits of TinyML for Edge Computing on Arm Devices
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+TinyML brings substantial advantages to edge computing, particularly on Arm's architecture, which is widely used in IoT, mobile devices, and edge AI deployments. Here are some key benefits:
+
+- **Power Efficiency**: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
+- **Low Latency**: Because AI processing happens on-device, there is no need to send data to the cloud, reducing latency and enabling real-time decision-making.
+- **Data Privacy**: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is particularly crucial in healthcare and personal devices.
+- **Cost-Effectiveness**: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services.
+- **Scalability**: With billions of Arm devices on the market, TinyML is well suited for scaling across industries, enabling widespread adoption of AI at the edge.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
new file mode 100644
index 0000000000..8d591f636a
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
@@ -0,0 +1,99 @@
+---
+# User change
+title: "Build a Simple PyTorch Model"
+
+weight: 9 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+
+With your environment ready, create a simple program to test the setup. This example defines a small feedforward neural network for a classification task: two linear layers with a ReLU activation in between.
+
+Create a file called `simple_nn.py` with the following code:
+
+```python
+import torch
+from torch.export import export
+from executorch.exir import to_edge
+
+# Define a simple feedforward neural network
+class SimpleNN(torch.nn.Module):
+    def __init__(self, input_size, hidden_size, output_size):
+        super(SimpleNN, self).__init__()
+        self.fc1 = torch.nn.Linear(input_size, hidden_size)
+        self.relu = torch.nn.ReLU()
+        self.fc2 = torch.nn.Linear(hidden_size, output_size)
+
+    def forward(self, x):
+        out = self.fc1(x)
+        out = self.relu(out)
+        out = self.fc2(out)
+        return out
+
+# Create the model instance
+input_size = 10   # example input feature size
+hidden_size = 5   # hidden layer size
+output_size = 2   # number of output classes
+
+model = SimpleNN(input_size, hidden_size, output_size)
+
+# Example input tensor (batch size 1, input size 10)
+x = torch.randn(1, input_size)
+
+# torch.export: capture the program with the ATen operator set for SimpleNN
+aten_dialect = export(model, (x,))
+
+# to_edge: apply optimizations for edge devices so the model runs efficiently on constrained hardware
+edge_program = to_edge(aten_dialect)
+
+# to_executorch: convert the graph to an ExecuTorch program
+executorch_program = edge_program.to_executorch()
+
+# Save the compiled .pte program
+with open("simple_nn.pte", "wb") as file:
+    file.write(executorch_program.buffer)
+
+print("Model successfully exported to simple_nn.pte")
+```
+
+Run it from your terminal:
+
+```console
+python3 simple_nn.py
+```
+
+If everything runs successfully, the output is:
+
+```bash { output_lines = "1" }
+Model successfully exported to simple_nn.pte
+```
+The model is saved as a `.pte` file, the format ExecuTorch uses to deploy models to the edge.
+
+Next, build the ExecuTorch `executor_runner` so you can run the exported model. From the root of the `executorch` repository, run:
+
+```console
+# Clean and configure the build system
+rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..
+cd ..
+
+# Build the executor_runner target
+cmake --build cmake-out --target executor_runner -j9
+```
+
+You should see output similar to:
+```bash { output_lines = "1" }
+[100%] Built target executor_runner
+```
+
+Now run `executor_runner` with the model:
+```console
+./cmake-out/executor_runner --model_path simple_nn.pte
+```
+
+Because the model is a simple feedforward network, you can expect an output tensor of shape `[1, 2]`:
+
+```bash { output_lines = "1-3" }
+Input tensor shape: [1, 10]
+Output tensor shape: [1, 2]
+Inference output: tensor([[0.5432, -0.3145]])  # values will vary due to random initialization
+```
+
+If the model executes successfully, you will see confirmation messages similar to those above, indicating successful loading, inference, and the output tensor shapes.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
new file mode 100644
index 0000000000..f8cae1b21f
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
@@ -0,0 +1,63 @@
+---
+# User change
+title: "Environment Setup on Host Machine"
+
+weight: 6 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Before you begin
+
+These instructions have been tested on:
+- A GCP Arm-based Tau T2A virtual machine instance running Ubuntu 22.04 LTS.
+- An x86_64 host machine running Ubuntu 24.04.
+- Windows Subsystem for Linux (WSL) on Windows x86_64.
+
+The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.
+
+- Ubuntu 20.04 or higher is required.
+- If you do not have the board, you must use an `x86_64` machine, because the Corstone-300 FVP is not currently available for the Arm architecture.
+- Although ExecuTorch supports Windows through WSL, support there is limited.
+
+
+## Install ExecuTorch
+
+1. Follow the [Setting Up ExecuTorch guide](https://pytorch.org/executorch/stable/getting-started-setup.html) to install it.
+
+2. Activate the `executorch` virtual environment from the installation guide to ensure it is ready for use:
+
+```console
+conda activate executorch
+```
+
+## Install PyTorch
+The latest version requires Python 3.8 or later:
+
+```console
+pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
+```
+
+## Install the Edge Impulse CLI
+1. Create an [Edge Impulse account](https://studio.edgeimpulse.com/signup) if you do not have one.
+
+2. Install the CLI tools in your terminal.
+
+Ensure you have Node.js installed:
+
+```console
+node -v
+```
+Install the Edge Impulse CLI:
+```console
+npm install -g edge-impulse-cli
+```
+3. Install the `screen` utility:
+```console
+sudo apt install screen
+```
+
+## Next Steps
+1. If you don't have access to the physical board, go to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
+2. If you have access to the board, go to [Setup on Grove - Vision AI Module V2](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
new file mode 100644
index 0000000000..4e569e9f0a
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
@@ -0,0 +1,17 @@
+---
+# User change
+title: "Environment Setup Corstone-300 FVP"
+
+weight: 7 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+
+### Corstone-300 FVP Setup for ExecuTorch
+
+To install and set up the Corstone-300 FVP on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow that tutorial up to and including the **"Install the TOSA reference model"** section; it is the last part of the tutorial you need.
+
+
+## Next Steps
+1. Go to [Build a Simple PyTorch Model](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md
new file mode 100644
index 0000000000..ce0953f67c
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md
@@ -0,0 +1,23 @@
+---
+title: Examples of Arm-based devices and applications
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Raspberry Pi 4 & 5
+
+These affordable single-board computers are excellent for prototyping TinyML projects.
+They are commonly used for machine learning at the edge, in applications such as object detection and voice recognition for home automation.
+
+## NXP i.MX RT Microcontrollers
+These low-power microcontrollers can handle complex TinyML tasks while maintaining energy efficiency, making them ideal for applications like wearable healthcare devices and environmental sensors.
+
+## STM32 Microcontrollers
+Used in industrial IoT applications for predictive maintenance, these energy-efficient microcontrollers can run TinyML models for real-time anomaly detection in factory machinery.
+
+## Arduino Nano 33 BLE Sense
+This microcontroller, equipped with a suite of sensors, supports TinyML and is ideal for small-scale IoT applications, such as detecting environmental changes and movement patterns.
+
+## Edge Impulse
+This platform offers a suite of tools that enables developers to build and deploy TinyML applications on Arm-based devices. It supports devices such as the Raspberry Pi, Arduino boards, and STMicroelectronics boards.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
new file mode 100644
index 0000000000..bd8b4e268e
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
@@ -0,0 +1,57 @@
+---
+# User change
+title: "Setup on Grove - Vision AI Module V2"
+
+weight: 8 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Before you begin
+Only follow this part of the tutorial if you have the board. Because of its constrained environment, we'll focus on lightweight, optimized tools and models, which are introduced in the next learning path.
+
+
+### Compilers
+
+The examples can be built with [Arm Compiler for Embedded](https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded) or [Arm GNU Toolchain](https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain).
+
+
+Use the install guides to install the compilers on your **host machine**:
+- [Arm Compiler for Embedded](/install-guides/armclang/)
+- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu)
+
+
+## Board Setup
+
+![Hardware Overview #center](Overview.png)
+
+Hardware overview: [Image credits](https://wiki.seeedstudio.com/grove_vision_ai_v2/).
+
+1. Download and extract the latest [Edge Impulse firmware](https://cdn.edgeimpulse.com/firmware/seeed-grove-vision-ai-module-v2.zip) for the Grove - Vision AI Module V2.
+
+2. Connect the Grove - Vision AI Module V2 to your computer using the USB-C cable.
+
+![Board connection](Connect.png)
+
+3. In the extracted Edge Impulse firmware, locate and run the installation script to flash your device:
+
+```console
+./flash_linux.sh
+```
+
+4. Configure Edge Impulse for the board. In your terminal, run:
+
+```console
+edge-impulse-daemon
+```
+Follow the prompts to log in.
+
+5. If successful, your Grove - Vision AI Module V2 appears under 'Devices' in Edge Impulse.
+
+
+## Next Steps
+1. Go to [Build a Simple PyTorch Model](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md
new file mode 100644
index 0000000000..c354f4640c
--- /dev/null
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md
@@ -0,0 +1,18 @@
+---
+title: Troubleshooting and Best Practices
+weight: 10
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+## Troubleshooting
+- If you encounter permission issues, try running the commands with `sudo`.
+- Ensure your Grove - Vision AI Module V2 is properly connected and recognized by your computer.
+- If the Edge Impulse CLI fails to detect your device, unplug the USB cable, hold down the **Boot button**, and plug the cable back in, releasing the button once the device is reconnected.
+
+## Best Practices
+- Always cross-compile your code on the host machine to ensure compatibility with the target Arm device.
+- Use model quantization techniques to optimize performance on constrained devices like the Grove - Vision AI Module V2 (a minimal sketch follows below).
+- Regularly update your development environment and tools to benefit from the latest improvements in TinyML and edge AI technologies.
+
+You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch neural network.
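+
+As a concrete illustration of the quantization best practice above, here is a minimal sketch using PyTorch's post-training dynamic quantization on the `SimpleNN` model from earlier in this path. It assumes `SimpleNN` is importable from `simple_nn.py` (as written, importing that file also re-runs its export code, so you may want to factor the class out first); ExecuTorch also provides its own quantization flows, which this sketch does not cover.
+
+```python
+import torch
+from simple_nn import SimpleNN  # assumes the class from the earlier example is importable
+
+model = SimpleNN(input_size=10, hidden_size=5, output_size=2)
+model.eval()
+
+# Post-training dynamic quantization: Linear-layer weights are stored as
+# int8, and activations are quantized on the fly at inference time.
+quantized = torch.ao.quantization.quantize_dynamic(
+    model, {torch.nn.Linear}, dtype=torch.qint8
+)
+
+x = torch.randn(1, 10)
+print(quantized(x))  # same [1, 2] output shape, with a smaller weight footprint
+```
+
+Dynamic quantization is the quickest variant to try because it needs no calibration data; static quantization can reduce activation memory further at the cost of a calibration step.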