From e8f5e49113e2f782b493408f1cf16eb1a05a1cdd Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Fri, 11 Oct 2024 21:55:04 +0000
Subject: [PATCH] review of TinyML Learning Path

---
 .../Overview-1.md                           |  42 ++++++-
 .../introduction-to-tinyml-on-arm/_index.md |  22 ++--
 .../applications-4.md                       |  34 ------
 .../benefits-3.md                           |  49 +++++++-
 .../build-model-8.md                        |  52 +++++---
 .../env-setup-5.md                          | 111 ++++++++++++++----
 .../env-setup-6-FVP.md                      |   4 +-
 .../examples-2.md                           |  23 ----
 .../setup-7-Grove.md                        |   4 +-
 .../troubleshooting-6.md                    |   2 +-
 10 files changed, 219 insertions(+), 124 deletions(-)
 delete mode 100644 content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md
 delete mode 100644 content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md

diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md
index 1e6831cadb..683e2ae6ca 100644
--- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md
@@ -1,15 +1,47 @@
 ---
-title: Overview
+title: Introduction to TinyML
 weight: 2
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+TinyML represents a significant shift in machine learning deployment.
+
+Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources: constrained memory, low power, and limited processing capability.
+
+TinyML has gained popularity because it enables AI applications to operate in real time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline.
+
+This shift opens up new possibilities for creating smarter and more efficient embedded systems.
+
 ## Module Overview
 
-This session delves into TinyML, which applies machine learning to devices with limited resources like microcontrollers. This module serves as a starting point for learning how cutting-edge AI technologies may be put on even the smallest of devices, making Edge AI more accessible and efficient.
-Additionally, we'll cover the necessary setup on your host machine and target device to facilitate cross-compilation and ensure smooth integration across all devices.
+This Learning Path is about TinyML, applying machine learning to devices with limited resources like microcontrollers. It serves as a starting point for learning how cutting-edge AI technologies can be deployed on even the smallest of devices, making Edge AI more accessible and efficient.
+
+You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across all devices.
+
+## Examples of Arm-based devices and applications
+
+There are many devices you can use for TinyML projects. Some of them are listed below.
+
+### Raspberry Pi 4 and 5
+
+Raspberry Pi single-board computers are excellent for prototyping TinyML projects. They are commonly used for machine learning projects at the edge, such as object detection and voice recognition for home automation.
+
+### NXP i.MX RT microcontrollers
+
+NXP i.MX RT microcontrollers are low-power microcontrollers that can handle complex TinyML tasks while maintaining energy efficiency, making them ideal for applications like wearable healthcare devices and environmental sensors.
+
+### STM32 microcontrollers
+
+STM32 microcontrollers are used in industrial IoT applications for predictive maintenance. These microcontrollers are energy-efficient and capable of running TinyML models for real-time anomaly detection in factory machinery.
+
+### Arduino Nano 33 BLE Sense
+
+The Arduino Nano, equipped with a suite of sensors, supports TinyML and is ideal for small-scale IoT applications, such as detecting environmental changes and movement patterns.
+
+### Edge Impulse
+
+In addition to hardware, there are software platforms that can help you build TinyML applications.
 
-## Introduction to TinyML
-TinyML represents a significant shift in how we approach machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources, such as constrained memory, power, and processing capabilities. TinyML has quickly gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
+The Edge Impulse platform offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md
index 7cf7bcd240..3305cb70c1 100644
--- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md
@@ -1,15 +1,15 @@
 ---
-title: Introduction to TinyML on Arm using PyTorch v2.x and Executorch
+title: Introduction to TinyML on Arm using PyTorch and ExecuTorch
 
 minutes_to_complete: 40
 
-who_is_this_for: This learning module is tailored for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI.
If you have an interest in deploying machine learning models on low-power, resource-constrained devices, this course will help you get started using PyTorch v2.x and Executorch on Arm-based platforms. +who_is_this_for: This is an introductory topic for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI. You will learn how to get started using PyTorch and ExecuTorch for TinyML. learning_objectives: - Identify TinyML and how it's different from the AI you might be used to. - Understand the benefits of deploying AI models on Arm-based edge devices. - Select Arm-based devices for TinyML. - - Identify real-world use cases demonstrating the impact of TinyML in various industries. + - Identify real-world use cases demonstrating the impact of TinyML. - Install and configure a TinyML development environment. - Set up a cross-compilation environment on your host machine. - Perform best practices for ensuring optimal performance on constrained edge devices. @@ -17,10 +17,8 @@ learning_objectives: prerequisites: - Basic knowledge of machine learning concepts. - - Understanding of IoT and embedded systems (helpful but not required). - - A Linux host machine or VM running Ubuntu 20.04 or higher, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware) - - Target device, phyisical or using the or Corstone-300 FVP, preferably Cortex-M boards but you can use Cortex-A7 boards as well. - + - Understanding of IoT and embedded systems. + - A Linux host machine or VM running Ubuntu 22.04 or higher. author_primary: Dominica Abena O. 
Amanfo @@ -35,15 +33,15 @@ operatingsystems: - Linux tools_software_languages: - - Corstone 300 FVP - - Grove - Vision AI Module V2 + - Arm Virtual Hardware + - Fixed Virtual Platform - Python - - PyTorch v2.x - - Executorch + - PyTorch + - ExecuTorch - Arm Compute Library - GCC - Edge Impulse - - Nodejs + - Node.js ### FIXED, DO NOT MODIFY # ================================================================================ diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md deleted file mode 100644 index cef33546dc..0000000000 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/applications-4.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Real-World Applications of TinyML with Examples of Arm-Based Solutions -weight: 5 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -TinyML is being deployed across various industries, enhancing everyday experiences and enabling groundbreaking solutions. Here are a few examples: - -## Healthcare - Wearable Heart Rate Monitors -- Arm-based microcontrollers like those in Fitbit devices run TinyML models to monitor vital signs such as heart rate, detect arrhythmias, and provide real-time feedback. - -- **Example**: Fitbit Charge 5 smart band uses the Arm 32-bit Cortex-M4 processor. - -## Agriculture - Smart Irrigation Systems -- Arm-powered microcontrollers in smart sensors help monitor soil moisture and control water usage. TinyML models process environmental data locally to optimize water distribution. -- **Example**: OpenAg uses Arm Cortex-M processors to run machine learning models on edge devices, optimizing irrigation based on real-time data. 
- -## Home Automation - Smart Cameras -- Arm-based processors in smart cameras can detect objects and people, triggering alerts or actions without needing to send data to the cloud, saving bandwidth and improving privacy. -- **Example**: Arlo smart cameras, powered by Arm Cortex processors, perform object detection at the edge, enhancing performance and energy efficiency. - -## Industrial IoT - Predictive Maintenance in Factories (e.g., Siemens Predictive Maintenance) -- Arm-powered industrial sensors analyze vibration patterns in machinery, running TinyML models to predict when maintenance is needed and prevent breakdowns. -- **Example**: Siemens utilizes Arm Cortex-A processors in industrial sensors for real-time data analysis, detecting faults before they cause significant downtime. They rely on Arm-based processors for their Industrial Edge computing solutions. - -## Wildlife Conservation - Smart Camera Traps (e.g., Conservation X Labs) -- Arm-based smart camera traps can identify animal movements or detect poachers using TinyML models. These energy-efficient devices can operate in remote areas without relying on external power sources. -- **Example**: Conservation X Labs uses Arm Cortex-M microcontrollers to power camera traps, helping monitor endangered species in the wild. 
- - - - diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md index 7d4ef0c90f..066a090030 100644 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md +++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/benefits-3.md @@ -1,15 +1,58 @@ --- -title: Benefits of TinyML for Edge Computing on Arm Devices -weight: 4 +title: Benefits and application of TinyML for Edge Computing +weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -The advantages of TinyML for edge computing on Arm devices are vast, particularly when paired with Arm's architecture, which is widely used in IoT, mobile devices, and edge AI deployments. Here are some key benefits: +## Benefits and applications + +The advantages of TinyML match up well with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments. + +Here are some key benefits of TinyML on Arm: - Power Efficiency: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones. + - Low Latency: Because the AI processing happens on-device, there's no need to send data to the cloud, reducing latency and enabling real-time decision-making. + - Data Privacy: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is particularly crucial in healthcare and personal devices. + - Cost-Effective: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services. + - Scalability: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge. 
+ +TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. + +Here are a few examples of TinyML applications: + +### Healthcare - Wearable Heart Rate Monitors + +- Arm-based microcontrollers like those in Fitbit devices run TinyML models to monitor vital signs such as heart rate, detect arrhythmias, and provide real-time feedback. + +- **Example**: Fitbit Charge 5 smart band uses the Arm 32-bit Cortex-M4 processor. + +### Agriculture - Smart Irrigation Systems + +- Arm-powered microcontrollers in smart sensors help monitor soil moisture and control water usage. TinyML models process environmental data locally to optimize water distribution. + +- **Example**: OpenAg uses Arm Cortex-M processors to run machine learning models on edge devices, optimizing irrigation based on real-time data. + +### Home Automation - Smart Cameras + +- Arm-based processors in smart cameras can detect objects and people, triggering alerts or actions without needing to send data to the cloud, saving bandwidth and improving privacy. + +- **Example**: Arlo smart cameras, powered by Arm Cortex processors, perform object detection at the edge, enhancing performance and energy efficiency. + +### Industrial IoT - Predictive Maintenance in Factories + +- Arm-powered industrial sensors analyze vibration patterns in machinery, running TinyML models to predict when maintenance is needed and prevent breakdowns. + +- **Example**: Siemens utilizes Arm Cortex-A processors in industrial sensors for real-time data analysis, detecting faults before they cause significant downtime. They rely on Arm-based processors for their Industrial Edge computing solutions. + +### Wildlife Conservation - Smart Camera Traps + +- Arm-based smart camera traps can identify animal movements or detect poachers using TinyML models. These energy-efficient devices can operate in remote areas without relying on external power sources. 
+
+- **Example**: Conservation X Labs uses Arm Cortex-M microcontrollers to power camera traps, helping monitor endangered species in the wild.
+
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
index 8d591f636a..80080ef9e8 100644
--- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
@@ -2,13 +2,17 @@
 # User change
 title: "Build a Simple PyTorch Model"
 
-weight: 9 # 1 is first, 2 is second, etc.
+weight: 7 # 1 is first, 2 is second, etc.
 
 # Do not modify these elements
 layout: "learningpathall"
 ---
 
-With our Environment ready, we will create a simple program to test our setup. This example will define a simple feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between. Create a file called simple_nn.py with the following code:
+With your environment ready, you can create a simple program to test the setup.
+
+This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between.
+
+Use a text editor to create a file named `simple_nn.py` with the following code:
 
 ```python
 import torch
@@ -55,45 +59,55 @@ with open("simple_nn.pte", "wb") as file:
     print("Model successfully exported to simple_nn.pte")
 ```
 
-Run it from your terminal:
+Run the model from the Linux command line:
 
 ```console
 python3 simple_nn.py
 ```
 
-If everything runs successfully, the output will be:
-```bash { output_lines = "1" }
+The output is:
+
+```output
 Model successfully exported to simple_nn.pte
 ```
 
-Finally, the model is saved as a .pte file, which is the format used by ExecuTorch for deploying models to the edge.
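To make the shapes in the example concrete, the forward pass of the two-linear-layer network can be sketched without any framework. This is an illustrative sketch only: the hidden width of 16 and the constant weights are assumptions, not values taken from `simple_nn.py`.

```python
# Dependency-free sketch of what a 2-linear-layer feedforward model computes:
#   y = W2 @ relu(W1 @ x + b1) + b2
# Sizes are illustrative: input 10, hidden 16 (assumed), output 2.

def linear(x, weights, bias):
    # weights: one row of weights per output neuron, bias: one value per output
    return [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, bias)]

def relu(v):
    return [max(0.0, a) for a in v]

def forward(x, w1, b1, w2, b2):
    return linear(relu(linear(x, w1, b1)), w2, b2)

x = [0.1] * 10                          # one input sample, shape [1, 10]
w1 = [[0.05] * 10 for _ in range(16)]   # hidden layer weights, shape [16, 10]
b1 = [0.0] * 16
w2 = [[0.02] * 16 for _ in range(2)]    # output layer weights, shape [2, 16]
b2 = [0.0] * 2

out = forward(x, w1, b1, w2, b2)
print(len(out))  # 2 values, matching the [1, 2] output tensor shape shown later
```

In the real model, PyTorch's `nn.Linear` layers hold trained parameters instead of these constants, but the data flow is the same.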
-Now, we will run the ExecuTorch version, first run:
+The model is saved as a .pte file, which is the format used by ExecuTorch for deploying models to the edge.
+
+To run the ExecuTorch version, first build the executable:
 
 ```console
 # Clean and configure the build system
-rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..
+(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
 
 # Build the executor_runner target
-cmake --build cmake-out --target executor_runner -j9
+cmake --build cmake-out --target executor_runner -j$(nproc)
 ```
 
-You should see an output similar to:
-```bash { output_lines = "1" }
+The build output ends with:
+
+```output
+[100%] Linking CXX executable executor_runner
 [100%] Built target executor_runner
 ```
 
-Now, run the executor_runner with the Model:
+When the build is complete, run the executor_runner with the model as an argument:
+
 ```console
 ./cmake-out/executor_runner --model_path simple_nn.pte
 ```
 
-Expected Output: Since the model is a simple feedforward model, you can expect a tensor of shape [1, 2]
-
-```bash { output_lines = "1-3" }
-Input tensor shape: [1, 10]
-Output tensor shape: [1, 2]
-Inference output: tensor([[0.5432, -0.3145]]) #will vary due to random initialization
+Since the model is a simple feedforward model, you see a tensor of shape [1, 2]:
+
+```output
+I 00:00:00.006598 executorch:executor_runner.cpp:73] Model file simple_nn.pte is loaded.
+I 00:00:00.006628 executorch:executor_runner.cpp:82] Using method forward
+I 00:00:00.006635 executorch:executor_runner.cpp:129] Setting up planned buffer 0, size 320.
+I 00:00:00.007225 executorch:executor_runner.cpp:152] Method loaded.
+I 00:00:00.007237 executorch:executor_runner.cpp:162] Inputs prepared.
+I 00:00:00.012885 executorch:executor_runner.cpp:171] Model executed successfully.
+I 00:00:00.012896 executorch:executor_runner.cpp:175] 1 outputs: +Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723]) ``` -If the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes. +When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes. diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md index f8cae1b21f..f8e6f6d0a4 100644 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md +++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md @@ -2,62 +2,127 @@ # User change title: "Environment Setup on Host Machine" -weight: 6 # 1 is first, 2 is second, etc. +weight: 4 # 1 is first, 2 is second, etc. # Do not modify these elements layout: "learningpathall" --- ## Before you begin -These instructions have been tested on: -- A GCP Arm-based Tau T2A Virtual Machine instance Running Ubuntu 22.04 LTS. -- Host machine with Ubuntu 24.04 on x86_64 architecture. -- Windows Subsystem for Linux (WSL): Windows x86_64 +You will use a Linux computer to run PyTorch and ExecuTorch to prepare a TinyML model to run on edge devices. -The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices. +The instructions are for Ubuntu 22.04 or newer. -- The Ubuntu version should be `20.04 or higher`. -- If you do not have the board, the `x86_64` architecture must be used because the Corstone-300 FVP is not currently available for the Arm architecture. -- Though Executorch supports Windows via WSL, it is limited in resource. 
+You also need the [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/). If you don't have the board, you can use the Corstone-300 Fixed Virtual Platform (FVP) instead.
+
+{{% notice Note %}}
+Note that the Corstone-300 FVP is not available for the Arm architecture, so your host machine needs to be x86_64.
+{{% /notice %}}
+
+The instructions have been tested on:
+- Arm-based cloud instances running Ubuntu 22.04.
+- Desktop computer with Ubuntu 24.04.
+- Windows Subsystem for Linux (WSL).
+
+The host machine is where you will perform most of your development work, especially compiling code for the target Arm devices.
 
-## Install Executorch
+## Install Python
 
-1. Follow the [Setting Up ExecuTorch guide](https://pytorch.org/executorch/stable/getting-started-setup.html ) to install it.
+Python 3 is included in Ubuntu, but some additional packages are needed.
 
-2. Activate the `executorch` virtual environment from the installation guide to ensure it is ready for use:
 ```console
-conda activate executorch
+sudo apt update
+sudo apt install python-is-python3 gcc g++ make -y
 ```
 
 ## Install PyTorch
-The latest version needs Python 3.8 or later
+
+Create a Python virtual environment using Miniconda.
+
+For Arm Linux:
+
+```console
+curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
+sh ./Miniconda3-latest-Linux-aarch64.sh -b
+eval "$($HOME/miniconda3/bin/conda shell.bash hook)"
+conda --version
+```
+
+For x86_64 Linux:
+
+```console
+curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
+sh ./Miniconda3-latest-Linux-x86_64.sh -b
+eval "$($HOME/miniconda3/bin/conda shell.bash hook)"
+conda --version
+```
+
+Create and activate the Python virtual environment:
+
+```bash
+conda create -yn executorch python=3.10.0
+conda activate executorch
+```
+
+The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
+
+## Install ExecuTorch
+
+From within the Python virtual environment, run the commands below to download the ExecuTorch repository and install the required packages:
+
+```bash
+# Clone the ExecuTorch repo from GitHub
+git clone --branch v0.3.0 https://github.com/pytorch/executorch.git
+cd executorch
+
+# Update and pull submodules
+git submodule sync
+git submodule update --init
+
+# Install the ExecuTorch pip package and its dependencies, as well as
+# development tools like CMake.
+./install_requirements.sh
+```
 
 ## Install Edge Impulse CLI
 
-1. Create an [Edge Impulse Account](https://studio.edgeimpulse.com/signup) if you do not have one
-2. Install the CLI tools in your terminal
+1. Create an [Edge Impulse Account](https://studio.edgeimpulse.com/signup) and sign in.
 
-Ensure you have Nodejs installed
+2. Install the Edge Impulse CLI tools in your terminal.
+
+The Edge Impulse CLI tools require Node.js.
+
+```console
+sudo apt install nodejs npm -y
+```
+
+Confirm `node` is available by running:
 
 ```console
 node -v
 ```
+
+Your version is printed, for example:
+
+```output
+v18.19.1
+```
+
+Install the Edge Impulse CLI using NPM:
+
 ```console
 npm install -g edge-impulse-cli
 ```
+
-3. Install Edge Impulse Screen
+3. Install Screen to use with edge devices
+
 ```console
-sudo apt install screen
+sudo apt install screen -y
 ```
 
 ## Next Steps
-1. If you don't have access to the physical board: Go to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
-2. If you have access to the board: Go to [Setup on Grove - Vision AI Module V2](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
\ No newline at end of file
+
+If you don't have the Grove Vision AI Module board and want to use the Corstone-300 FVP, proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
+
+If you have the Grove board, proceed to [Setup on Grove - Vision AI Module V2](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
index 4e569e9f0a..278320fc74 100644
--- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
+++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
@@ -1,8 +1,8 @@
 ---
 # User change
-title: "Environment Setup Corstone-300 FVP"
+title: "Set up the Corstone-300 FVP"
 
-weight: 7 # 1 is first, 2 is second, etc.
+weight: 5 # 1 is first, 2 is second, etc.
# Do not modify these elements layout: "learningpathall" diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md deleted file mode 100644 index ce0953f67c..0000000000 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/examples-2.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Examples of Arm-based devices and applications -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Raspberry Pi 4 & 5 - -These affordable single-board computers are excellent for prototyping TinyML projects. They are commonly used for prototyping machine learning projects at the edge, such as in object detection and voice recognition for home automation. - -## NXP i.MX RT Microcontrollers -These are low-power microcontrollers that can handle complex TinyML tasks while maintaining energy efficiency, making them ideal for applications like wearable healthcare devices and environmental sensors. - -## STM32 Microcontrollers -Used in industrial IoT applications for predictive maintenance, these microcontrollers are energy-efficient and capable of running TinyML models for real-time anomaly detection in factory machinery. - -## Arduino Nano 33 BLE Sense -This microcontroller, equipped with a suite of sensors, supports TinyML and is ideal for small-scale IoT applications, such as detecting environmental changes and movement patterns. - -## Edge Impulse -This platform offers a suite of tools that enables developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards. 
diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md index bd8b4e268e..73890438c3 100644 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md +++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md @@ -1,8 +1,8 @@ --- # User change -title: "Setup on Grove - Vision AI Module V2" +title: "Set up the Grove Vision AI Module V2" -weight: 8 # 1 is first, 2 is second, etc. +weight: 6 # 1 is first, 2 is second, etc. # Do not modify these elements layout: "learningpathall" diff --git a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md index c354f4640c..410409c2f9 100644 --- a/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md +++ b/content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md @@ -1,6 +1,6 @@ --- title: Troubleshooting and Best Practices -weight: 10 +weight: 8 ### FIXED, DO NOT MODIFY layout: learningpathall
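When scripting around the executor_runner output shown earlier in this patch, the printed tensor line can be post-processed with the Python standard library. This is a hedged sketch: the log line format is assumed from the single sample in the patch and is not a stable ExecuTorch interface.

```python
import re

# Sample log line from executor_runner, as shown in the patch above.
line = "Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723])"

# Extract the tensor sizes and values from the log line (format assumed
# stable only for this illustration).
match = re.search(r"tensor\(sizes=\[([\d, ]+)\], \[([-\d., ]+)\]\)", line)
sizes = [int(s) for s in match.group(1).split(",")]
values = [float(v) for v in match.group(2).split(",")]

print(sizes)   # [1, 2]
print(values)  # [-0.105369, -0.178723]
```

A check like `sizes == [1, 2]` could then serve as a quick pass/fail test when running the model repeatedly.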