diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
index 560ea92f0f..9a04810222 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
@@ -8,7 +8,6 @@ weight: 7 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---
 
-TODO connect this part with the FVP/board?
 
 With our environment ready, you can create a simple program to test the setup. This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between.
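+
+For orientation, the network and one possible ExecuTorch export flow look roughly like the sketch below. The complete program is in the `simple_nn.py` listing that follows; the layer sizes and input shape here are illustrative assumptions.
+
+```python
+import torch
+import torch.nn as nn
+from executorch.exir import to_edge
+
+class SimpleNN(nn.Module):
+    def __init__(self):
+        super().__init__()
+        # Two linear layers with a ReLU activation in between
+        self.fc1 = nn.Linear(4, 8)
+        self.fc2 = nn.Linear(8, 2)
+
+    def forward(self, x):
+        return self.fc2(torch.relu(self.fc1(x)))
+
+model = SimpleNN().eval()
+example_input = (torch.randn(1, 4),)
+
+# Capture the graph, lower it to an ExecuTorch program, and save the .pte
+exported = torch.export.export(model, example_input)
+program = to_edge(exported).to_executorch()
+with open("simple_nn.pte", "wb") as f:
+    f.write(program.buffer)
+```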
 
@@ -62,7 +61,7 @@ print("Model successfully exported to simple_nn.pte")
 
 Run the model from the Linux command line:
 
-```console
+```bash
 python3 simple_nn.py
 ```
 
@@ -76,7 +75,7 @@ The model is saved as a .pte file, which is the format used by ExecuTorch for de
 
-Run the ExecuTorch version, first build the executable:
+To run the ExecuTorch version, first build the executable:
 
-```console
+```bash
 # Clean and configure the build system
 (rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
 
 # Build the executor runner target
 cmake --build cmake-out --target executor_runner -j$(nproc)
 ```
 
@@ -84,7 +83,7 @@ Run the ExecuTorch version, first build the executable:
 
-You see the build output and it ends with:
+You will see the build output, ending with:
 
 ```output
 [100%] Linking CXX executable executor_runner
 ```
 
@@ -93,7 +92,7 @@
 
 When the build is complete, run the executor_runner with the model as an argument:
 
-```console
+```bash
 ./cmake-out/executor_runner --model_path simple_nn.pte
 ```
 
@@ -112,3 +111,30 @@ Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723])
 
 When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
+
+## Running the model on the Corstone-300 FVP
+
+Run the model using:
+
+```bash
+FVP_Corstone_SSE-300_Ethos-U55 -a simple_nn.pte -C mps3_board.visualisation.disable-visualisation=1
+```
+
+{{% notice Note %}}
+The `-C mps3_board.visualisation.disable-visualisation=1` option disables the FVP GUI, which can speed up launch time.
+
+The FVP can be terminated with Ctrl+C.
+{{% /notice %}}
+
+You've now set up your environment for TinyML development and tested a PyTorch and ExecuTorch neural network.
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
index 4372f97265..31af1f637f 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
@@ -61,4 +61,4 @@ pkill -f buck
 
-If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
+If you don't have the Grove AI vision board, use the Corstone-300 FVP and proceed to [Environment Setup Corstone-300 FVP](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
 
-If you have the Grove board proceed o to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
\ No newline at end of file
+If you have the Grove board, proceed to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
index f43e5d74ac..42d2d53d59 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
@@ -26,6 +26,4 @@ Test that the setup was successful by running the `run.sh` script.
 ./run.sh
 ```
 
-TODO connect this part to simple_nn.py part?
-
-You will see a number of examples run on the FVP. This means you can proceed to the next section to test your environment setup.
+You will see a number of examples run on the FVP. You can then proceed to the next section, [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/), to test your environment setup.
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
index 27c9c6ff7e..9d1fbb4c58 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
@@ -35,6 +35,9 @@ Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/see
 
 ![Board connection](Connect.png)
 
+{{% notice Note %}}
+Ensure the board is properly connected and recognized by your computer.
+{{% /notice %}}
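+
+To check that the board has enumerated, you can use standard Linux tools. This is a general sketch, not specific to this board; exact device names vary by system:
+
+```bash
+# List connected USB devices and look for the module
+lsusb
+
+# USB serial devices typically appear as ttyACM* or ttyUSB*
+ls /dev/ttyACM* /dev/ttyUSB* 2>/dev/null
+```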
 
 3. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device.
 
@@ -42,16 +45,6 @@
 ```console
 ./flash_linux.sh
 ```
 
-4. Configure Edge Impulse for the board
-in your terminal, run:
-
-```console
-edge-impulse-daemon
-```
-Follow the prompts to log in.
-
-5. If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse.
-
 ## Next Steps
 1. Go to [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md
deleted file mode 100644
index 57b7585970..0000000000
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Troubleshooting and Best Practices
-weight: 8
-
-### FIXED, DO NOT MODIFY
-layout: learningpathall
----
-
-TODO can these be incorporated in the LP?
-
-## Troubleshooting
-- If you encounter permission issues, try running the commands with sudo.
-- Ensure your Grove - Vision AI Module V2 is properly connected and recognized by your computer.
-- If Edge Impulse CLI fails to detect your device, try unplugging, hold the **Boot button** and replug the USB cable. Release the button once you replug.
-
-## Best Practices
-- Always cross-compile your code on the host machine to ensure compatibility with the target Arm device.
-- Utilize model quantization techniques to optimize performance on constrained devices like the Grove - Vision AI Module V2.
-- Regularly update your development environment and tools to benefit from the latest improvements in TinyML and edge AI technologies
-
-You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network.
\ No newline at end of file