From 98f9d79d98dc539ef0d838a5aed3f83a10628c0a Mon Sep 17 00:00:00 2001 From: RonanSynnottArm Date: Wed, 15 Jan 2025 17:21:06 +0900 Subject: [PATCH 01/50] Updated MLEK to Corstone-320 and removed AVH --- .../mlek/_index.md | 10 +- .../mlek/build.md | 75 ++--------- .../embedded-and-microcontrollers/mlek/fvp.md | 70 ++++++++++ .../embedded-and-microcontrollers/mlek/run.md | 122 +++++++++--------- 4 files changed, 152 insertions(+), 125 deletions(-) create mode 100644 content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md index 14729b1732..4bb9111780 100644 --- a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md @@ -3,22 +3,22 @@ title: Build and run the Arm Machine Learning Evaluation Kit examples minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for embedded software developers interested in learning about machine learning. +who_is_this_for: This is an introductory topic for embedded software developers interested machine learning applications. 
learning_objectives: - Build examples from Machine Learning Evaluation Kit (MLEK) - - Run the examples on Corstone-320 FVP or Virtual Hardware + - Run the examples on Arm Ecosystem FVP prerequisites: - Some familiarity with embedded programming - - Either a Linux machine running Ubuntu, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware) + - Linux host machine running Ubuntu author_primary: Ronan Synnott ### RS: Learning Path hidden until AWS instance updated -draft: true +draft: false cascade: - draft: true + draft: false ### Tags diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/build.md b/content/learning-paths/embedded-and-microcontrollers/mlek/build.md index 98b0e289b8..47663fd459 100644 --- a/content/learning-paths/embedded-and-microcontrollers/mlek/build.md +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/build.md @@ -13,11 +13,7 @@ You can use the MLEK source code to build sample applications and run them on th ## Before you begin -You can use your own Ubuntu Linux host machine or use [Arm Virtual Hardware (AVH)](https://www.arm.com/products/development-tools/simulation/virtual-hardware) for this Learning Path. - -The Ubuntu version should be 20.04 or 22.04. These instructions have been tested on the `x86_64` architecture. You will need a way to interact visually with your machine to run the FVP, because it opens graphical windows for input and output from the software applications. - -If you want to use Arm Virtual Hardware the [Arm Virtual Hardware install guide](/install-guides/avh#corstone) provides setup instructions. +It is recommended to use an Ubuntu Linux host machine. The Ubuntu version should be 20.04 or 22.04. These instructions have been tested on the `x86_64` architecture. ## Build the example application @@ -52,9 +48,6 @@ You can review the installation guides for further details. 
{{% /notice %}}

-
-Both compilers are pre-installed in Arm Virtual Hardware.
-
### Clone the repository

Clone the ML Evaluation Kit repository, and navigate into the new directory:
@@ -69,76 +62,36 @@ git submodule update --init

The default build is Ethos-U55 and Corstone-300. The default build for Ethos-U85 is Corstone-320. Use the `npu-config-name` flag to set Ethos-U85.

-The default compiler is `gcc`, but `armclang` can also be used. Number after `ethos-u85-*` is number of MACs, 128-2048 (2^n).
+The default compiler is `gcc`, but `armclang` can also be used. The number after `ethos-u85-*` is the number of MACs, 128-2048 (2^n).
+
+Use `--make-jobs` to specify the `make -j` value.

You can select either compiler to build applications. You can also try them both and compare the results.

-- Build with Arm GNU Toolchain (`gcc`)
+- Build with Arm GNU Toolchain (`gcc`):

```
-./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu
+./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu --make-jobs 8
```

-- Build with Arm Compiler for Embedded (`armclang`)
+- Build with Arm Compiler for Embedded (`armclang`):

```console
-./build_default.py --npu-config-name ethos-u85-256 --toolchain arm
-```
-
-The build will take a few minutes.
-
-When the build is complete, you will find the examples (`.axf` files) in the `cmake-build-*/bin` directory. The `cmake-build` directory names are specific to the compiler used and Ethos-U85 configuration. Verify that the files have been created by observing the output of the `ls` command
-
-```bash
-ls cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/
+./build_default.py --npu-config-name ethos-u85-256 --toolchain arm --make-jobs 8
```

-The next step is to install the FVP and run it with these example audio clips.
-
-
-## Corstone-320 FVP {#fvp}
-
-This section describes installation of the Corstone-320 to run on your local machine. 
If you are using Arm Virtual Hardware, that comes with the Corstone-300 FVP pre-installed, and you can move on to the next section. You can review Arm's full FVP offer and general installation steps in the [Fast Model and Fixed Virtual Platform](/install-guides/fm_fvp) install guides. +{{% notice Tip %}} +Use `./build_default.py --help` for additional information. -{{% notice Note %}} -The rest of the steps for the Corstone-320 need to be run in a new terminal window. {{% /notice %}} -Open a **new terminal window** and download the Corstone-320 archive. - -```bash -cd $HOME -wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Corstone-IoT/Corstone-320/FVP_Corstone_SSE-320_11.27_25_Linux64.tgz -``` - -Unpack it with `tar`, run the setup script and export the binary paths to the `PATH` environment variable. - -```bash -tar -xf FVP_Corstone_SSE-320_11.27_25_Linux64.tgz -./FVP_Corstone_SSE-320.sh --i-agree-to-the-contained-eula --no-interactive -q -export PATH=$HOME/FVP_Corstone_SSE-320/models/Linux64_GCC-9.3:$PATH -``` - -The FVP requires an additional dependency, `libpython3.9.so.1.0`, which can be installed using a script. Note that this will tinkle with the python installation for the current terminal window, so make sure to open a new one for the next step. - -```bash -source $HOME/FVP_Corstone_SSE-320/scripts/runtime.sh -``` +The build will take a few minutes. -Verify that the FVP was successfully installed by comparing your output from below command. +When the build is complete, you will find the examples (`.axf` files) in the `cmake-build-*/bin` directory. The `cmake-build` directory names are specific to the compiler used and Ethos-U85 configuration. 
Verify that the files have been created by observing the output of the `ls` command:

```bash
-FVP_Corstone_SSE-320
-```
-
-```output
-telnetterminal0: Listening for serial connection on port 5000
-telnetterminal1: Listening for serial connection on port 5001
-telnetterminal2: Listening for serial connection on port 5002
-telnetterminal5: Listening for serial connection on port 5003
-
+ls cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/
```
-
-Now you are ready to test the application with the FVP.
+The next step is to install the FVP and run the built example applications.
diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md b/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md
new file mode 100644
index 0000000000..f500234481
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md
@@ -0,0 +1,70 @@
+---
+# User change
+title: "Install Arm Ecosystem FVP"
+
+weight: 3 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Corstone-320 FVP {#fvp}
+
+This section describes installation of the [Corstone-320 FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms/IoT%20FVPs) to run on your local machine. Similar instructions apply for other platforms.
+
+Arm provides a selection of free-to-use Fixed Virtual Platforms (FVPs) that can be downloaded from the [Arm Developer](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms#Downloads) website.
+
+You can review Arm's full FVP offering and general installation steps in the [Fast Model and Fixed Virtual Platform](/install-guides/fm_fvp) install guide.
+
+{{% notice Note %}}
+It is recommended to perform these steps in a new terminal window. 
+{{% /notice %}} + +Download the Corstone-320 Ecosystem FVP archive: + +```bash +cd $HOME +wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Corstone-IoT/Corstone-320/FVP_Corstone_SSE-320_11.27_25_Linux64.tgz +``` + +Unpack it with `tar`, run the installation script, and add the path to the FVP executable to the `PATH` environment variable. + +```bash +tar -xf FVP_Corstone_SSE-320_11.27_25_Linux64.tgz + +./FVP_Corstone_SSE-320.sh --i-agree-to-the-contained-eula --no-interactive -q + +export PATH=$HOME/FVP_Corstone_SSE-320/models/Linux64_GCC-9.3:$PATH +``` + +The FVP requires an additional dependency, `libpython3.9.so.1.0`, which can be installed using a supplied script. + +```bash +source $HOME/FVP_Corstone_SSE-320/scripts/runtime.sh +``` + +Run the executable: + +```bash +FVP_Corstone_SSE-320 +``` + +You will observe output similar to the following: + +```output +telnetterminal0: Listening for serial connection on port 5000 +telnetterminal1: Listening for serial connection on port 5001 +telnetterminal2: Listening for serial connection on port 5002 +telnetterminal5: Listening for serial connection on port 5003 +``` + +If you encounter graphics driver errors, you can disable the development board and LCD visualization with additional command options: + +```bash +FVP_Corstone_SSE-320 \ + -C mps4_board.visualisation.disable-visualisation=1 \ + -C vis_hdlcd.disable_visualisation=1 +``` + +Stop the executable with `Ctrl+C`. + +Now you are ready to run the MLEK applications on the FVP. diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/run.md b/content/learning-paths/embedded-and-microcontrollers/mlek/run.md index 714002ad57..387dbde15e 100644 --- a/content/learning-paths/embedded-and-microcontrollers/mlek/run.md +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/run.md @@ -2,34 +2,32 @@ # User change title: "Run the examples on the FVP" -weight: 3 # 1 is first, 2 is second, etc. +weight: 4 # 1 is first, 2 is second, etc. 
# Do not modify these elements layout: "learningpathall" --- ## Run an example -Now you are ready to combine the FVP installation and the example application. Navigate to the evaluation kit repository. +Navigate to the evaluation kit repository. ```bash cd ml-embedded-evaluation-kit/ ``` -To run an example on the Corstone-320 FVP target, launch the FVP executable with `-a` to specify the software application. +The built examples (`.axf` files) will be located in a `cmake-*/bin` folder based on the build configuration used. -To run the key word spotting example `ethos-u-kws.axf` compiled with `gcc` use one of the two options below. +Navigate into that folder, and list the images. For example: -## Option 1: On your computer with the FVP installed +```bash +cd cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ + +ls *.axf +``` -Run the FVP. +Use `-a` to specify the application to load to the FVP. -```console -FVP_Corstone_SSE-320 \ - -C mps4_board.subsystem.ethosu.num_macs=256 \ - -C mps4_board.visualisation.disable-visualisation=1 \ - -C vis_hdlcd.disable_visualisation=1 \ - -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -``` +Use `-C mps4_board.subsystem.ethosu.num_macs` to configure the Ethos-U component of the model. {{% notice Note %}} The number of NPU MACs specified in the build MUST match the number specified in the FVP. Else an error similar to the below will be emitted. @@ -39,81 +37,87 @@ E: NPU config mismatch. npu.macs_per_cc=E: NPU config mismatch.. ``` {{% /notice %}} -## Option 2: On Arm Virtual Hardware +You can list all available parameters by running the FVP executable with the `--list-params` option, for example: ```console -VHT_Corstone_SSE-300_Ethos-U55 -a cmake-build-mps3-sse-300-ethos-u55-128-gnu/bin/ethos-u-kws.axf +FVP_Corstone_SSE-320 --list-params > parameters.txt ``` -When the example is running, a telnet instance will open allowing you to interact with the example. 
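When scripting FVP runs, it can also be useful to wait until one of the FVP's serial telnet ports (5000-5003 by default, as listed in the installation step) is actually accepting connections before attaching a terminal. A minimal sketch, where the helper name and port choice are our own and not part of the FVP or MLEK tooling:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts a connection, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the FVP's telnet terminal is listening.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.2)
    return False

if __name__ == "__main__":
    # 5000 is the first telnet terminal the FVP reports by default.
    ready = wait_for_port("127.0.0.1", 5000, timeout=5.0)
    print("port ready" if ready else "port not ready")
```

With the FVP running, the same helper can gate an automated `telnet 127.0.0.1 5000` connection from a test harness.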
-{{% notice Note %}} -It may take some time to initialize the terminal, please be patient. -If you see warnings regarding loading the image, these can likely be ignored. -{{% /notice %}} +### Run the application -## Interact with the application +```console +FVP_Corstone_SSE-320 \ + -C mps4_board.subsystem.ethosu.num_macs=256 \ + -C mps4_board.visualisation.disable-visualisation=1 \ + -C vis_hdlcd.disable_visualisation=1 \ + -a ethos-u-kws.axf +``` -Use the menu to control the application. For the key word spotting application enter 1 to classify the next audio clip. +If adding configuration options becomes cumbersome, it can be easier to specify them in a configuration file (remove the `-C` option) and then use that on the command line (`-f`). -![terminal #center](term.png) +#### config.txt +``` +mps4_board.subsystem.ethosu.num_macs=256 +mps4_board.visualisation.disable-visualisation=1 +vis_hdlcd.disable_visualisation=1 +``` -The results of the classification will appear in the visualization window of the FVP. +The command line becomes: +```console +FVP_Corstone_SSE-320 -f config.txt -a ethos-u-kws.axf +``` -The display shows a 98% chance of the audio clips sound was down. +The application executes and identifies words spoken within audio files. -![visualization #center](vis.png) +Repeat with any of the other built applications. -End the simulation by pressing Control-C in the terminal where to started the FVP. +Full instructions are provided in the evaluation kit [documentation](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+/HEAD/docs/quick_start.md). -You now have the ML Evaluation Kit examples running. Experiment with the different examples provided. -## Addendum: Setting model parameters +## Addendum: Speed up FVP execution -You can specify additional parameters to configure certain aspects of the simulated Corstone-300. +By default, the examples are built with Ethos-U timing enabled. 
This provides benchmarking information, but the result is that the FVP executes relatively slowly. -### List parameters +The build system has a macro `-DETHOS_U_NPU_TIMING_ADAPTER_ENABLED` defined to control this. -List the available parameters by running the FVP executable with the `--list-params` option, for example: +Modify the command `build_default.py` passes to `cmake` to include this setting (`OFF`). Search for `cmake_command` and modify as follows: -```console -FVP_Corstone_SSE-320 --list-params > parameters.txt +#### build_default.py ``` - -{{% notice Note %}} -If you are running with Arm Virtual Hardware substitute `VHT_Corstone_SSE-300_Ethos-U55` as the executable name. -{{% /notice %}} - -Open the file `parameters.txt` to see all of the possible parameters and the default values. - -### Set parameters - -Individual parameters can be set with the `-C` command option. - -For example, to put the Ethos-U component into fast execution mode: - -```console -FVP_Corstone_SSE-320 -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -C mps4_board.subsystem.ethosu.extra_args="--fast" +cmake_command = ( + f"{cmake_path} -B {build_dir} -DTARGET_PLATFORM={target_platform}" + f" -DTARGET_SUBSYSTEM={target_subsystem}" + f" -DCMAKE_TOOLCHAIN_FILE={cmake_toolchain_file}" + f" -DETHOS_U_NPU_ID={ethos_u_cfg.processor_id}" + f" -DETHOS_U_NPU_CONFIG_ID={ethos_u_cfg.config_id}" + " -DTENSORFLOW_LITE_MICRO_CLEAN_DOWNLOADS=ON" + " -DETHOS_U_NPU_TIMING_ADAPTER_ENABLED=OFF" +) ``` -{{% notice Note %}} -Do not use fast execution mode whilst benchmarking performance. -{{% /notice %}} -To set multiple parameters it may be easier to list them in a text file (without `-C`) and use `-f` to specify the file. 
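If you script your runs, generating that parameter file programmatically avoids hand-editing. A short sketch; the file name and parameter values are just the ones used in this example:

```python
from pathlib import Path

# FVP parameters, one per line, exactly as they would follow -C on the command line.
OPTIONS = [
    'mps4_board.visualisation.disable-visualisation=1',
    'mps4_board.subsystem.ethosu.extra_args="--fast"',
]

def write_options(path: str, options) -> None:
    """Write an FVP options file suitable for the -f command-line flag."""
    Path(path).write_text("\n".join(options) + "\n")

if __name__ == "__main__":
    write_options("options.txt", OPTIONS)
    print(Path("options.txt").read_text(), end="")
```

The resulting file is passed to the FVP unchanged with `-f options.txt`.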
+Rebuild the applications as before, for example: +``` +./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu --make-jobs 8 +``` -For example, use a text editor to create a file named `options.txt` with the contents: +Add additional configuration option (`mps4_board.subsystem.ethosu.extra_args`) to the FVP command line: -```console +#### config.txt +``` +mps4_board.subsystem.ethosu.num_macs=256 mps4_board.visualisation.disable-visualisation=1 +vis_hdlcd.disable_visualisation=1 mps4_board.subsystem.ethosu.extra_args="--fast" ``` -Run the FVP with the `-f` option and the `options.txt` file: +Run the application again, and notice how much faster execution completes. ```console -FVP_Corstone_SSE-320 -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -f options.txt +FVP_Corstone_SSE-320 -f config.txt -a ethos-u-kws.axf ``` -Full instructions are provided in the evaluation kit [documentation](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+/HEAD/docs/quick_start.md). +{{% notice Note %}} +Do not use fast execution mode whilst benchmarking performance. +{{% /notice %}} -You have now run an example application on an Arm Fixed Virtual Platform. 
\ No newline at end of file From dc9d7b2a04bdd1a838770c093594301edb8baf37 Mon Sep 17 00:00:00 2001 From: RonanSynnottArm Date: Thu, 16 Jan 2025 10:08:05 +0900 Subject: [PATCH 02/50] Address review feedback --- .../embedded-and-microcontrollers/mlek/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md index 4bb9111780..9f32528b0f 100644 --- a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md @@ -3,7 +3,7 @@ title: Build and run the Arm Machine Learning Evaluation Kit examples minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for embedded software developers interested machine learning applications. +who_is_this_for: This is an introductory topic for embedded software developers interested in machine learning applications. learning_objectives: - Build examples from Machine Learning Evaluation Kit (MLEK) @@ -11,7 +11,7 @@ learning_objectives: prerequisites: - Some familiarity with embedded programming - - Linux host machine running Ubuntu + - A Linux host machine running Ubuntu author_primary: Ronan Synnott From bf8829f502aabc8e394e793a5995ee663cef904d Mon Sep 17 00:00:00 2001 From: BmanClark <55798725+BmanClark@users.noreply.github.com> Date: Thu, 16 Jan 2025 11:27:56 +0000 Subject: [PATCH 03/50] Update README.md fix typo in create learning path address that was causing 404 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index fb65046964..f2934b727d 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ The Learning Paths created here are maintained by Arm and the Arm software devel All contributions are welcome as long as they relate to software development for the Arm architecture. 
* Write a Learning Path (or improve existing content) - * Fork this repo and submit pull requests; follow the step by step instructions in [Create a Learning Path](https://learn.arm.com//learning-paths/cross-platform/_example-learning-path/) on the website. + * Fork this repo and submit pull requests; follow the step by step instructions in [Create a Learning Path](https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/) on the website. * Ideas for a new Learning Path * Create a new GitHub idea under the [Discussions](https://github.com/ArmDeveloperEcosystem/arm-learning-paths/discussions) area in this GitHub repo. * Log a code issue (or other general issues) From 6995ee00e63468be724ca6310ff5919a2894320a Mon Sep 17 00:00:00 2001 From: Ben Clark Date: Thu, 16 Jan 2025 12:07:16 +0000 Subject: [PATCH 04/50] Adding ExecuTorch profiling instructions --- .../nn-profiling-executorch.md | 91 +++++++++++++++++++ 1 file changed, 91 insertions(+) create mode 100644 content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md new file mode 100644 index 0000000000..d1705bcaab --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md @@ -0,0 +1,91 @@ +--- +title: ML Profiling of an ExecuTorch model +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## ExecuTorch Profiling Tools +[ExecuTorch](https://pytorch.org/executorch/stable/index.html) can be used for running PyTorch models on constrained devices like mobile. 
As so many models are developed in PyTorch, this is a useful way to quickly deploy them to mobile devices, without needing conversion tools like Google's [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch) to turn them into tflite. + +To get started on ExecuTorch, you can follow the instructions on the [PyTorch website](https://pytorch.org/executorch/stable/getting-started-setup). Further, to then deploy on Android, the instructions are [here](https://pytorch.org/executorch/stable/demo-apps-android.html). If you haven't already got ExecuTorch running on Android, you should follow these instructions first. + +ExecuTorch comes with a set of profiling tools, but currently they are aimed at Linux, not Android where you will want to deploy. The instructions to profile on Linux are [here](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), but we will look at how to adapt them for Android. + +## Profiling on Android + +To profile on Android, the steps are the same as [Linux](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), except that we need to generate the ETDump file on an Android device. + +To start with, generate the ETRecord exactly as per the Linux instructions. + +Next, follow the instructions to create the ExecuTorch bundled program that you'll need to generate the ETDump. You'll copy this to your Android device together with the runner program you're about to compile. + +To compile the runner program you'll need to adapt the `build_example_runner.sh` script in the instructions (located in the `examples/devtools` subfolder of the ExecuTorch repository) to compile it for Android. Copy the script and rename the copy to `build_android_example_runner.sh`, ready for editing. Remove all lines with `coreml` in them, and the options dependent on it, as these are not needed for Android. + +You'll need to set the `ANDROID_NDK` environment variable to point to your Android NDK installation. 
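A wrong `ANDROID_NDK` path only fails later, deep inside CMake, so it is worth checking the path up front. A small sketch; the helper and the example path are illustrative and not part of ExecuTorch or the NDK:

```python
import os

def find_ndk_toolchain(ndk_root: str):
    """Return the android.toolchain.cmake path if ndk_root looks like an NDK, else None."""
    candidate = os.path.join(ndk_root, "build", "cmake", "android.toolchain.cmake")
    return candidate if os.path.isfile(candidate) else None

if __name__ == "__main__":
    # Example path only - substitute your own installation.
    ndk = os.path.expanduser("~/Android/Sdk/ndk/28.0.12674087")
    print(find_ndk_toolchain(ndk) or "no NDK toolchain found - check ANDROID_NDK")
```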
At the top of the `main()` function add: + +```bash + export ANDROID_NDK=~/Android/Sdk/ndk/28.0.12674087 # replace this with the correct path for your NDK installation + export ANDROID_ABI=arm64-v8a +``` + +Next add Android options to the first `cmake` configuration line in `main()`, that configures the building of the ExecuTorch library. Change it to: + +```bash + cmake -DCMAKE_INSTALL_PREFIX=cmake-out \ + -DCMAKE_BUILD_TYPE=Release \ + -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \ + -DANDROID_ABI="${ANDROID_ABI}" \ + -DEXECUTORCH_BUILD_XNNPACK=ON \ + -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \ + -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \ + -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \ + -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \ + -DEXECUTORCH_BUILD_DEVTOOLS=ON \ + -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \ + -Bcmake-out . +``` + +The `cmake` build step for the ExecuTorch library stays the same, as do the next lines setting up local variables. + +Next we need to adapt the options to Android in the second `cmake` configuration line, that configures the building of the runner. This now becomes: + +```bash + cmake -DCMAKE_PREFIX_PATH="${cmake_prefix_path}" \ + -Dexecutorch_DIR="${PWD}/cmake-out/lib/cmake/ExecuTorch" -Dgflags_DIR="${PWD}/cmake-out/third-party/gflags" \ + -DCMAKE_BUILD_TYPE=Release \ + -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \ + -DANDROID_ABI="${ANDROID_ABI}" \ + -B"${build_dir}" \ + "${example_dir}" +``` + +Once the configuration lines are changed, you can now run the script `./build_android_example_runner.sh` to build the runner program. Once compiled you can find the executable `example_runner` in `cmake-out/examples/devtools/`. + +Copy `example_runner` and the ExecuTorch bundled program to your Android device. 
Do this with adb: + +```bash +adb push example_runner /data/local/tmp/ +adb push bundled_program.bp /data/local/tmp/ +adb shell +chmod 777 /data/local/tmp/example_runner +./example_runner --bundled_program_path="bundled_program.bp" +exit +adb pull /data/local/tmp/etdump.etdp . +``` + +You now have the ETDump file ready to analyse with an ExecuTorch Inspector, as per the Linux instructions. + +To get a full display of the operators and their timings you can just do: + +```python +from executorch.devtools import Inspector + +etrecord_path = "etrecord.bin" +etdump_path = "etdump.etdp" +inspector = Inspector(etdump_path=etdump_path, etrecord=etrecord_path) +inspector.print_data_tabular() +``` + +However, as the [ExecuTorch profiling page](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html) explains, there are data analysis options available. These enable you to quickly find the slowest layer, group operators etc. Both the `EventBlock` and `DataFrame` approaches work well. However, at time of writing, the `find_total_for_module()` function has a [bug](https://github.com/pytorch/executorch/issues/7200) and returns incorrect values - hopefully this will soon be fixed. From c2d4cd683648a5e7c7642de5b2283ad23f4a3250 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 16 Jan 2025 16:28:39 +0000 Subject: [PATCH 05/50] Editorial first-pass. 
--- .../nn-profiling-executorch.md | 40 ++++++++++++------- 1 file changed, 25 insertions(+), 15 deletions(-) diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md index d1705bcaab..2c35a45492 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md @@ -7,30 +7,34 @@ layout: learningpathall --- ## ExecuTorch Profiling Tools -[ExecuTorch](https://pytorch.org/executorch/stable/index.html) can be used for running PyTorch models on constrained devices like mobile. As so many models are developed in PyTorch, this is a useful way to quickly deploy them to mobile devices, without needing conversion tools like Google's [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch) to turn them into tflite. +You can use [ExecuTorch](https://pytorch.org/executorch/stable/index.html) for running PyTorch models on constrained devices like mobile. As so many models are developed in PyTorch, this is a useful way to quickly deploy them to mobile devices, without the requirement for conversion tools such as Google's [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch) to convert them into tflite. -To get started on ExecuTorch, you can follow the instructions on the [PyTorch website](https://pytorch.org/executorch/stable/getting-started-setup). Further, to then deploy on Android, the instructions are [here](https://pytorch.org/executorch/stable/demo-apps-android.html). If you haven't already got ExecuTorch running on Android, you should follow these instructions first. +To get started on ExecuTorch, you can follow the instructions on the [PyTorch website](https://pytorch.org/executorch/stable/getting-started-setup). 
To then deploy on Android, you can also find instructions on the [PyTorch website](https://pytorch.org/executorch/stable/demo-apps-android.html). If you do not already have ExecuTorch running on Android, follow these instructions first.

-ExecuTorch comes with a set of profiling tools, but currently they are aimed at Linux, not Android where you will want to deploy. The instructions to profile on Linux are [here](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), but we will look at how to adapt them for Android.
+ExecuTorch comes with a set of profiling tools, but currently they are aimed at Linux, and not Android. The instructions to profile on Linux are [here](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), and you can adapt them for use on Android.

## Profiling on Android

-To profile on Android, the steps are the same as [Linux](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), except that we need to generate the ETDump file on an Android device.
+To profile on Android, the steps are the same as for [Linux](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), except that you need to generate the ETDump file on an Android device.

-To start with, generate the ETRecord exactly as per the Linux instructions.
+To start, generate the ETRecord in exactly the same way as described for the Linux instructions.

-Next, follow the instructions to create the ExecuTorch bundled program that you'll need to generate the ETDump. You'll copy this to your Android device together with the runner program you're about to compile.
+Next, follow the instructions to create the ExecuTorch bundled program that you will need to generate the ETDump. You will copy this to your Android device together with the runner program that you are about to compile. 
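Looking ahead, the push, run, and pull cycle used later on this page is easy to script once everything is built. A sketch of the command sequence; the device paths and file names follow the ones used on this page, and the helper itself is ours, not part of the ExecuTorch tooling:

```python
DEVICE_DIR = "/data/local/tmp"

def deploy_commands(runner: str, program: str, etdump: str = "etdump.etdp"):
    """Return the adb commands to push, run, and collect results, in order."""
    return [
        ["adb", "push", runner, DEVICE_DIR],
        ["adb", "push", program, DEVICE_DIR],
        ["adb", "shell", f"chmod 777 {DEVICE_DIR}/{runner}"],
        ["adb", "shell", f"cd {DEVICE_DIR} && ./{runner} --bundled_program_path={program}"],
        ["adb", "pull", f"{DEVICE_DIR}/{etdump}", "."],
    ]

if __name__ == "__main__":
    for cmd in deploy_commands("example_runner", "bundled_program.bp"):
        print(" ".join(cmd))  # or subprocess.run(cmd, check=True) to execute
```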
-To compile the runner program you'll need to adapt the `build_example_runner.sh` script in the instructions (located in the `examples/devtools` subfolder of the ExecuTorch repository) to compile it for Android. Copy the script and rename the copy to `build_android_example_runner.sh`, ready for editing. Remove all lines with `coreml` in them, and the options dependent on it, as these are not needed for Android. +To compile the runner program, you will need to adapt the `build_example_runner.sh` script in the instructions that are located in the `examples/devtools` subfolder of the ExecuTorch repository to compile it for Android. Copy the script and rename the file to `build_android_example_runner.sh`, ready for editing. Remove all lines with `coreml` in them, and the options dependent on it, as these are not needed for Android. -You'll need to set the `ANDROID_NDK` environment variable to point to your Android NDK installation. At the top of the `main()` function add: +You then need to set the `ANDROID_NDK` environment variable to point to your Android NDK installation. + +At the top of the `main()` function add: ```bash export ANDROID_NDK=~/Android/Sdk/ndk/28.0.12674087 # replace this with the correct path for your NDK installation export ANDROID_ABI=arm64-v8a ``` -Next add Android options to the first `cmake` configuration line in `main()`, that configures the building of the ExecuTorch library. Change it to: +Next, add Android options to the first `cmake` configuration line in `main()`, that configures the building of the ExecuTorch library. + +Change it to: ```bash cmake -DCMAKE_INSTALL_PREFIX=cmake-out \ @@ -49,7 +53,9 @@ Next add Android options to the first `cmake` configuration line in `main()`, th The `cmake` build step for the ExecuTorch library stays the same, as do the next lines setting up local variables. -Next we need to adapt the options to Android in the second `cmake` configuration line, that configures the building of the runner. 
This now becomes: +Next you will adapt the options to Android in the second `cmake` configuration line, which is the one that configures the building of the runner. + +Change it to: ```bash cmake -DCMAKE_PREFIX_PATH="${cmake_prefix_path}" \ @@ -61,9 +67,13 @@ Next we need to adapt the options to Android in the second `cmake` configuration "${example_dir}" ``` -Once the configuration lines are changed, you can now run the script `./build_android_example_runner.sh` to build the runner program. Once compiled you can find the executable `example_runner` in `cmake-out/examples/devtools/`. +Once you have changed the configuration lines, you can now run the script `./build_android_example_runner.sh` to build the runner program. + +Once compiled, find the executable `example_runner` in `cmake-out/examples/devtools/`. + +Copy `example_runner` and the ExecuTorch bundled program to your Android device. -Copy `example_runner` and the ExecuTorch bundled program to your Android device. Do this with adb: +Do this with adb: ```bash adb push example_runner /data/local/tmp/ @@ -75,9 +85,9 @@ exit adb pull /data/local/tmp/etdump.etdp . ``` -You now have the ETDump file ready to analyse with an ExecuTorch Inspector, as per the Linux instructions. +You now have the ETDump file ready to analyze with an ExecuTorch Inspector, in line with the Linux instructions. -To get a full display of the operators and their timings you can just do: +To get a full display of the operators and their timings, use the following: ```python from executorch.devtools import Inspector @@ -88,4 +98,4 @@ inspector = Inspector(etdump_path=etdump_path, etrecord=etrecord_path) inspector.print_data_tabular() ``` -However, as the [ExecuTorch profiling page](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html) explains, there are data analysis options available. These enable you to quickly find the slowest layer, group operators etc. 
Both the `EventBlock` and `DataFrame` approaches work well. However, at time of writing, the `find_total_for_module()` function has a [bug](https://github.com/pytorch/executorch/issues/7200) and returns incorrect values - hopefully this will soon be fixed. +However, as the [ExecuTorch profiling page](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html) explains, there are data analysis options available. These enable you to quickly find specified criteria such as the slowest layer or group operators. Both the `EventBlock` and `DataFrame` approaches work well. However, at time of writing, the `find_total_for_module()` function has a [bug](https://github.com/pytorch/executorch/issues/7200) and returns incorrect values - hopefully this will soon be fixed. From 6d0f5083aeaa88003ff5cd43fbd26d73b1f02cb1 Mon Sep 17 00:00:00 2001 From: Daniel Nguyen Date: Mon, 20 Jan 2025 19:33:09 -0600 Subject: [PATCH 06/50] refinfra: minor wording update Minor wording updates to improve readability. Adds external links for further reading. Signed-off-by: Daniel Nguyen --- .../refinfra-quick-start/_index.md | 6 ++++-- .../refinfra-quick-start/build-2.md | 4 ++-- .../refinfra-quick-start/environment-setup-1.md | 4 ++-- .../refinfra-quick-start/test-with-fvp-3.md | 2 +- 4 files changed, 9 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md index 54966bcf90..c0a2fbd26e 100644 --- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md @@ -11,9 +11,11 @@ learning_objectives: - Test the reference firmware stack. prerequisites: - - Some understanding of the Reference Design software stack architecture. 
+ - Some understanding of the [Reference Design software stack architecture](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html). + - Some understanding of the Linux command line. + - Optionally a basic understanding of Docker and containers. -author_primary: Tom Pilar +author_primary: Tom Pilar, Daniel Nguyen ### Tags skilllevels: Introductory diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md index e3917bd5e3..f89f148b65 100644 --- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md +++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md @@ -1,5 +1,5 @@ --- -title: Build the software stack +title: Build the Software Stack weight: 3 ### FIXED, DO NOT MODIFY @@ -79,5 +79,5 @@ lrwxrwxrwx 1 ubuntu ubuntu 30 Jan 12 15:35 tf-bl31.bin -> ../components/rdn lrwxrwxrwx 1 ubuntu ubuntu 33 Jan 12 15:35 uefi.bin -> ../components/css-common/uefi.bin ``` -The `fip-uefi.bin` firmware image will contain the `TF-A BL2` boot loader image which is responsible for unpacking the rest of the firmware as well as the firmware that TF-A BL2 unpacks. This includes the `SCP BL2` (`scp_ramfw.bin`) image that is unpacked by the AP firmware and transferred over to the SCP TCMs using the SCP shared data store module. Along with the FIP image, the FVP also needs the `TF-A BL1` image and the `SCP BL1` (`scp_romfw.bin`) image files. +The `fip-uefi.bin` [firmware image package](https://trustedfirmware-a.readthedocs.io/en/v2.5/getting_started/tools-build.html) will contain the `TF-A BL2` boot loader image which is responsible for unpacking the rest of the firmware as well as the firmware that TF-A BL2 unpacks. This includes the `SCP BL2` (`scp_ramfw.bin`) image that is unpacked by the AP firmware and transferred over to the SCP TCMs using the SCP shared data store module. 
Along with the FIP image, the FVP also needs the `TF-A BL1` image and the `SCP BL1` (`scp_romfw.bin`) image files.
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
index 3b96c15e36..0228a2b61e 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
@@ -12,7 +12,7 @@ This learning path is based on the `Neoverse N2` Reference Design (`RD-N2`).
 
 ## Before you begin
 
-You can use either an AArch64 or x86_64 host machine running Ubuntu Linux 22.04. 64GB of free disk space and 32GB of RAM is minimum requirement to sync and build the platform software stack. 48GB of RAM is recommended.
+You can use either an AArch64 or x86_64 host machine running **Ubuntu Linux 22.04**. 64GB of free disk space and 32GB of RAM are the minimum requirements to sync and build the platform software stack. 48GB of RAM is recommended.
 
 Follow the instructions to set up your environment using the information found at the [Neoverse RD-N2 documentation site](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdn2.html).
 
@@ -53,7 +53,7 @@ Bug reports: https://bugs.chromium.org/p/gerrit/issues/entry?template=Repo+tool+
 
 Create a new directory in to which you can download the source code, build the stack, and then obtain the manifest file.
 
-To obtain the manifest, choose a tag of the platform reference firmware. [RD-INFRA-2023.09.29](https://neoverse-reference-design.docs.arm.com/en/latest/releases/RD-INFRA-2023.09.29/release_note.html) is used here. See the [release notes](https://neoverse-reference-design.docs.arm.com/en/latest/) for more information.
+To obtain the manifest, choose a tag of the platform reference firmware. 
[RD-INFRA-2023.09.29](https://neoverse-reference-design.docs.arm.com/en/latest/releases/RD-INFRA-2023.09.29/release_note.html) is used here, although it is recommended to use the latest version available. See the [release notes](https://neoverse-reference-design.docs.arm.com/en/latest/) for more information. Specify the platform you would like with the manifest. In the [manifest repo](https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests) there are a number of available platforms. In this case, select `pinned-rdn2.xml`. diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md index 5adbb52e71..f8c0ae9b2d 100644 --- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md +++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md @@ -127,7 +127,7 @@ In your original terminal, launch the FVP using the supplied script: Observe the platform is running successfully: ![fvp terminals alt-text#center](images/uefi.png "Figure 2. 
FVP Terminals") -To boot into `busy-box`, use: +You can also boot into `busy-box`, using the command: ```bash ./boot.sh -p rdn2 ``` From 41ea4f4a1b1becb3082634a581b6b846a7d82aa6 Mon Sep 17 00:00:00 2001 From: Ben Clark Date: Tue, 21 Jan 2025 13:54:08 +0000 Subject: [PATCH 07/50] addition of = and using " mean that this code will work with both build.gradle and build.gradle.kts (the new kotlin script way to do gradle) --- .../profiling-ml-on-arm/app-profiling-streamline.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md index c72893edb1..118c9176e6 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md @@ -128,8 +128,8 @@ Now add the code below to the `build.gradle` file of the Module you wish to prof ```gradle externalNativeBuild { cmake { - path file('src/main/cpp/CMakeLists.txt') - version '3.22.1' + path = file("src/main/cpp/CMakeLists.txt") + version = "3.22.1" } } ``` From a8237afac422449ed3bf737cc7ff05dcf3287aa3 Mon Sep 17 00:00:00 2001 From: Joe <4088382+JoeStech@users.noreply.github.com> Date: Wed, 22 Jan 2025 14:09:30 -0700 Subject: [PATCH 08/50] change graviton references to axion and show how to set up network rules --- .../servers-and-cloud-computing/rag/_index.md | 1 + .../servers-and-cloud-computing/rag/chatbot.md | 16 ++++++++++++++-- .../servers-and-cloud-computing/rag/rag_llm.md | 2 +- 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_index.md b/content/learning-paths/servers-and-cloud-computing/rag/_index.md index ebfe968750..123f4031ad 100644 --- 
a/content/learning-paths/servers-and-cloud-computing/rag/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/_index.md
@@ -13,6 +13,7 @@ learning_objectives:
   - Monitor and analyze inference performance metrics.
 
 prerequisites:
+  - A Google Cloud Axion (or other Arm) compute instance with at least 16 cores, 8GB of RAM, and 32GB disk space.
   - Basic understanding of Python and ML concepts.
   - Familiarity with REST APIs and web services.
   - Basic knowledge of vector databases.
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
index fbd872adf5..2ad984a4f5 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
@@ -7,16 +7,30 @@ layout: learningpathall
 ---
 
 ## Access the Web Application
 
-Open the web application in your browser using either the local URL or the external URL:
+Open the web application in your browser using the external URL:
 
 ```bash
-http://localhost:8501 or http://75.101.253.177:8501
+http://[your instance ip]:8501
 ```
 
 {{% notice Note %}}
 To access the links you may need to allow inbound TCP traffic in your instance's security rules. Always review these permissions with caution as they may introduce security vulnerabilities.
 
+For an Axion instance, this can be done as follows from the gcloud CLI:
+
+```bash
+gcloud compute firewall-rules create allow-my-ip \
+    --direction=INGRESS \
+    --network=default \
+    --action=ALLOW \
+    --rules=tcp:8501 \
+    --source-ranges=[your IP]/32 \
+    --target-tags=allow-my-ip
+```
+
+For this to work, you must ensure that the allow-my-ip tag is present on your Axion instance. 
+ {{% /notice %}} ## Upload a PDF File and Create a New Index diff --git a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md index 7725d7658e..38babb632c 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md @@ -10,7 +10,7 @@ layout: "learningpathall" ## Before you begin -This learning path demonstrates how to build and deploy a Retrieval Augmented Generation (RAG) enabled chatbot using open-source Large Language Models (LLMs) optimized for Arm architecture. The chatbot processes documents, stores them in a vector database, and generates contextually-relevant responses by combining the LLM's capabilities with retrieved information. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 22.04 LTS. You need an Arm server instance with at least 16 cores and 8GB of RAM to run this example. Configure disk storage up to at least 32GB. The instructions have been tested on an AWS Graviton4 r8g.16xlarge instance. +This learning path demonstrates how to build and deploy a Retrieval Augmented Generation (RAG) enabled chatbot using open-source Large Language Models (LLMs) optimized for Arm architecture. The chatbot processes documents, stores them in a vector database, and generates contextually-relevant responses by combining the LLM's capabilities with retrieved information. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 22.04 LTS. You need an Arm server instance with at least 16 cores, 8GB of RAM, and a 32GB disk to run this example. The instructions have been tested on a GCP c4a-standard-64 instance. 
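As an illustrative aside, the hardware floor stated above (16 cores, 8GB of RAM, a 32GB disk) can be sanity-checked before you start. The following standard-library Python sketch is one way to do it; the thresholds are copied from the prerequisites, and the memory check assumes a Linux host that exposes `/proc/meminfo`:

```python
import os
import shutil

# Thresholds taken from the Learning Path prerequisites.
MIN_CORES = 16
MIN_RAM_GB = 8
MIN_DISK_GB = 32

def mem_total_gb():
    """Return total RAM in GB from /proc/meminfo, or None if unavailable."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
    except OSError:
        pass
    return None

cores = os.cpu_count() or 0
disk_gb = shutil.disk_usage("/").total / (1024 ** 3)
ram_gb = mem_total_gb()

print(f"CPU cores: {cores} (need >= {MIN_CORES})")
print(f"Disk size: {disk_gb:.1f} GB (need >= {MIN_DISK_GB})")
if ram_gb is not None:
    print(f"RAM: {ram_gb:.1f} GB (need >= {MIN_RAM_GB})")
```

Note that `shutil.disk_usage("/")` reports the root filesystem; if your workspace is on a different mount, pass that path instead.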
## Overview From 90ebd54a24415ea202963d5a9e3a0845db3e1d01 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Wed, 22 Jan 2025 16:20:24 -0500 Subject: [PATCH 09/50] Update _index.md --- .../learning-paths/servers-and-cloud-computing/rag/_index.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_index.md b/content/learning-paths/servers-and-cloud-computing/rag/_index.md index 123f4031ad..d5b5eaa735 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/_index.md @@ -1,5 +1,5 @@ --- -title: Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Arm Servers +title: Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Google Axion processors minutes_to_complete: 45 @@ -35,6 +35,7 @@ operatingsystems: tools_software_languages: - Python - Streamlit + - Google Axion ### FIXED, DO NOT MODIFY # ================================================================================ From f06142f331c77a352d62937350052d4fed3f37a6 Mon Sep 17 00:00:00 2001 From: Annie Tallund Date: Thu, 23 Jan 2025 09:23:51 +0100 Subject: [PATCH 10/50] Rebase TinyML LP WIP --- .../build-model-8.md | 36 ++++++++++++++++--- .../env-setup-5.md | 2 +- .../env-setup-6-FVP.md | 4 +-- .../setup-7-Grove.md | 13 ++----- .../troubleshooting-6.md | 21 ----------- 5 files changed, 36 insertions(+), 40 deletions(-) delete mode 100644 content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md index 560ea92f0f..9a04810222 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md +++ 
b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md @@ -8,7 +8,6 @@ weight: 7 # 1 is first, 2 is second, etc. layout: "learningpathall" --- -TODO connect this part with the FVP/board? With our environment ready, you can create a simple program to test the setup. This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between. @@ -62,7 +61,7 @@ print("Model successfully exported to simple_nn.pte") Run the model from the Linux command line: -```console +```bash python3 simple_nn.py ``` @@ -76,7 +75,7 @@ The model is saved as a .pte file, which is the format used by ExecuTorch for de Run the ExecuTorch version, first build the executable: -```console +```bash # Clean and configure the build system (rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..) @@ -84,7 +83,7 @@ Run the ExecuTorch version, first build the executable: cmake --build cmake-out --target executor_runner -j$(nproc) ``` -You see the build output and it ends with: +You will see the build output and it ends with: ```output [100%] Linking CXX executable executor_runner @@ -93,7 +92,7 @@ You see the build output and it ends with: When the build is complete, run the executor_runner with the model as an argument: -```console +```bash ./cmake-out/executor_runner --model_path simple_nn.pte ``` @@ -112,3 +111,30 @@ Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723]) When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes. 
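To make the runner output above less abstract, the computation this model performs, two linear layers with a ReLU in between, can be written out as a dependency-free sketch. The layer sizes, the random weights, and the all-ones input below are illustrative assumptions, not the values stored in `simple_nn.pte`:

```python
import random

random.seed(0)

# Illustrative shapes only; the real model's dimensions may differ.
IN_DIM, HIDDEN_DIM, OUT_DIM = 4, 8, 2

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def linear(x, weights, bias):
    # y[i] = sum_j weights[i][j] * x[j] + bias[i]
    return [sum(w * xj for w, xj in zip(row, x)) + b for row, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

w1, b1 = rand_matrix(HIDDEN_DIM, IN_DIM), [0.0] * HIDDEN_DIM
w2, b2 = rand_matrix(OUT_DIM, HIDDEN_DIM), [0.0] * OUT_DIM

x = [1.0] * IN_DIM  # assumed input, for illustration only
y = linear(relu(linear(x, w1, b1)), w2, b2)
print("output:", y)  # a length-2 vector
```

The shape of `y` corresponds to the `tensor(sizes=[1, 2], ...)` line in the runner's output; the values differ because the weights here are random.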
+ + +TODO: Debug issues when running the model on the FVP, kindly ignore anything below this +## Running the model on the Corstone-300 FVP + + +Run the model using: + +```bash +FVP_Corstone_SSE-300_Ethos-U55 -a simple_nn.pte -C mps3_board.visualisation.disable-visualisation=1 +``` + +{{% notice Note %}} + +-C mps3_board.visualisation.disable-visualisation=1 disables the FVP GUI. This can speed up launch time for the FVP. + +The FVP can be terminated with Ctrl+C. +{{% /notice %}} + + + +```output + +``` + + +You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network. \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md index 4372f97265..31af1f637f 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md @@ -61,4 +61,4 @@ pkill -f buck If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/) -If you have the Grove board proceed o to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/) \ No newline at end of file +If you have the Grove board proceed to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/) \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md index f43e5d74ac..42d2d53d59 100644 --- 
a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md @@ -26,6 +26,4 @@ Test that the setup was successful by running the `run.sh` script. ./run.sh ``` -TODO connect this part to simple_nn.py part? - -You will see a number of examples run on the FVP. This means you can proceed to the next section to test your environment setup. +You will see a number of examples run on the FVP. This means you can proceed to the next section [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup. \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md index 27c9c6ff7e..9d1fbb4c58 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md @@ -35,6 +35,9 @@ Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/see ![Board connection](Connect.png) +{{% notice Note %}} +Ensure the board is properly connected and recognized by your computer. +{{% /notice %}} 3. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device. @@ -42,16 +45,6 @@ Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/see ./flash_linux.sh ``` -4. Configure Edge Impulse for the board -in your terminal, run: - -```console -edge-impulse-daemon -``` -Follow the prompts to log in. - -5. If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse. - ## Next Steps 1. 
Go to [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup. diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md deleted file mode 100644 index 57b7585970..0000000000 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Troubleshooting and Best Practices -weight: 8 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -TODO can these be incorporated in the LP? - -## Troubleshooting -- If you encounter permission issues, try running the commands with sudo. -- Ensure your Grove - Vision AI Module V2 is properly connected and recognized by your computer. -- If Edge Impulse CLI fails to detect your device, try unplugging, hold the **Boot button** and replug the USB cable. Release the button once you replug. - -## Best Practices -- Always cross-compile your code on the host machine to ensure compatibility with the target Arm device. -- Utilize model quantization techniques to optimize performance on constrained devices like the Grove - Vision AI Module V2. -- Regularly update your development environment and tools to benefit from the latest improvements in TinyML and edge AI technologies - -You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network. 
\ No newline at end of file From 4aa24326c267454d45d9581806e16e043e9bb40c Mon Sep 17 00:00:00 2001 From: George Steed Date: Thu, 23 Jan 2025 11:03:00 +0000 Subject: [PATCH 11/50] sve_armie.md: Fix documentation of how to specify vector length --- .../servers-and-cloud-computing/sve/sve_armie.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md b/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md index baed8f4b5b..a8e769c260 100644 --- a/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md +++ b/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md @@ -80,10 +80,11 @@ Install `qemu-user` to run the example on processors which do not support SVE: ```bash { command_line="user@localhost" } sudo apt install qemu-user -y ``` -Run the example application with a vector length of 256 bits: + +Run the example application with a vector length of 256 bits, note that the vector length is specified in bytes rather than bits: ```bash { command_line="user@localhost | 2" } -qemu-aarch64 -cpu max,sve-default-vector-length=256 ./sve_add.exe +qemu-aarch64 -cpu max,sve-default-vector-length=32 ./sve_add.exe Done. ``` From 853abfee80a33308e99e7e640a94ebbae3f4e8d2 Mon Sep 17 00:00:00 2001 From: Gary Carroll Date: Thu, 16 Jan 2025 16:25:53 +0000 Subject: [PATCH 12/50] Add ACfL system package steps for Amazon Linux 2023 --- content/install-guides/acfl.md | 53 ++++++++++++++++++++++++++-------- 1 file changed, 41 insertions(+), 12 deletions(-) diff --git a/content/install-guides/acfl.md b/content/install-guides/acfl.md index cf070251b5..eeaf5fd5d2 100644 --- a/content/install-guides/acfl.md +++ b/content/install-guides/acfl.md @@ -142,18 +142,20 @@ install takes place **after** ACfL, you will no longer be able to fully uninstall ACfL. 
{{% /notice %}} -## Download and install using System Packages - Ubuntu Linux +## Download and install using System Packages + +### Ubuntu Linux 20.04 and 22.04 Arm Compiler for Linux is available to install with the Ubuntu system package manager `apt` command. -### Setup the ACfL package repository: +#### Set up the ACfL package repository Add the ACfL `apt` package repository to your Ubuntu 20.04 or 22.04 system: ```bash { target="ubuntu:latest" } sudo apt update -sudo apt install -y curl -source /etc/os-release +sudo apt install -y curl environment-modules python3 libc6-dev +. /etc/os-release curl "https://developer.arm.com/packages/ACfL%3A${NAME}-${VERSION_ID/%.*/}/${VERSION_CODENAME}/Release.key" | sudo tee /etc/apt/trusted.gpg.d/developer-arm-com.asc echo "deb https://developer.arm.com/packages/ACfL%3A${NAME}-${VERSION_ID/%.*/}/${VERSION_CODENAME}/ ./" | sudo tee /etc/apt/sources.list.d/developer-arm-com.list sudo apt update @@ -161,7 +163,7 @@ sudo apt update The ACfL Ubuntu package repository is now ready to use. -### Install ACfL +#### Install ACfL Download and install Arm Compiler for Linux with: @@ -169,6 +171,33 @@ Download and install Arm Compiler for Linux with: sudo apt install acfl ``` +### Amazon Linux 2023 + +Arm Compiler for Linux is available to install with either the `dnf` or `yum` system package manager. 
+ +#### Install ACfL from the Amazon Linux 2023 package repository + +Install ACfL and prerequisites from the Amazon Linux 2023 `rpm` package repository with `dnf`: + +```bash +sudo dnf update +sudo dnf install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo dnf config-manager --add-repo https://developer.arm.com/packages/ACfL%3AAmazonLinux-2023/latest/ACfL%3AAmazonLinux-2023.repo +sudo dnf install acfl +``` + +Or using the equivalent `yum` commands: + +```bash +sudo yum update +sudo yum install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo yum config-manager --add-repo https://developer.arm.com/packages/ACfL%3AAmazonLinux-2023/latest/ACfL%3AAmazonLinux-2023.repo +sudo yum install acfl +``` + +The ACfL tools are now ready to use. + + ### Set up environment Arm Compiler for Linux uses environment modules to dynamically modify your user environment. Refer to the [Environment Modules documentation](https://lmod.readthedocs.io/en/latest/#id) for more information. @@ -178,17 +207,17 @@ Set up the environment, for example, in your `.bashrc` and add module files. #### Ubuntu Linux: ```bash { target="ubuntu:latest" } -echo "source /usr/share/modules/init/bash" >> ~/.bashrc +echo ". /usr/share/modules/init/bash" >> ~/.bashrc echo "module use /opt/arm/modulefiles" >> ~/.bashrc -source ~/.bashrc +. ~/.bashrc ``` -#### Red Hat Linux: +#### Red Hat or Amazon Linux: ```bash { target="fedora:latest" } -echo "source /usr/share/Modules/init/bash" >> ~/.bashrc +echo ". /usr/share/Modules/init/bash" >> ~/.bashrc echo "module use /opt/arm/modulefiles" >> ~/.bashrc -source ~/.bashrc +. 
~/.bashrc ``` To list available modules: @@ -217,7 +246,7 @@ Arm Compiler for Linux is available with the [Spack](https://spack.io/) package See the [Arm Compiler for Linux and Arm PL now available in Spack](https://community.arm.com/arm-community-blogs/b/high-performance-computing-blog/posts/arm-compiler-for-linux-and-arm-pl-now-available-in-spack) blog for full details. -### Setup Spack +### Set up Spack Clone the Spack repository and add `bin` directory to the path: @@ -248,7 +277,7 @@ If you wish to install just the Arm Performance Libraries, use: spack install armpl-gcc ``` -### Setup environment +### Set up environment Use the commands below to set up the environment: ```console From fd8eb17592e1b8d7d8730591cdd9433d4340ed95 Mon Sep 17 00:00:00 2001 From: Luke Ireland Date: Thu, 23 Jan 2025 15:04:49 +0000 Subject: [PATCH 13/50] Add ACfL system package steps for RHEL 9 --- content/install-guides/acfl.md | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/content/install-guides/acfl.md b/content/install-guides/acfl.md index eeaf5fd5d2..4786b12d2a 100644 --- a/content/install-guides/acfl.md +++ b/content/install-guides/acfl.md @@ -197,6 +197,31 @@ sudo yum install acfl The ACfL tools are now ready to use. +### Red Hat Enterprise Linux (RHEL) 9 + +Arm Compiler for Linux is available to install with either the `dnf` or `yum` system package manager. 
+ +#### Install ACfL from the RHEL 9 package repository + +Install ACfL and prerequisites from the RHEL 9 `rpm` package repository with `dnf`: + +```bash +sudo dnf update +sudo dnf install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo dnf config-manager --add-repo https://developer.arm.com/packages/ACfL%3ARHEL-9/standard/ACfL%3ARHEL-9.repo +sudo dnf install acfl +``` + +Or using the equivalent `yum` commands: + +```bash +sudo yum update +sudo yum install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo yum config-manager --add-repo https://developer.arm.com/packages/ACfL%3ARHEL-9/standard/ACfL%3ARHEL-9.repo +sudo yum install acfl +``` + +The ACfL tools are now ready to use. ### Set up environment From 8ea04b7d0931d609fd27517428b84c267f8b80ca Mon Sep 17 00:00:00 2001 From: David Mackenzie <93191581+dav-mac@users.noreply.github.com> Date: Thu, 23 Jan 2025 15:24:34 +0000 Subject: [PATCH 14/50] Update index.html Added disclaimer text that learning paths may contain some AI-generated content, as required by AI Office / Matthew Crawford for launch of the Learning Path Assistant tool. --- themes/arm-design-system-hugo-theme/layouts/index.html | 1 + 1 file changed, 1 insertion(+) diff --git a/themes/arm-design-system-hugo-theme/layouts/index.html b/themes/arm-design-system-hugo-theme/layouts/index.html index 8c54d7d7e4..a8ccc18647 100644 --- a/themes/arm-design-system-hugo-theme/layouts/index.html +++ b/themes/arm-design-system-hugo-theme/layouts/index.html @@ -93,6 +93,7 @@

Install Guides

All content is covered by the Creative Commons License{{partial "general-formatting/external-link.html"}}.

+

These Learning Paths may contain some AI-generated content.

From 089ab88ac83404c60693f832fd4327b3e20f5473 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 23 Jan 2025 14:40:11 -0500 Subject: [PATCH 15/50] Update _index.md --- .../learning-paths/servers-and-cloud-computing/rag/_index.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_index.md b/content/learning-paths/servers-and-cloud-computing/rag/_index.md index d5b5eaa735..3a8fca7ce0 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/_index.md @@ -21,10 +21,6 @@ prerequisites: author_primary: Nobel Chowdary Mandepudi -draft: true -cascade: - draft: true - ### Tags skilllevels: Advanced armips: From 20bf6afe8461b686bbf7e58e83bdb39db40caeff Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 23 Jan 2025 20:30:13 +0000 Subject: [PATCH 16/50] update to the learn.arm.arm href for the rag demo --- .../layouts/partials/demo-components/config-rag.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html index 0f266dce4a..369b63bc5f 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html @@ -22,7 +22,7 @@

RAG Vector Store Details

-This application uses all data on learn.arm.com

This application uses all data on learn.arm.com as the RAG dataset. The content across Learning Paths and Install Guides is segmented into labeled chunks, and vector embeddings are generated. This LLM demo references the FAISS vector store to answer your query.

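The retrieval step described in the RAG demo text above, embedding a query and finding the nearest content chunks, can be sketched in plain Python. The vectors, chunk labels, and cosine scoring below are illustrative stand-ins; the real demo queries FAISS indexes built from learn.arm.com content:

```python
import math

# Toy "vector store": chunk label -> embedding (hypothetical 3-d vectors)
store = {
    "install-guide": [0.9, 0.1, 0.0],
    "learning-path": [0.2, 0.8, 0.1],
    "faq":           [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank chunks by cosine similarity to the query embedding
    ranked = sorted(store, key=lambda c: cosine(store[c], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.0]))  # prints: ['install-guide']
```

The top-ranked chunks would then be passed to the LLM as context alongside the user's question.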
From 14ca709b608accadccecccc89039a6536097055f Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Thu, 23 Jan 2025 20:59:08 +0000
Subject: [PATCH 17/50] Update pull-request template.

---
 .github/pull_request_template.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 72d7958a3f..67d666c1c8 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -2,7 +2,7 @@
 Before submitting a pull request for a new Learning Path, please review [Create a Learning Path](https://learn.arm.com//learning-paths/cross-platform/_example-learning-path/)
 
 - [ ] I have reviewed Create a Learning Path
 
-Please do not include any confidential information in your contribution. This includes confidential microarchitecture details and unannounced product information. No AI tool can be used to generate either content or code when creating a learning path or install guide.
+Please do not include any confidential information in your contribution. This includes confidential microarchitecture details and unannounced product information.
 
 - [ ] I have checked my contribution for confidential information

From 44d3c320f4bfaee28c2564d2ff8e645d6fb12743 Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Thu, 23 Jan 2025 22:46:02 +0000
Subject: [PATCH 18/50] Update gcloud install guide with installation from archive file.
---
 content/install-guides/gcloud.md | 61 +++++++++++++++++++++++++++++---
 1 file changed, 57 insertions(+), 4 deletions(-)

diff --git a/content/install-guides/gcloud.md b/content/install-guides/gcloud.md
index 5e93110f0a..7e237da388 100644
--- a/content/install-guides/gcloud.md
+++ b/content/install-guides/gcloud.md
@@ -11,7 +11,7 @@ minutes_to_complete: 5
 author_primary: Jason Andrews
 multi_install: false
 multitool_install_part: false
-official_docs: https://cloud.google.com/sdk/docs/install-sdk
+official_docs: https://cloud.google.com/sdk/docs/install-sdk#deb
 test_images:
 - ubuntu:latest
 test_maintenance: false
@@ -44,7 +44,9 @@ aarch64
 
 If you see a different result, you are not using an Arm computer running 64-bit Linux.
 
-## How do I download and install for Ubuntu on Arm?
+## How do I download and install gcloud for Ubuntu on Arm?
+
+### Install gcloud using the package manager
 
 The easiest way to install `gcloud` for Ubuntu on Arm is to use the package manager.
 
@@ -62,13 +64,64 @@
 curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
 sudo apt-get update && sudo apt-get install google-cloud-cli -y
 ```
 
-Confirm the executable is available.
+### Install gcloud using the archive file
+
+If you cannot use the package manager, or you get a Python version error such as the one below, you can use the archive file.
+
+```output
+The following packages have unmet dependencies:
+ google-cloud-cli : Depends: python3 (< 3.12) but 3.12.3-0ubuntu2 is to be installed
+```
+
+Download the archive file and extract the contents:
+
+```bash { target="ubuntu:latest" }
+wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-arm.tar.gz
+sudo tar -xzf google-cloud-cli-linux-arm.tar.gz -C /opt
+```
+
+Run the installer:
+
+```bash { target="ubuntu:latest" }
+cd /opt/google-cloud-sdk
+sudo ./install.sh -q
+```
+
+{{% notice Note %}}
+You can change the installation directory from `/opt` to a location of your choice.
+{{% /notice %}}
+
+Add the installation directory to your search path. The installer prints the path to a script you can source to add `gcloud` to your `PATH`.
+
+```output
+==> Source [/opt/google-cloud-sdk/completion.bash.inc] in your profile to enable shell command completion for gcloud.
+==> Source [/opt/google-cloud-sdk/path.bash.inc] in your profile to add the Google Cloud SDK command line tools to your $PATH.
+
+For more information on how to get started, please visit:
+  https://cloud.google.com/sdk/docs/quickstarts
+```
+
+Source the file to include `gcloud` in your search path:
+
+```bash { target="ubuntu:latest" }
+source /opt/google-cloud-sdk/path.bash.inc
+```
+
+Alternatively, you can add the `bin` directory to your path by adding the line below to your `$HOME/.bashrc` file.
+
+```console
+export PATH="/opt/google-cloud-sdk/bin:$PATH"
+```
+
+## Test gcloud
+
+Confirm the executable is available and print the version:
 
 ```bash { target="ubuntu:latest" }
 gcloud -v
 ```
 
-The output should be similar to:
+The output is similar to:
 
 ```output
 Google Cloud SDK 418.0.0

From 1e96e5b61f4775917636d5b2fa85f50e4ced2c16 Mon Sep 17 00:00:00 2001
From: David Mackenzie <93191581+dav-mac@users.noreply.github.com>
Date: Fri, 24 Jan 2025 09:51:58 +0000
Subject: [PATCH 19/50] Update index.html

simplified wording
---
 themes/arm-design-system-hugo-theme/layouts/index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/themes/arm-design-system-hugo-theme/layouts/index.html b/themes/arm-design-system-hugo-theme/layouts/index.html
index a8ccc18647..ba36f8a326 100644
--- a/themes/arm-design-system-hugo-theme/layouts/index.html
+++ b/themes/arm-design-system-hugo-theme/layouts/index.html
@@ -93,7 +93,7 @@

Install Guides

All content is covered by the Creative Commons License{{partial "general-formatting/external-link.html"}}.

-These Learning Paths may contain some AI-generated content.

+Learning Paths may contain AI-generated content.

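The gcloud archive install described in patch 18 above works by putting `/opt/google-cloud-sdk/bin` on the search path, either by sourcing `path.bash.inc` or by editing `.bashrc`. A minimal Python sketch of the same idea, using the guide's assumed install location:

```python
import os

# Prepend the SDK's bin directory (location assumed from the guide's /opt install)
sdk_bin = "/opt/google-cloud-sdk/bin"
os.environ["PATH"] = sdk_bin + os.pathsep + os.environ.get("PATH", "")

# The first PATH entry is now the SDK directory, so `gcloud` resolves from there
print(os.environ["PATH"].split(os.pathsep)[0])  # prints: /opt/google-cloud-sdk/bin
```

This only affects the current process; the shell-based approaches in the patch make the change persistent.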
From 43c78cda509cae9959b96b8abf9bc743618caad8 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 24 Jan 2025 10:58:28 +0000
Subject: [PATCH 20/50] Update _index.md

---
 .../laptops-and-desktops/windows_armpl/_index.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md b/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
index 1b47586d0b..84abfd9dcf 100644
--- a/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
@@ -1,20 +1,16 @@
 ---
 title: Optimize Windows applications using Arm Performance Libraries
 
-draft: true
-cascade:
-  draft: true
-
 minutes_to_complete: 60
 
-who_is_this_for: This is an introductory topic for software developers who want to improve computation performance of Windows on Arm applications using Arm Performance Libraries.
+who_is_this_for: This is an introductory topic for software developers who want to improve the performance of Windows on Arm applications using Arm Performance Libraries.
 
 learning_objectives:
    - Develop Windows on Arm applications using Microsoft Visual Studio.
    - Utilize Arm Performance Libraries to increase application performance.
 
 prerequisites:
-   - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11.
+   - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or a Lenovo Thinkpad X13s running Windows 11.
 author_primary: Odin Shen

From 471dbaad32589843e3c86ce726bc31a85ada3a65 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 24 Jan 2025 11:17:30 +0000
Subject: [PATCH 21/50] Update 2-multithreading.md

---
 .../windows_armpl/2-multithreading.md | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md b/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
index 2d0d85c6f6..d2e082399f 100644
--- a/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
@@ -1,5 +1,5 @@
 ---
-title: Build a simple numerical application and profile the performance
+title: Build a Simple Numerical Application and Profile the Performance
 weight: 3
 
 ### FIXED, DO NOT MODIFY
@@ -10,9 +10,9 @@ layout: learningpathall
 
 This section uses an example application from GitHub to demonstrate the use of Arm Performance Libraries.
 
-Start by installing Git using the [Git install guide](/install-guides/git-woa/) for Windows on Arm.
+Start by installing Git using the [Git Install Guide](/install-guides/git-woa/) for Windows on Arm.
 
-## Clone the example from GitHub
+## Clone the Example from GitHub
 
 The example application renders a rotating 3D cube to perform the calculations on different programming options.
 
 git clone https://github.com/odincodeshen/SpinTheCubeInGDI.git
 ```
 
 {{% notice Note %}}
-The example repository is forked from the [original GitHub repository](https://github.com/marcpems/SpinTheCubeInGDI) and some minor modifications have been made to aid learning.
+The example repository is forked from the [original GitHub repository](https://github.com/marcpems/SpinTheCubeInGDI) with some modifications for demonstration purposes to improve the learning experience.
 {{% /notice %}}
 
-## Spin the cube introduction
+## Spin the Cube Introduction
 
-In Windows File Explorer, double-click `SpinTheCubeInGDI.sln` to open the project in Visual Studio.
+In Windows File Explorer, double-click **SpinTheCubeInGDI.sln** to open the project in Visual Studio.
 
-The source file `SpinTheCubeInGDI.cpp` implements a spinning cube.
+The source file **SpinTheCubeInGDI.cpp** then implements a spinning cube.
 
 The four key components are:
+
 - Shape Generation: Generates the vertices for a sphere using a golden ratio-based algorithm.
 
 - Rotation Calculation: The application uses a rotation matrix to rotate the 3D shape around the X, Y, and Z axes. The rotation angle is incremented over time, creating the animation.

From b6b942c7901e66799c7433ade82c64b2d1334ba9 Mon Sep 17 00:00:00 2001
From: Zach Lasiuk
Date: Fri, 24 Jan 2025 15:39:34 -0600
Subject: [PATCH 22/50] adding analytics to share & CTAs on next steps

---
 .../partials/learning-paths/next-steps.html | 22 +++---
 .../static/js/anonymous-analytics.js        | 67 ++++++++++++++----
 2 files changed, 62 insertions(+), 27 deletions(-)

diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html
index 5c821e9ceb..c3b1263278 100644
--- a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html
+++ b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html
@@ -55,36 +55,36 @@

Share

Share what you've learned.
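The rotation step mentioned in the SpinTheCubeInGDI patch above, a rotation matrix applied around the X, Y, and Z axes with an angle that grows over time, can be sketched for a single axis. This is an illustrative Python version; the application itself implements the math in C++:

```python
import math

def rotate_z(point, angle_rad):
    # Rotation about the Z axis; the app composes similar X and Y rotations:
    # [x']   [cos -sin] [x]
    # [y'] = [sin  cos] [y]
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (x * c - y * s, x * s + y * c, z)

# Rotating (1, 0, 0) by 90 degrees about Z lands on the Y axis
rx, ry, rz = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
print(round(rx, 6), round(ry, 6), round(rz, 6))  # prints: 0.0 1.0 0.0
```

Incrementing the angle each frame and re-applying the matrix to every vertex produces the spinning animation the patch describes.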