diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 72d7958a3f..67d666c1c8 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -2,7 +2,7 @@ Before submitting a pull request for a new Learning Path, please review [Create a Learning Path](https://learn.arm.com//learning-paths/cross-platform/_example-learning-path/) - [ ] I have reviewed Create a Learning Path -Please do not include any confidential information in your contribution. This includes confidential microarchitecture details and unannounced product information. No AI tool can be used to generate either content or code when creating a learning path or install guide. +Please do not include any confidential information in your contribution. This includes confidential microarchitecture details and unannounced product information. - [ ] I have checked my contribution for confidential information diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml index 2e8c95a066..b227b7609a 100644 --- a/.github/workflows/deploy.yml +++ b/.github/workflows/deploy.yml @@ -26,7 +26,7 @@ env: jobs: build_and_deploy: # The type of runner that the job will run on - runs-on: ubuntu-latest + runs-on: ubuntu-24.04-arm permissions: id-token: write contents: read @@ -59,7 +59,7 @@ jobs: run: | hugo --minify cp learn-image-sitemap.xml public/learn-image-sitemap.xml - bin/pagefind --site "public" + bin/pagefind.aarch64 --site "public" env: HUGO_LLM_API: ${{ secrets.HUGO_LLM_API }} HUGO_RAG_API: ${{ secrets.HUGO_RAG_API }} diff --git a/.wordlist.txt b/.wordlist.txt index b9e309625b..3fb98cba04 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -3558,4 +3558,17 @@ threadCount threadNum useAPL vvenc -workspaces \ No newline at end of file +workspaces +ETDump +ETRecord +FAISS +IVI +PDFs +Powertrain +SpinTheCubeInGDI +TaaS +cloudsdk +highcpu +proj +sln +uploader \ No newline at end of file diff --git a/README.md b/README.md index fb65046964..f2934b727d 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ The Learning Paths created here are maintained by Arm and the Arm software devel All contributions are welcome as long as they relate to software development for the Arm architecture. * Write a Learning Path (or improve existing content) - * Fork this repo and submit pull requests; follow the step by step instructions in [Create a Learning Path](https://learn.arm.com//learning-paths/cross-platform/_example-learning-path/) on the website. + * Fork this repo and submit pull requests; follow the step by step instructions in [Create a Learning Path](https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/) on the website. * Ideas for a new Learning Path * Create a new GitHub idea under the [Discussions](https://github.com/ArmDeveloperEcosystem/arm-learning-paths/discussions) area in this GitHub repo. * Log a code issue (or other general issues) diff --git a/content/install-guides/acfl.md b/content/install-guides/acfl.md index cf070251b5..4786b12d2a 100644 --- a/content/install-guides/acfl.md +++ b/content/install-guides/acfl.md @@ -142,18 +142,20 @@ install takes place **after** ACfL, you will no longer be able to fully uninstall ACfL. {{% /notice %}} -## Download and install using System Packages - Ubuntu Linux +## Download and install using System Packages + +### Ubuntu Linux 20.04 and 22.04 Arm Compiler for Linux is available to install with the Ubuntu system package manager `apt` command. 
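+Before adding the repository, you can confirm that you are on a supported 64-bit Arm Ubuntu release. As a quick check (assuming the `lsb_release` utility is available, as it is on standard Ubuntu installations):
+
+```bash { target="ubuntu:latest" }
+uname -m
+lsb_release -rs
+```
+
+The first command should print `aarch64` and the second should print `20.04` or `22.04`.
+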
-### Setup the ACfL package repository: +#### Set up the ACfL package repository Add the ACfL `apt` package repository to your Ubuntu 20.04 or 22.04 system: ```bash { target="ubuntu:latest" } sudo apt update -sudo apt install -y curl -source /etc/os-release +sudo apt install -y curl environment-modules python3 libc6-dev +. /etc/os-release curl "https://developer.arm.com/packages/ACfL%3A${NAME}-${VERSION_ID/%.*/}/${VERSION_CODENAME}/Release.key" | sudo tee /etc/apt/trusted.gpg.d/developer-arm-com.asc echo "deb https://developer.arm.com/packages/ACfL%3A${NAME}-${VERSION_ID/%.*/}/${VERSION_CODENAME}/ ./" | sudo tee /etc/apt/sources.list.d/developer-arm-com.list sudo apt update @@ -161,7 +163,7 @@ sudo apt update The ACfL Ubuntu package repository is now ready to use. -### Install ACfL +#### Install ACfL Download and install Arm Compiler for Linux with: @@ -169,6 +171,58 @@ Download and install Arm Compiler for Linux with: sudo apt install acfl ``` +### Amazon Linux 2023 + +Arm Compiler for Linux is available to install with either the `dnf` or `yum` system package manager. + +#### Install ACfL from the Amazon Linux 2023 package repository + +Install ACfL and prerequisites from the Amazon Linux 2023 `rpm` package repository with `dnf`: + +```bash +sudo dnf update +sudo dnf install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo dnf config-manager --add-repo https://developer.arm.com/packages/ACfL%3AAmazonLinux-2023/latest/ACfL%3AAmazonLinux-2023.repo +sudo dnf install acfl +``` + +Or using the equivalent `yum` commands: + +```bash +sudo yum update +sudo yum install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo yum config-manager --add-repo https://developer.arm.com/packages/ACfL%3AAmazonLinux-2023/latest/ACfL%3AAmazonLinux-2023.repo +sudo yum install acfl +``` + +The ACfL tools are now ready to use. + +### Red Hat Enterprise Linux (RHEL) 9 + +Arm Compiler for Linux is available to install with either the `dnf` or `yum` system package manager. + +#### Install ACfL from the RHEL 9 package repository + +Install ACfL and prerequisites from the RHEL 9 `rpm` package repository with `dnf`: + +```bash +sudo dnf update +sudo dnf install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo dnf config-manager --add-repo https://developer.arm.com/packages/ACfL%3ARHEL-9/standard/ACfL%3ARHEL-9.repo +sudo dnf install acfl +``` + +Or using the equivalent `yum` commands: + +```bash +sudo yum update +sudo yum install 'dnf-command(config-manager)' procps psmisc make environment-modules +sudo yum config-manager --add-repo https://developer.arm.com/packages/ACfL%3ARHEL-9/standard/ACfL%3ARHEL-9.repo +sudo yum install acfl +``` + +The ACfL tools are now ready to use. + ### Set up environment Arm Compiler for Linux uses environment modules to dynamically modify your user environment. Refer to the [Environment Modules documentation](https://lmod.readthedocs.io/en/latest/#id) for more information. @@ -178,17 +232,17 @@ Set up the environment, for example, in your `.bashrc` and add module files. #### Ubuntu Linux: ```bash { target="ubuntu:latest" } -echo "source /usr/share/modules/init/bash" >> ~/.bashrc +echo ". /usr/share/modules/init/bash" >> ~/.bashrc echo "module use /opt/arm/modulefiles" >> ~/.bashrc -source ~/.bashrc +. ~/.bashrc ``` -#### Red Hat Linux: +#### Red Hat or Amazon Linux: ```bash { target="fedora:latest" } -echo "source /usr/share/Modules/init/bash" >> ~/.bashrc +echo ". 
/usr/share/Modules/init/bash" >> ~/.bashrc echo "module use /opt/arm/modulefiles" >> ~/.bashrc -source ~/.bashrc +. ~/.bashrc ``` To list available modules: @@ -217,7 +271,7 @@ Arm Compiler for Linux is available with the [Spack](https://spack.io/) package See the [Arm Compiler for Linux and Arm PL now available in Spack](https://community.arm.com/arm-community-blogs/b/high-performance-computing-blog/posts/arm-compiler-for-linux-and-arm-pl-now-available-in-spack) blog for full details. -### Setup Spack +### Set up Spack Clone the Spack repository and add `bin` directory to the path: @@ -248,7 +302,7 @@ If you wish to install just the Arm Performance Libraries, use: spack install armpl-gcc ``` -### Setup environment +### Set up environment Use the commands below to set up the environment: ```console diff --git a/content/install-guides/cmake.md b/content/install-guides/cmake.md index 96ff5081bb..4244a743e2 100644 --- a/content/install-guides/cmake.md +++ b/content/install-guides/cmake.md @@ -34,7 +34,7 @@ This article provides quick instructions to install CMake for Arm Linux distribu ### How do I download and install CMake for Windows on Arm? -Confirm you are using a Windows on Arm device such as Windows Dev Kit 2023 or a laptop such as Lenovo ThinkPad X13s or Surface Pro 9 with 5G. +Confirm you are using a Windows on Arm device such as the Lenovo ThinkPad X13s or Surface Pro 9 with 5G. ### How do I download and install CMake for Arm Linux distributions? diff --git a/content/install-guides/gcloud.md b/content/install-guides/gcloud.md index 5e93110f0a..7e237da388 100644 --- a/content/install-guides/gcloud.md +++ b/content/install-guides/gcloud.md @@ -11,7 +11,7 @@ minutes_to_complete: 5 author_primary: Jason Andrews multi_install: false multitool_install_part: false -official_docs: https://cloud.google.com/sdk/docs/install-sdk +official_docs: https://cloud.google.com/sdk/docs/install-sdk#deb test_images: - ubuntu:latest test_maintenance: false @@ -44,7 +44,9 @@ aarch64 If you see a different result, you are not using an Arm computer running 64-bit Linux. -## How do I download and install for Ubuntu on Arm? +## How do I download and install gcloud for Ubuntu on Arm? + +### Install gcloud using the package manager The easiest way to install `gcloud` for Ubuntu on Arm is to use the package manager. @@ -62,13 +64,64 @@ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - sudo apt-get update && sudo apt-get install google-cloud-cli -y ``` -Confirm the executable is available. +### Install gcloud using the archive file + +If you cannot use the package manager or you get a Python version error such as the one below you can use the archive file. + +```output +The following packages have unmet dependencies: + google-cloud-cli : Depends: python3 (< 3.12) but 3.12.3-0ubuntu2 is to be installed +``` + +Download the archive file and extract the contents: + +```bash { target="ubuntu:latest" } +wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-arm.tar.gz +sudo tar -xzf google-cloud-cli-linux-arm.tar.gz -C /opt +``` + +Run the installer: + +```bash { target="ubuntu:latest" } +cd /opt/google-cloud-sdk +sudo ./install.sh -q +``` + +{{% notice Note %}} +You can change the installation directory from `/opt` to a location of your choice. +{{% /notice %}} + +Add the installation directory to your search path. The installer will print the path to a script you can source to add `gcloud` to your search path. 
+ +```output +==> Source [/opt/google-cloud-sdk/completion.bash.inc] in your profile to enable shell command completion for gcloud. +==> Source [/opt/google-cloud-sdk/path.bash.inc] in your profile to add the Google Cloud SDK command line tools to your $PATH. + +For more information on how to get started, please visit: + https://cloud.google.com/sdk/docs/quickstarts +``` + +Source the file to include `gcloud` in your search path: + +```bash { target="ubuntu:latest" } +source /opt/google-cloud-sdk/path.bash.inc +``` + +Alternatively, you can add the `bin` directory to your path by adding the line below to your `$HOME/.bashrc` file. + +```console +export PATH="/opt/google-cloud-sdk/bin:$PATH" +``` + +## Test gcloud + +Confirm the executable is available and print the version: ```bash { target="ubuntu:latest" } gcloud -v ``` -The output should be similar to: +The output is similar to: ```output Google Cloud SDK 418.0.0 diff --git a/content/install-guides/windows-perf-vs-extension.md b/content/install-guides/windows-perf-vs-extension.md index 79d0dd668a..ca7f7df0e1 100644 --- a/content/install-guides/windows-perf-vs-extension.md +++ b/content/install-guides/windows-perf-vs-extension.md @@ -41,13 +41,13 @@ The WindowsPerf GUI extension is composed of several key features, each designed - **Output Logging**: All commands executed through the GUI are logged, ensuring transparency and supporting performance analysis. - **Sampling UI**: Customize your sampling experience by selecting events, setting frequency and duration, choosing programs for sampling, and comprehensively analyzing results. See screenshot below. -![Sampling preview #center](../_images/wperf-vs-extension-sampling-preview.png "Sampling settings UI Overview") +![Sampling preview #center](/install_guides/_images/wperf-vs-extension-sampling-preview.png "Sampling settings UI Overview") - **Counting Settings UI**: Build a `wperf stat` command from scratch using the configuration interface, then view the output in the IDE or open it with Windows Performance Analyzer (WPA). See screenshot below. -![Counting preview #center](../_images/wperf-vs-extension-counting-preview.png "Counting settings UI Overview") +![Counting preview #center](/install_guides/_images/wperf-vs-extension-counting-preview.png "Counting settings UI Overview") ## Before you begin @@ -69,7 +69,7 @@ To install the WindowsPerf Visual Studio Extension from Visual Studio: 4. Click on the search bar (Ctrl+L) and type `WindowsPerf`. 5. Click on the **Install** button and restart Visual Studio. -![WindowsPerf install page #center](../_images/wperf-vs-extension-install-page.png) +![WindowsPerf install page #center](/install_guides/_images/wperf-vs-extension-install-page.png) ### Installation from GitHub diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md index f9dcd7cf00..d57c28a609 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md @@ -6,12 +6,12 @@ weight: 2 layout: learningpathall --- -This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be put on even the smallest of devices, making Edge AI more accessible and efficient. 
You will learn how to setup on your host machine and target device to facilitate compilation and ensure smooth integration across all devices. +This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be used on even the smallest devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across devices. In this section, you get an overview of the domain with real-life use-cases and available devices. ## Overview -TinyML represents a significant shift in machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and less processing capabilities. TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems. +TinyML represents a significant shift in machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and less processing capabilities. TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems. ### Benefits and applications @@ -42,7 +42,7 @@ TinyML is being deployed across multiple industries, enhancing everyday experien ### Examples of Arm-based devices -There are many Arm-based off-the-shelf devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive. +There are many Arm-based devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive. #### Raspberry Pi 4 and 5 @@ -64,6 +64,6 @@ The Arduino Nano, equipped with a suite of sensors, supports TinyML and is ideal In addition to hardware, there are software platforms that can help you build TinyML applications. -Edge Impulse platform offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards. +Edge Impulse offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards. Now that you have an overview of the subject, move on to the next section where you will set up an environment on your host machine. \ No newline at end of file diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md index 50be10a4e9..020c254f54 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md @@ -14,13 +14,13 @@ learning_objectives: - Understand the benefits of deploying AI models on Arm-based edge devices. 
- Select Arm-based devices for TinyML. - Install and configure a TinyML development environment. - - Perform best practices for ensuring optimal performance on constrained edge devices. + - Apply best practices for ensuring optimal performance on constrained edge devices. prerequisites: - Basic knowledge of machine learning concepts. - A Linux host machine or VM running Ubuntu 22.04 or higher. - - A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) **or** an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP). + - A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) or an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP). author_primary: Dominica Abena O. Amanfo diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md index bd83caede5..4406277e64 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md @@ -9,6 +9,14 @@ further_reading: title: TinyML Brings AI to Smallest Arm Devices link: https://newsroom.arm.com/blog/tinyml type: blog + - resource: + title: Arm Compiler for Embedded + link: https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded + type: documentation + - resource: + title: Arm GNU Toolchain + link: https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain + type: documentation diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md index 560ea92f0f..ebd7042ba6 100644 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md +++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md @@ -1,6 +1,6 @@ --- # User change -title: "Build a Simple PyTorch Model" +title: "Build a simple PyTorch model" weight: 7 # 1 is first, 2 is second, etc. @@ -8,8 +8,7 @@ weight: 7 # 1 is first, 2 is second, etc. layout: "learningpathall" --- -TODO connect this part with the FVP/board? -With our environment ready, you can create a simple program to test the setup. +With the development environment ready, you can create a simple PyTorch model to test the setup. This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between. @@ -62,7 +61,7 @@ print("Model successfully exported to simple_nn.pte") Run the model from the Linux command line: -```console +```bash python3 simple_nn.py ``` @@ -76,7 +75,7 @@ The model is saved as a .pte file, which is the format used by ExecuTorch for de Run the ExecuTorch version, first build the executable: -```console +```bash # Clean and configure the build system (rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..) 
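+# Note: the parentheses above run the clean/configure step in a subshell, so your working directory is unchanged afterwards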
@@ -84,7 +83,7 @@ Run the ExecuTorch version, first build the executable:
 cmake --build cmake-out --target executor_runner -j$(nproc)
 ```
 
-You see the build output and it ends with:
+You will see the build output, which ends with:
 
 ```output
 [100%] Linking CXX executable executor_runner
@@ -93,7 +92,7 @@ You see the build output and it ends with:
 
 When the build is complete, run the executor_runner with the model as an argument:
 
-```console
+```bash
 ./cmake-out/executor_runner --model_path simple_nn.pte
 ```
 
@@ -112,3 +111,5 @@ Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723])
 
 When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
+
+You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch neural network.
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
index 4372f97265..b364289cb0 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md
@@ -8,7 +8,7 @@ weight: 3
 layout: "learningpathall"
 ---
 
-In this section, you will prepare a development environment to compile the model. These instructions have been tested on Ubuntu 22.04, 24.04 and on Windows Subsystem for Linux (WSL).
+In this section, you will prepare a development environment to compile a machine learning model. These instructions have been tested on Ubuntu 22.04 and 24.04, and on Windows Subsystem for Linux (WSL).
 
 ## Install dependencies
 
@@ -27,7 +27,7 @@ Create a Python virtual environment using `python venv`.
 python3 -m venv $HOME/executorch-venv
 source $HOME/executorch-venv/bin/activate
 ```
-The prompt of your terminal now has (executorch) as a prefix to indicate the virtual environment is active.
+The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
 
 ## Install Executorch
 
@@ -40,11 +40,11 @@ git clone https://github.com/pytorch/executorch.git
 cd executorch
 ```
 
-Run a few commands to set up the ExecuTorch internal dependencies.
+Run the commands below to set up the ExecuTorch internal dependencies.
+
 ```bash
 git submodule sync
 git submodule update --init
-
 ./install_requirements.sh
 ```
 
@@ -59,6 +59,8 @@ pkill -f buck
 
 ## Next Steps
 
-If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
+Your next step depends on the hardware you have.
+
+If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
-If you have the Grove board proceed o to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
\ No newline at end of file
+If you don't have the Grove Vision AI Module, you can use the Corstone-300 FVP instead; proceed to [Set up the Corstone-300 FVP](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
index f43e5d74ac..95c04d7397 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md
@@ -10,22 +10,26 @@ layout: "learningpathall"
 
 ## Corstone-300 FVP Setup for ExecuTorch
 
-Navigate to the Arm examples directory in the ExecuTorch repository.
+Navigate to the Arm examples directory in the ExecuTorch repository and configure the Fixed Virtual Platform (FVP).
+
 ```bash
 cd $HOME/executorch/examples/arm
 ./setup.sh --i-agree-to-the-contained-eula
 ```
 
+Set the environment variables for the FVP.
+
 ```bash
 export FVP_PATH=$(pwd)/ethos-u-scratch/FVP-corstone300/models/Linux64_GCC-9.3
 export PATH=$FVP_PATH:$PATH
 ```
-Test that the setup was successful by running the `run.sh` script.
+
+Confirm the installation was successful by running the `run.sh` script.
 
 ```bash
 ./run.sh
 ```
 
-TODO connect this part to simple_nn.py part?
+You will see a number of examples run on the FVP.
 
-You will see a number of examples run on the FVP. This means you can proceed to the next section to test your environment setup.
+This confirms the installation, and you can proceed to the next section, [Build a simple PyTorch model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/).
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
index 27c9c6ff7e..c3dbc2b581 100644
--- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
+++ b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md
@@ -8,18 +8,16 @@ weight: 6 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---
 ## Before you begin
-Only follow this part of the tutorial if you have the board. Due to its constrained environment, we'll focus on lightweight, optimized tools and models (which will be introduced in the next learning path).
+This section requires the Grove Vision AI Module. Due to its constrained environment, we'll focus on lightweight, optimized tools and models.
 
 ### Compilers
 
-The examples can be built with [Arm Compiler for Embedded](https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded) or [Arm GNU Toolchain](https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain).
+The examples can be built with Arm Compiler for Embedded or Arm GNU Toolchain.
- -Use the install guides to install the compilers on your **host machine**: +Use the install guides to install each compiler on your host machine: - [Arm Compiler for Embedded](/install-guides/armclang/) -- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu) - +- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu/) ## Board Setup @@ -30,28 +28,18 @@ Hardware overview : [Image credits](https://wiki.seeedstudio.com/grove_vision_ai 1. Download and extract the latest Edge Impulse firmware Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/seeed-grove-vision-ai-module-v2.zip). - 2. Connect the Grove - Vision AI Module V2 to your computer using the USB-C cable. ![Board connection](Connect.png) +{{% notice Note %}} +Ensure the board is properly connected and recognized by your computer. +{{% /notice %}} -3. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device. - -```console -./flash_linux.sh -``` - -4. Configure Edge Impulse for the board -in your terminal, run: - -```console -edge-impulse-daemon -``` -Follow the prompts to log in. - -5. If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse. +3. In the extracted Edge Impulse firmware, locate and run the `flash_linux.sh` script to flash your device. + ```console + ./flash_linux.sh + ``` -## Next Steps -1. Go to [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup. +Continue to the next page to build a simple PyTorch model. diff --git a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md b/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md deleted file mode 100644 index 57b7585970..0000000000 --- a/content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Troubleshooting and Best Practices -weight: 8 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -TODO can these be incorporated in the LP? - -## Troubleshooting -- If you encounter permission issues, try running the commands with sudo. -- Ensure your Grove - Vision AI Module V2 is properly connected and recognized by your computer. -- If Edge Impulse CLI fails to detect your device, try unplugging, hold the **Boot button** and replug the USB cable. Release the button once you replug. - -## Best Practices -- Always cross-compile your code on the host machine to ensure compatibility with the target Arm device. -- Utilize model quantization techniques to optimize performance on constrained devices like the Grove - Vision AI Module V2. -- Regularly update your development environment and tools to benefit from the latest improvements in TinyML and edge AI technologies - -You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network. 
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md
index 14729b1732..9f32528b0f 100644
--- a/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/mlek/_index.md
@@ -3,22 +3,22 @@ title: Build and run the Arm Machine Learning Evaluation Kit examples
 
 minutes_to_complete: 30
 
-who_is_this_for: This is an introductory topic for embedded software developers interested in learning about machine learning.
+who_is_this_for: This is an introductory topic for embedded software developers interested in machine learning applications.
 
 learning_objectives: 
     - Build examples from Machine Learning Evaluation Kit (MLEK)
-    - Run the examples on Corstone-320 FVP or Virtual Hardware
+    - Run the examples on an Arm Ecosystem FVP
 
 prerequisites:
     - Some familiarity with embedded programming
-    - Either a Linux machine running Ubuntu, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware)
+    - A Linux host machine running Ubuntu
 
 author_primary: Ronan Synnott
 
 ### RS: Learning Path hidden until AWS instance updated
-draft: true
+draft: false
 cascade:
-    draft: true
+    draft: false
 
 ### Tags
diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/build.md b/content/learning-paths/embedded-and-microcontrollers/mlek/build.md
index 98b0e289b8..47663fd459 100644
--- a/content/learning-paths/embedded-and-microcontrollers/mlek/build.md
+++ b/content/learning-paths/embedded-and-microcontrollers/mlek/build.md
@@ -13,11 +13,7 @@ You can use the MLEK source code to build sample applications and run them on th
 
 ## Before you begin
 
-You can use your own Ubuntu Linux host machine or use [Arm Virtual Hardware (AVH)](https://www.arm.com/products/development-tools/simulation/virtual-hardware) for this Learning Path.
-
-The Ubuntu version should be 20.04 or 22.04. These instructions have been tested on the `x86_64` architecture. You will need a way to interact visually with your machine to run the FVP, because it opens graphical windows for input and output from the software applications.
-
-If you want to use Arm Virtual Hardware the [Arm Virtual Hardware install guide](/install-guides/avh#corstone) provides setup instructions.
+It is recommended to use an Ubuntu Linux host machine. The Ubuntu version should be 20.04 or 22.04. These instructions have been tested on the `x86_64` architecture.
 
 ## Build the example application
 
@@ -52,9 +48,6 @@ You can review the installation guides for further details.
 
 {{% /notice %}}
 
-
-Both compilers are pre-installed in Arm Virtual Hardware.
-
 ### Clone the repository
 
 Clone the ML Evaluation Kit repository, and navigate into the new directory:
@@ -69,76 +62,36 @@ git submodule update --init
 
 The default build is Ethos-U55 and Corstone-300. The default build for Ethos-U85 is Corstone-320. Use the `npu-config-name` flag to set Ethos-U85.
 
-The default compiler is `gcc`, but `armclang` can also be used. Number after `ethos-u85-*` is number of MACs, 128-2048 (2^n).
+The default compiler is `gcc`, but `armclang` can also be used. The number after `ethos-u85-*` is the number of MACs, 128-2048 (2^n).
+
+Use `--make-jobs` to specify the `make -j` value.
 
 You can select either compiler to build applications. You can also try them both and compare the results.
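+
+As a minimal example, relying on the defaults described above, running the script with no arguments builds the Ethos-U55/Corstone-300 configuration with `gcc`:
+
+```
+./build_default.py
+```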
-- Build with Arm GNU Toolchain (`gcc`) +- Build with Arm GNU Toolchain (`gcc`): ``` -./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu +./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu --make-jobs 8 ``` -- Build with Arm Compiler for Embedded (`armclang`) +- Build with Arm Compiler for Embedded (`armclang`): ```console -./build_default.py --npu-config-name ethos-u85-256 --toolchain arm -``` - -The build will take a few minutes. - -When the build is complete, you will find the examples (`.axf` files) in the `cmake-build-*/bin` directory. The `cmake-build` directory names are specific to the compiler used and Ethos-U85 configuration. Verify that the files have been created by observing the output of the `ls` command - -```bash -ls cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ +./build_default.py --npu-config-name ethos-u85-256 --toolchain arm --make-jobs 8 ``` -The next step is to install the FVP and run it with these example audio clips. - - -## Corstone-320 FVP {#fvp} - -This section describes installation of the Corstone-320 to run on your local machine. If you are using Arm Virtual Hardware, that comes with the Corstone-300 FVP pre-installed, and you can move on to the next section. You can review Arm's full FVP offer and general installation steps in the [Fast Model and Fixed Virtual Platform](/install-guides/fm_fvp) install guides. +{{% notice Tip %}} +Use `./build_default.py --help` for additional information. -{{% notice Note %}} -The rest of the steps for the Corstone-320 need to be run in a new terminal window. {{% /notice %}} -Open a **new terminal window** and download the Corstone-320 archive. - -```bash -cd $HOME -wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Corstone-IoT/Corstone-320/FVP_Corstone_SSE-320_11.27_25_Linux64.tgz -``` - -Unpack it with `tar`, run the setup script and export the binary paths to the `PATH` environment variable. - -```bash -tar -xf FVP_Corstone_SSE-320_11.27_25_Linux64.tgz -./FVP_Corstone_SSE-320.sh --i-agree-to-the-contained-eula --no-interactive -q -export PATH=$HOME/FVP_Corstone_SSE-320/models/Linux64_GCC-9.3:$PATH -``` - -The FVP requires an additional dependency, `libpython3.9.so.1.0`, which can be installed using a script. Note that this will tinkle with the python installation for the current terminal window, so make sure to open a new one for the next step. - -```bash -source $HOME/FVP_Corstone_SSE-320/scripts/runtime.sh -``` +The build will take a few minutes. -Verify that the FVP was successfully installed by comparing your output from below command. +When the build is complete, you will find the examples (`.axf` files) in the `cmake-build-*/bin` directory. The `cmake-build` directory names are specific to the compiler used and Ethos-U85 configuration. Verify that the files have been created by observing the output of the `ls` command ```bash -FVP_Corstone_SSE-320 -``` - -```output -telnetterminal0: Listening for serial connection on port 5000 -telnetterminal1: Listening for serial connection on port 5001 -telnetterminal2: Listening for serial connection on port 5002 -telnetterminal5: Listening for serial connection on port 5003 - +ls cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ ``` - -Now you are ready to test the application with the FVP. +The next step is to install the FVP and run the built example applications. 
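+
+Optionally, you can sanity-check one of the built images. Because `.axf` files are ELF images, the standard `file` utility (a quick, optional check) should report a 32-bit Arm ELF executable:
+
+```bash
+file cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf
+```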
diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md b/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md new file mode 100644 index 0000000000..f500234481 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/fvp.md @@ -0,0 +1,70 @@ +--- +# User change +title: "Install Arm Ecosystem FVP" + +weight: 3 # 1 is first, 2 is second, etc. + +# Do not modify these elements +layout: "learningpathall" +--- +## Corstone-320 FVP {#fvp} + +This section describes installation of the [Corstone-320 FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms/IoT%20FVPs) to run on your local machine. Similar instructions would apply for other platforms. + +Arm provides a selection of free to use Fixed Virtual Platforms (FVPs) that can be downloaded from the [Arm Developer](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms#Downloads) website. + +You can review Arm's full FVP offering and general installation steps in the [Fast Model and Fixed Virtual Platform](/install-guides/fm_fvp) install guide. + +{{% notice Note %}} +It is recommended to perform these steps in a new terminal window. +{{% /notice %}} + +Download the Corstone-320 Ecosystem FVP archive: + +```bash +cd $HOME +wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Corstone-IoT/Corstone-320/FVP_Corstone_SSE-320_11.27_25_Linux64.tgz +``` + +Unpack it with `tar`, run the installation script, and add the path to the FVP executable to the `PATH` environment variable. + +```bash +tar -xf FVP_Corstone_SSE-320_11.27_25_Linux64.tgz + +./FVP_Corstone_SSE-320.sh --i-agree-to-the-contained-eula --no-interactive -q + +export PATH=$HOME/FVP_Corstone_SSE-320/models/Linux64_GCC-9.3:$PATH +``` + +The FVP requires an additional dependency, `libpython3.9.so.1.0`, which can be installed using a supplied script. + +```bash +source $HOME/FVP_Corstone_SSE-320/scripts/runtime.sh +``` + +Run the executable: + +```bash +FVP_Corstone_SSE-320 +``` + +You will observe output similar to the following: + +```output +telnetterminal0: Listening for serial connection on port 5000 +telnetterminal1: Listening for serial connection on port 5001 +telnetterminal2: Listening for serial connection on port 5002 +telnetterminal5: Listening for serial connection on port 5003 +``` + +If you encounter graphics driver errors, you can disable the development board and LCD visualization with additional command options: + +```bash +FVP_Corstone_SSE-320 \ + -C mps4_board.visualisation.disable-visualisation=1 \ + -C vis_hdlcd.disable_visualisation=1 +``` + +Stop the executable with `Ctrl+C`. + +Now you are ready to run the MLEK applications on the FVP. diff --git a/content/learning-paths/embedded-and-microcontrollers/mlek/run.md b/content/learning-paths/embedded-and-microcontrollers/mlek/run.md index 714002ad57..387dbde15e 100644 --- a/content/learning-paths/embedded-and-microcontrollers/mlek/run.md +++ b/content/learning-paths/embedded-and-microcontrollers/mlek/run.md @@ -2,34 +2,32 @@ # User change title: "Run the examples on the FVP" -weight: 3 # 1 is first, 2 is second, etc. +weight: 4 # 1 is first, 2 is second, etc. # Do not modify these elements layout: "learningpathall" --- ## Run an example -Now you are ready to combine the FVP installation and the example application. Navigate to the evaluation kit repository. +Navigate to the evaluation kit repository. 
```bash cd ml-embedded-evaluation-kit/ ``` -To run an example on the Corstone-320 FVP target, launch the FVP executable with `-a` to specify the software application. +The built examples (`.axf` files) will be located in a `cmake-*/bin` folder based on the build configuration used. -To run the key word spotting example `ethos-u-kws.axf` compiled with `gcc` use one of the two options below. +Navigate into that folder, and list the images. For example: -## Option 1: On your computer with the FVP installed +```bash +cd cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ + +ls *.axf +``` -Run the FVP. +Use `-a` to specify the application to load to the FVP. -```console -FVP_Corstone_SSE-320 \ - -C mps4_board.subsystem.ethosu.num_macs=256 \ - -C mps4_board.visualisation.disable-visualisation=1 \ - -C vis_hdlcd.disable_visualisation=1 \ - -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -``` +Use `-C mps4_board.subsystem.ethosu.num_macs` to configure the Ethos-U component of the model. {{% notice Note %}} The number of NPU MACs specified in the build MUST match the number specified in the FVP. Else an error similar to the below will be emitted. @@ -39,81 +37,87 @@ E: NPU config mismatch. npu.macs_per_cc=E: NPU config mismatch.. ``` {{% /notice %}} -## Option 2: On Arm Virtual Hardware +You can list all available parameters by running the FVP executable with the `--list-params` option, for example: ```console -VHT_Corstone_SSE-300_Ethos-U55 -a cmake-build-mps3-sse-300-ethos-u55-128-gnu/bin/ethos-u-kws.axf +FVP_Corstone_SSE-320 --list-params > parameters.txt ``` -When the example is running, a telnet instance will open allowing you to interact with the example. -{{% notice Note %}} -It may take some time to initialize the terminal, please be patient. -If you see warnings regarding loading the image, these can likely be ignored. -{{% /notice %}} +### Run the application -## Interact with the application +```console +FVP_Corstone_SSE-320 \ + -C mps4_board.subsystem.ethosu.num_macs=256 \ + -C mps4_board.visualisation.disable-visualisation=1 \ + -C vis_hdlcd.disable_visualisation=1 \ + -a ethos-u-kws.axf +``` -Use the menu to control the application. For the key word spotting application enter 1 to classify the next audio clip. +If adding configuration options becomes cumbersome, it can be easier to specify them in a configuration file (remove the `-C` option) and then use that on the command line (`-f`). -![terminal #center](term.png) +#### config.txt +``` +mps4_board.subsystem.ethosu.num_macs=256 +mps4_board.visualisation.disable-visualisation=1 +vis_hdlcd.disable_visualisation=1 +``` -The results of the classification will appear in the visualization window of the FVP. +The command line becomes: +```console +FVP_Corstone_SSE-320 -f config.txt -a ethos-u-kws.axf +``` -The display shows a 98% chance of the audio clips sound was down. +The application executes and identifies words spoken within audio files. -![visualization #center](vis.png) +Repeat with any of the other built applications. -End the simulation by pressing Control-C in the terminal where to started the FVP. +Full instructions are provided in the evaluation kit [documentation](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+/HEAD/docs/quick_start.md). -You now have the ML Evaluation Kit examples running. Experiment with the different examples provided. 
-## Addendum: Setting model parameters
+## Addendum: Speed up FVP execution
 
-You can specify additional parameters to configure certain aspects of the simulated Corstone-300.
+By default, the examples are built with Ethos-U timing enabled. This provides benchmarking information, but the result is that the FVP executes relatively slowly.
 
-### List parameters
+The build system defines a macro, `-DETHOS_U_NPU_TIMING_ADAPTER_ENABLED`, to control this.
 
-List the available parameters by running the FVP executable with the `--list-params` option, for example:
+Modify the command `build_default.py` passes to `cmake` to include this setting (`OFF`). Search for `cmake_command` and modify as follows:
 
-```console
-FVP_Corstone_SSE-320 --list-params > parameters.txt
+#### build_default.py
 ```
-
-{{% notice Note %}}
-If you are running with Arm Virtual Hardware substitute `VHT_Corstone_SSE-300_Ethos-U55` as the executable name.
-{{% /notice %}}
-
-Open the file `parameters.txt` to see all of the possible parameters and the default values.
-
-### Set parameters
-
-Individual parameters can be set with the `-C` command option.
-
-For example, to put the Ethos-U component into fast execution mode:
-
-```console
-FVP_Corstone_SSE-320 -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -C mps4_board.subsystem.ethosu.extra_args="--fast"
+cmake_command = (
+    f"{cmake_path} -B {build_dir} -DTARGET_PLATFORM={target_platform}"
+    f" -DTARGET_SUBSYSTEM={target_subsystem}"
+    f" -DCMAKE_TOOLCHAIN_FILE={cmake_toolchain_file}"
+    f" -DETHOS_U_NPU_ID={ethos_u_cfg.processor_id}"
+    f" -DETHOS_U_NPU_CONFIG_ID={ethos_u_cfg.config_id}"
+    " -DTENSORFLOW_LITE_MICRO_CLEAN_DOWNLOADS=ON"
+    " -DETHOS_U_NPU_TIMING_ADAPTER_ENABLED=OFF"
+)
 ```
-{{% notice Note %}}
-Do not use fast execution mode whilst benchmarking performance.
-{{% /notice %}}
 
-To set multiple parameters it may be easier to list them in a text file (without `-C`) and use `-f` to specify the file.
+Rebuild the applications as before, for example:
+```
+./build_default.py --npu-config-name ethos-u85-256 --toolchain gnu --make-jobs 8
+```
 
-For example, use a text editor to create a file named `options.txt` with the contents:
+Add an additional configuration option (`mps4_board.subsystem.ethosu.extra_args`) to the FVP command line:
 
-```console
+#### config.txt
+```
+mps4_board.subsystem.ethosu.num_macs=256
 mps4_board.visualisation.disable-visualisation=1
+vis_hdlcd.disable_visualisation=1
 mps4_board.subsystem.ethosu.extra_args="--fast"
 ```
 
-Run the FVP with the `-f` option and the `options.txt` file:
+Run the application again, and notice how much faster execution completes.
 
 ```console
-FVP_Corstone_SSE-320 -a cmake-build-mps4-sse-320-ethos-u85-256-gnu/bin/ethos-u-kws.axf -f options.txt
+FVP_Corstone_SSE-320 -f config.txt -a ethos-u-kws.axf
 ```
 
-Full instructions are provided in the evaluation kit [documentation](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+/HEAD/docs/quick_start.md).
+{{% notice Note %}}
+Do not use fast execution mode whilst benchmarking performance.
+{{% /notice %}}
 
-You have now run an example application on an Arm Fixed Virtual Platform.
\ No newline at end of file diff --git a/content/learning-paths/laptops-and-desktops/electron/_index.md b/content/learning-paths/laptops-and-desktops/electron/_index.md index fb1cf24b3c..722b428115 100644 --- a/content/learning-paths/laptops-and-desktops/electron/_index.md +++ b/content/learning-paths/laptops-and-desktops/electron/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Learn how to create a multi platform build of the application prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Node.js for Arm64. You can find the installer [here](https://nodejs.org/dist/v20.10.0/node-v20.10.0-arm64.msi) - Any code editor; we recommend using [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user). diff --git a/content/learning-paths/laptops-and-desktops/hyper-v/_index.md b/content/learning-paths/laptops-and-desktops/hyper-v/_index.md index 6e941b0f00..fbb8c97645 100644 --- a/content/learning-paths/laptops-and-desktops/hyper-v/_index.md +++ b/content/learning-paths/laptops-and-desktops/hyper-v/_index.md @@ -6,10 +6,10 @@ minutes_to_complete: 60 who_is_this_for: This is an introductory topic for software developers who want to use Linux virtual machines with Windows on Arm devices. learning_objectives: - - Create Arm-based Linux virtual machines using Hyper-V + - Create Arm-based Linux virtual machines using Hyper-V. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11 with [Hyper-V](/install-guides/hyper-v/) installed + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 with [Hyper-V](/install-guides/hyper-v/) installed. author_primary: Jason Andrews diff --git a/content/learning-paths/laptops-and-desktops/intro/_next-steps.md b/content/learning-paths/laptops-and-desktops/intro/_next-steps.md index 725150edc8..36921f55b4 100644 --- a/content/learning-paths/laptops-and-desktops/intro/_next-steps.md +++ b/content/learning-paths/laptops-and-desktops/intro/_next-steps.md @@ -15,10 +15,6 @@ recommended_path: "/learning-paths/laptops-and-desktops/wsl2/" # General online references (type: website) further_reading: - - resource: - title: Windows Dev Kit 2023 - link: https://learn.microsoft.com/en-us/windows/arm/dev-kit/ - type: website - resource: title: All Chromebooks with Arm Processors link: https://www.linuxmadesimple.info/2019/08/all-chromebooks-with-arm-processors-in.html diff --git a/content/learning-paths/laptops-and-desktops/intro/find-hardware.md b/content/learning-paths/laptops-and-desktops/intro/find-hardware.md index 88b3d7eb93..eb441bba39 100644 --- a/content/learning-paths/laptops-and-desktops/intro/find-hardware.md +++ b/content/learning-paths/laptops-and-desktops/intro/find-hardware.md @@ -10,14 +10,13 @@ Desktops and laptops, based on the Arm architecture, are available with differen ### Windows -[Windows Dev Kit 2023](https://www.microsoft.com/en-us/d/windows-dev-kit-2023/94k0p67w7581) is for software developers creating Windows applications for Arm. - -Windows on Arm laptops can also be used for software development. 
+Windows on Arm laptops are available for software development. Some examples include: - [Lenovo ThinkPad X13s](https://www.lenovo.com/us/en/p/laptops/thinkpad/thinkpadx/thinkpad-x13s-(13-inch-snapdragon)/len101t0019) - [Surface Pro 9 with 5G](https://www.microsoft.com/en-us/d/surface-pro-9/93vkd8np4fvk) - [Dell Inspiron 14](https://www.dell.com/en-us/shop/dell-laptops/inspiron-14-laptop/spd/inspiron-14-3420-laptop) +There are many other Windows on Arm laptops available. ### ChromeOS diff --git a/content/learning-paths/laptops-and-desktops/llvm_putty/_index.md b/content/learning-paths/laptops-and-desktops/llvm_putty/_index.md index b13a322945..9831faeb47 100644 --- a/content/learning-paths/laptops-and-desktops/llvm_putty/_index.md +++ b/content/learning-paths/laptops-and-desktops/llvm_putty/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Build open-source PuTTY application for Windows on Arm using the native LLVM toolchain prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). author_primary: Pareena Verma diff --git a/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md b/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md index 8d2aea5ae4..2a66ec4ad5 100644 --- a/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Compare the performance of a simple application using different build configurations prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11. + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11. author_primary: Pareena Verma diff --git a/content/learning-paths/laptops-and-desktops/win_arm64ec_porting/_index.md b/content/learning-paths/laptops-and-desktops/win_arm64ec_porting/_index.md index 5cdd096926..442765f585 100644 --- a/content/learning-paths/laptops-and-desktops/win_arm64ec_porting/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_arm64ec_porting/_index.md @@ -11,7 +11,7 @@ learning_objectives: - Learn how to port the C/C++ based dependencies to Arm64 using Arm64EC prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. - Visual Studio 2022 with Arm build tools. [Refer to this guide for the installation steps](https://developer.arm.com/documentation/102528/0100/Install-Visual-Studio). 
diff --git a/content/learning-paths/laptops-and-desktops/win_arm_qt/_index.md b/content/learning-paths/laptops-and-desktops/win_arm_qt/_index.md index 6ded4880e2..2d6cc1f5dc 100644 --- a/content/learning-paths/laptops-and-desktops/win_arm_qt/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_arm_qt/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Investigate performance improvements gained by running on Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - '[Qt framework](https://www.qt.io/) or [Qt for Open Source Development](https://www.qt.io/download-open-source)' author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_asp_net8/_index.md b/content/learning-paths/laptops-and-desktops/win_asp_net8/_index.md index 0b81d98733..7aec200879 100644 --- a/content/learning-paths/laptops-and-desktops/win_asp_net8/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_asp_net8/_index.md @@ -11,7 +11,7 @@ learning_objectives: - Create and use services using the dependency injection prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - .NET 8 SDK for [arm64](https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/sdk-8.0.100-windows-arm64-installer). - Any code editor, we recommend using [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user). diff --git a/content/learning-paths/laptops-and-desktops/win_asp_net8/how-to-1.md b/content/learning-paths/laptops-and-desktops/win_asp_net8/how-to-1.md index ccc2da6f55..364af22804 100644 --- a/content/learning-paths/laptops-and-desktops/win_asp_net8/how-to-1.md +++ b/content/learning-paths/laptops-and-desktops/win_asp_net8/how-to-1.md @@ -12,7 +12,7 @@ ASP.NET Core is a cross-platform framework for building web applications that le Windows 11 can run directly on Arm64-powered devices, so you can use it similarly to Windows 10 IoT Core to develop IoT apps. For example, you can use ASP.NET Core to build a web API that your headless IoT device exposes to communicate with users or other devices. -This learning path demonstrates how you can use ASP.NET Core with Windows 11 to build a web server for a headless IoT application. This learning path uses Windows Dev Kit 2023 as a development PC. The kit does not contain any real sensors, so you will implement a temperature sensor emulator. +This Learning Path demonstrates how you can use ASP.NET Core with Windows 11 to build a web server for a headless IoT application and implement a temperature sensor emulator. ## Before you begin Make sure that .NET is correctly installed on your machine. 
To do this, open the command prompt and type:
diff --git a/content/learning-paths/laptops-and-desktops/win_aws_iot/_index.md b/content/learning-paths/laptops-and-desktops/win_aws_iot/_index.md
index 2296fb792a..bff9e620e9 100644
--- a/content/learning-paths/laptops-and-desktops/win_aws_iot/_index.md
+++ b/content/learning-paths/laptops-and-desktops/win_aws_iot/_index.md
@@ -11,7 +11,7 @@ learning_objectives:
- Send data from a device to AWS IoT Core.

prerequisites:
- - A Windows-on-Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11, or a Windows-on-Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+ - A Windows-on-Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows-on-Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
- Any code editor. Visual Studio Code is suitable.

author_primary: Dawid Borycki
diff --git a/content/learning-paths/laptops-and-desktops/win_aws_iot_dynamodb/_index.md b/content/learning-paths/laptops-and-desktops/win_aws_iot_dynamodb/_index.md
index ef71f06e9f..424741952e 100644
--- a/content/learning-paths/laptops-and-desktops/win_aws_iot_dynamodb/_index.md
+++ b/content/learning-paths/laptops-and-desktops/win_aws_iot_dynamodb/_index.md
@@ -11,7 +11,7 @@ learning_objectives:
- Be able to create the rule that parses messages from AWS IoT Core and writes them to DynamoDB.

prerequisites:
- - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+ - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
- Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable.
- Completion of the [Create IoT applications with Windows on Arm and AWS IoT Core](/learning-paths/laptops-and-desktops/win_aws_iot/) Learning Path.

diff --git a/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda/_index.md b/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda/_index.md
index beae6d7c51..ab9b71a33b 100644
--- a/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda/_index.md
+++ b/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda/_index.md
@@ -12,7 +12,7 @@ learning_objectives:
- Describe the notification services in AWS.

prerequisites:
- - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+ - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
- Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable.
- Completion of the [Create IoT applications with Windows on Arm and AWS IoT Core](/learning-paths/laptops-and-desktops/win_aws_iot/) Learning Path.
diff --git a/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda_dynamodb/_index.md b/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda_dynamodb/_index.md index 6d4ba6b9de..8eda0f9f63 100644 --- a/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda_dynamodb/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_aws_iot_lambda_dynamodb/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Learn how to work with DynamoDB to scan and aggregate records. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. - Completion of the [Create IoT applications with Windows on Arm and AWS IoT Core](/learning-paths/laptops-and-desktops/win_aws_iot/) Learning Path. diff --git a/content/learning-paths/laptops-and-desktops/win_aws_iot_s3/_index.md b/content/learning-paths/laptops-and-desktops/win_aws_iot_s3/_index.md index 39c7e5429b..f655faa370 100644 --- a/content/learning-paths/laptops-and-desktops/win_aws_iot_s3/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_aws_iot_s3/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Create a static website that interacts with AWS Lambda. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. - Completion of the [Use AWS Lambda for IoT applications](/learning-paths/laptops-and-desktops/win_aws_iot_lambda/) Learning Path. diff --git a/content/learning-paths/laptops-and-desktops/win_cef/_index.md b/content/learning-paths/laptops-and-desktops/win_cef/_index.md index e469910582..f1e59edd01 100644 --- a/content/learning-paths/laptops-and-desktops/win_cef/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_cef/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Modify and style the application prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Visual Studio 2022. 
author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_forms/_index.md b/content/learning-paths/laptops-and-desktops/win_forms/_index.md index 98727ee3ee..05a9505da0 100644 --- a/content/learning-paths/laptops-and-desktops/win_forms/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_forms/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Measure code execution performance on Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Visual Studio 2022 with .NET Desktop Development workload author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_net/_index.md b/content/learning-paths/laptops-and-desktops/win_net/_index.md index e464730bb0..bdedb085c8 100644 --- a/content/learning-paths/laptops-and-desktops/win_net/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_net/_index.md @@ -9,7 +9,7 @@ learning_objectives: - Build and run a .NET 6 Windows Presentation Foundation (WPF) application on a Windows on Arm machine prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). author_primary: Pareena Verma diff --git a/content/learning-paths/laptops-and-desktops/win_net8/_index.md b/content/learning-paths/laptops-and-desktops/win_net8/_index.md index fad4c66be5..4a40326f1a 100644 --- a/content/learning-paths/laptops-and-desktops/win_net8/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_net8/_index.md @@ -11,7 +11,7 @@ learning_objectives: - Implement custom performance benchmarks prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - .NET 8 SDK for [x64](https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/sdk-8.0.100-windows-x64-installer) and [arm64](https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/sdk-8.0.100-windows-arm64-installer). - Any code editor, we recommend using [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user). 
diff --git a/content/learning-paths/laptops-and-desktops/win_net_maui/_index.md b/content/learning-paths/laptops-and-desktops/win_net_maui/_index.md index c396bee3b4..1a3d68ab0d 100644 --- a/content/learning-paths/laptops-and-desktops/win_net_maui/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_net_maui/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Measure code execution performance uplift on Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Visual Studio 2022 with .NET Multi-platform App UI development and Universal Windows Platform development installed. author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_python/_index.md b/content/learning-paths/laptops-and-desktops/win_python/_index.md index 6f12598b8c..1e7fe20de3 100644 --- a/content/learning-paths/laptops-and-desktops/win_python/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_python/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Leverage native Arm64 for Python applications prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Any code editor, we recommend using [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user). - Visual Studio 2022 with Arm build tools. [Refer to this guide for the installation steps](https://developer.arm.com/documentation/102528/0100/Install-Visual-Studio) diff --git a/content/learning-paths/laptops-and-desktops/win_sandbox_dot_net_cicd/_index.md b/content/learning-paths/laptops-and-desktops/win_sandbox_dot_net_cicd/_index.md index 71a7c75f78..1efe933120 100644 --- a/content/learning-paths/laptops-and-desktops/win_sandbox_dot_net_cicd/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_sandbox_dot_net_cicd/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Build and run a .NET 8 Windows Presentation Foundation (WPF) application using a self-hosted GitHub Actions runner in your CI/CD workflow. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11 Version 22H2 which has [Windows Sandbox enabled](/install-guides/windows-sandbox-woa). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 Version 22H2 which has [Windows Sandbox enabled](/install-guides/windows-sandbox-woa). - A valid [GitHub account](https://github.com/) to complete this Learning Path. 
diff --git a/content/learning-paths/laptops-and-desktops/win_win32_dll_porting/_index.md b/content/learning-paths/laptops-and-desktops/win_win32_dll_porting/_index.md index 915a5cc82f..f6fc245c22 100644 --- a/content/learning-paths/laptops-and-desktops/win_win32_dll_porting/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_win32_dll_porting/_index.md @@ -11,7 +11,7 @@ learning_objectives: - Learn how to port the C/C++ Win32 DLL to Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Refer to [Visual Studio 2022 with Arm build tools](/install-guides/vs-woa). author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_winui3/_index.md b/content/learning-paths/laptops-and-desktops/win_winui3/_index.md index 9f71c58cda..194934300f 100644 --- a/content/learning-paths/laptops-and-desktops/win_winui3/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_winui3/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Measure code execution performance on Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Visual Studio 2022 with .NET desktop development and Universal Windows Platform development installed. author_primary: Dawid Borycki diff --git a/content/learning-paths/laptops-and-desktops/win_wpf/_index.md b/content/learning-paths/laptops-and-desktops/win_wpf/_index.md index ff21bb3aeb..c3f238e7d8 100644 --- a/content/learning-paths/laptops-and-desktops/win_wpf/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_wpf/_index.md @@ -10,7 +10,7 @@ learning_objectives: - Measure code execution performance uplift on Arm64 prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/). - Visual Studio 2022 with .NET desktop development installed. 
author_primary: Dawid Borycki

diff --git a/content/learning-paths/laptops-and-desktops/win_xamarin_forms/_index.md b/content/learning-paths/laptops-and-desktops/win_xamarin_forms/_index.md
index 719a2ddd30..9e91aca4a8 100644
--- a/content/learning-paths/laptops-and-desktops/win_xamarin_forms/_index.md
+++ b/content/learning-paths/laptops-and-desktops/win_xamarin_forms/_index.md
@@ -11,7 +11,7 @@ learning_objectives:
- Learn how to use the Model-View-ViewModel (MVVM) architectural pattern

prerequisites:
- - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm[virtual machine](/learning-paths/cross-platform/woa_azure/).
+ - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
- Visual Studio 2022 with .NET desktop development and Universal Windows Platform development installed.

author_primary: Dawid Borycki
diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/1-first-vs-project.md b/content/learning-paths/laptops-and-desktops/windows_armpl/1-first-vs-project.md
index 8d7669abc5..2332c2b4b4 100644
--- a/content/learning-paths/laptops-and-desktops/windows_armpl/1-first-vs-project.md
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/1-first-vs-project.md
@@ -1,45 +1,29 @@
---
-title: Create your first Windows on Arm application using Microsoft Visual Studio
-weight: 2
+title: Create and Run a Windows on Arm application
+weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---
+## Create and configure a project

-## Install Microsoft Visual Studio
+You are now ready to create a Windows on Arm application. For your first project, you will create a simple console application.

-Visual Studio 2022, Microsoft's Integrated Development Environment (IDE), empowers developers to build high-performance applications for the Arm architecture.
+The next steps will guide you through how to create and configure your project.

-You can learn more about [Visual Studio on Arm-powered devices](https://learn.microsoft.com/en-us/visualstudio/install/visual-studio-on-arm-devices?view=vs-2022) from Microsoft Learn.
+Begin by going to the **Start** window, and selecting **Create a new project**. See Figure 1.

-Visual Studio 2022 offers different editions tailored to various development needs:
- - Community: A free, fully-featured edition ideal for students, open-source contributors, and individual developers.
- - Professional: Offers professional developer tools, services, and subscription benefits for small teams.
- - Enterprise: Provides the most comprehensive set of tools and services for large teams and enterprise-level development.
+![vs_new_proj1.png alt-text#center](./figures/vs_new_proj1.png "Figure 1: Create a new project.")

-To select the best edition for you, refer to [Compare Visual Studio 2022 Editions](https://visualstudio.microsoft.com/vs/compare/).
+Then, in **Configure your new project**, do the following:

-{{% notice Note %}}
-This Learning Path uses Visual Studio Community, but you can also use other editions.
-{{% /notice %}}
+* Select **Console App**.
+* Provide a project name, such as `ConsoleApp1`, as Figure 2 shows.
+* Click **Create**.

-Download and install Visual Studio using the [Visual Studio for Windows on Arm](/install-guides/vs-woa/) install guide. Make sure to install C and C++ support and the LLVM compiler.
+![img2 alt-text#center](./figures/vs_new_proj2.png "Figure 2: Configure your new project.")

-## Create a sample project
-
-You are ready to create a sample Windows on Arm application.
-
-To keep the example clear and concise, you can create a simple console application.
-
-On the start window, click `Create a new project`.
-
-![img1](./figures/vs_new_proj1.png)
-
-In the `Create a new project` window, select `Console App`, provide a project name, such as `hello-world-1`, and then click `Next`.
-
-![img2](./figures/vs_new_proj2.png)
-
-After the project is created, you will see a line of `Hello, world!` code in the newly created C++ file.
+After you have created the project, you will see a line of code that says `Hello, World!` in the newly-created C++ file.

```C++
#include <iostream>

int main()
{
    std::cout << "Hello World!\n";
}
```

-Microsoft Visual Studio automatically configures the build environment for the current hardware's CPU architecture. However, you can still familiarize ourselves with the relevant settings.
+Whilst Microsoft Visual Studio automatically configures the build environment for the CPU architecture of the current hardware, you still benefit from familiarizing yourself with the relevant configuration settings. So continue to learn more about how to get set up.

-## ARM64 Configuration Setting
+## ARM64 Configuration Settings

-Click the `Debug` drop down and select `Configuration Manager...`
+Now click on the **Debug** drop-down menu, and select **Configuration Manager...**

- ![img4](./figures/vs_console_config1.png)
+ ![img4 alt-text#center](./figures/vs_console_config1.png "Figure 3: Select Configuration Manager.")

-In the `Project contexts` area you see the platform set to `ARM64`.
+In the **Project contexts** area, you will see the platform set to `ARM64`, as Figure 4 shows.

- ![img5](./figures/vs_console_config2.png)
+ ![img5 alt-text#center](./figures/vs_console_config2.png "Figure 4: Project Contexts Menu.")

-Click `Build -> Build Solution` and your application compiles successfully.
+Now click **Build**, then **Build Solution**, and your application will compile.

## Run your first Windows on Arm application

-Use the green arrow to run the program you just compiled, and you'll see the print statement from your code correctly executed in the console.
+Use the green arrow to run the program you compiled, and you will see the print statement from your code correctly executed in the console.
+
+ ![img6 alt-text#center](./figures/vs_console_exe.png "Figure 5: The Console.")

- ![img6](./figures/vs_console_exe.png)
+You can also use the tools that Visual Studio provides to check the compiled executable.

-You can also use the tools provided by Visual Studio to check the compiled executable.
+Visual Studio includes the command-line tool [dumpbin](https://learn.microsoft.com/en-us/cpp/build/reference/dumpbin-reference?view=msvc-170), and you can use it to analyze binary files such as:

-The [dumpbin](https://learn.microsoft.com/en-us/cpp/build/reference/dumpbin-reference?view=msvc-170) command-line tool is included with Microsoft Visual Studio. It's used to analyze binary files like executable files (.exe), object files (.obj), and dynamic-link libraries (.dll).
+* Executable files (.exe).
+* Object files (.obj).
+* Dynamic-link libraries (.dll).

-To use `dumpbin` open a Command Prompt with Visual Studio configured by opening Windows search, and looking for `Arm64 Native Tools Command Prompt for VS 2022`. Find and open this application.
+To use `dumpbin`, open a command prompt with Visual Studio configured by opening Windows search, and then looking for `Arm64 Native Tools Command Prompt for VS 2022`. Once you have found this application, open it.

-A new Command Prompt opens. It's the same as the regular Windows Command Prompt with the addition that Visual Studio tools can be run from the prompt.
+A new command prompt opens. It is the same as the regular Windows command prompt, but with the added benefit that you can run Visual Studio tools.

-Run the command below with the executable you crated as an argument:
+Run the command below, replacing the text with the details of the executable that you created as an argument:

```cmd
dumpbin /headers <path>\ConsoleApp1.exe
```

You can see that the file format shows `AA64 machine (ARM64)` in the file header.

- ![img7](./figures/vs_checkmachine.jpeg)
+ ![img7 alt-text#center](./figures/vs_checkmachine.jpeg "Figure 6: AA64 Machine in File Header.")

-Continue to the next page to build and run a more computation intensive application.
\ No newline at end of file
+Continue to the next page to get set up with Git before you move on to build and run a more computationally-intensive application.
\ No newline at end of file
diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/1a-get-started.md b/content/learning-paths/laptops-and-desktops/windows_armpl/1a-get-started.md
new file mode 100644
index 0000000000..fb76878a0c
--- /dev/null
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/1a-get-started.md
@@ -0,0 +1,29 @@
+---
+title: Before you begin
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Get set up with Microsoft Visual Studio
+
+Visual Studio 2022 is an Integrated Development Environment (IDE) developed by Microsoft that empowers developers to build high-performance applications for the Arm architecture.
+
+You can learn more about [Microsoft Visual Studio on Arm-powered devices](https://learn.microsoft.com/en-us/visualstudio/install/visual-studio-on-arm-devices?view=vs-2022) from the Microsoft Learn website.
+
+There are three editions of Visual Studio 2022 that are tailored to various development needs:
+ - Community Edition is a free, fully-featured edition ideal for students, open source contributors, and individual developers.
+ - Professional Edition offers professional developer tools, services, and subscription benefits for small teams.
+ - Enterprise Edition provides the most comprehensive set of tools and services for large teams and enterprise-level development.
+
+To establish which edition is best-suited to your needs, see [Compare Visual Studio 2022 Editions](https://visualstudio.microsoft.com/vs/compare/).
+
+{{% notice Note %}}
+This Learning Path uses the Community Edition of Visual Studio 2022, but you can also use other editions.
+{{% /notice %}}
+
+Download and install Visual Studio using the [Visual Studio for Windows on Arm](/install-guides/vs-woa/) install guide.
+
+Make sure you install C and C++ support, and the LLVM compiler.
+
diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md b/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
index 2d0d85c6f6..e90a6f4b9c 100644
--- a/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/2-multithreading.md
@@ -1,62 +1,46 @@
---
-title: Build a simple numerical application and profile the performance
-weight: 3
+title: Build and Profile an Application with Spin the Cube and Visual Studio
+weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---
+## Get started with Spin the Cube

-## Install Git for Windows on Arm
+In Windows File Explorer, double-click **SpinTheCubeInGDI.sln** to open the project in Visual Studio.

-This section uses an example application from GitHub to demonstrate the use of Arm Performance Libraries.
+The source file **SpinTheCubeInGDI.cpp** implements a spinning cube.

-Start by installing Git using the [Git install guide](/install-guides/git-woa/) for Windows on Arm.
-
-## Clone the example from GitHub
-
-The example application renders a rotating 3D cube to perform the calculations on different programming options.
-
-First, navigate to an empty directory and clone the example repository from GitHub:
-
-```cmd
-git clone https://github.com/odincodeshen/SpinTheCubeInGDI.git
-```
-
-{{% notice Note %}}
-The example repository is forked from the [original GitHub repository](https://github.com/marcpems/SpinTheCubeInGDI) and some minor modifications have been made to aid learning.
-{{% /notice %}}
-
-## Spin the cube introduction
-
-In Windows File Explorer, double-click `SpinTheCubeInGDI.sln` to open the project in Visual Studio.
-
-The source file `SpinTheCubeInGDI.cpp` implements a spinning cube.
-
-The four key components are:
- - Shape Generation: Generates the vertices for a sphere using a golden ratio-based algorithm.
- - Rotation Calculation:
- The application uses a rotation matrix to rotate the 3D shape around the X, Y, and Z axes. The rotation angle is incremented over time, creating the animation.
- - Drawing: The application draws the transformed vertices of the shapes on the screen, using a Windows API.
- - Performance Measurement: The code measures and displays the number of transforms per second.
+The four key components of the application are:
+
+ - Shape generation: the application generates the vertices for a sphere using a golden ratio-based algorithm.
+
+ - Rotation calculation: the application uses a rotation matrix to rotate the 3D shape around the X, Y, and Z axes. The rotation angle is incremented over time, creating the animation.
+
+ - Drawing: the application draws the transformed vertices of the shapes on the screen, using a Windows API.
+
+ - Performance measurement: the code measures and displays the number of transforms per second.

The code has two options to calculate the rotation:
- 1. Multithreading: The application utilizes multithreading to improve performance by distributing the rotation calculations across multiple threads.
- 2. Arm Performance Libraries: The application utilizes optimized math library functions for the rotation calculations.
+ 1. Multithreading: the application utilizes multithreading to improve performance by distributing the rotation calculations across multiple threads.
+ 2. Arm Performance Libraries: the application utilizes optimized math library functions for the rotation calculations.
Option 1 is explained below and option 2 is explained on the next page. By trying both methods you can compare and contrast the code and the performance.

## Option 1: Multithreading

-One way to speed up the rotation calculations is to use multithreading.
+One way that you can speed up the rotation calculations is to use multithreading.

The multithreading implementation option involves two functions:

- - The `CalcThreadProc()` function
+ - The `CalcThreadProc()` function:

This function is the entry point for each calculation thread. Each calculation thread waits on its semaphore in `semaphoreList`.
- When a thread receives a signal, it calls `applyRotation()` to transform its assigned vertices. The updated vertices are stored in the `drawSphereVertecies` vector. The code is shown below:
+ When a thread receives a signal, it calls `applyRotation()` to transform its assigned vertices. The updated vertices are stored in the `drawSphereVertecies` vector.
+
+ The code is shown below:

```c++
DWORD WINAPI CalcThreadProc(LPVOID data)
@@ -76,7 +60,7 @@ The multithreading implementation option involves two functions:
applyRotation(UseCube ? cubeVertices : sphereVertices, rotationInX, threadNum * pointStride, pointStride);
LeaveCriticalSection(&cubeDraw[threadNum]);

- // set a semaphore to say we are done
+ // set a semaphore to say the work is done
ReleaseSemaphore(doneList[threadNum], 1, NULL);
}

@@ -106,7 +90,7 @@ The multithreading implementation option involves two functions:
{
counter++;

- // take the next three values for a 3d point
+ // take the next three values for a 3D point
refx = *point; point++;
refy = *point; point++;
refz = *point; point++;

@@ -129,21 +113,23 @@ The multithreading implementation option involves two functions:

## Build and run the application

-After gaining a general understanding of the project, you can compile and run it.
+Once you have a basic understanding of the project, you can compile and run it.

-Build the project, and run `SpinTheCubeInGDI.exe`
+Build the project and run `SpinTheCubeInGDI.exe`.

You will see a simulated 3D sphere continuously rotating.

-The number in application represents the number of frames per second (FPS). A higher number indicates better performance.
+The number in the application represents the number of Frames Per Second (FPS).
+
+A higher number indicates better performance.

- ![gif1](./figures/multithreading.gif)
+ ![gif1 alt-text#center](./figures/multithreading.gif "Figure 7: Spin The Cube Simulated 3D Sphere.")

-Performance varies across various Windows on Arm computers, but on the Lenovo X13s the performance generally falls between 3k and 6k FPS.
+Performance varies across different Windows on Arm computers, but on the Lenovo X13s specifically, the performance generally falls between 3K and 6K FPS.

You can use the [Visual Studio profiling tools](https://learn.microsoft.com/en-us/visualstudio/profiling/profiling-feature-tour?view=vs-2022) to observe the dynamic CPU and memory usage while the program is running.

- ![img8](./figures/mt_cpumem_usage1.png)
+ ![img8 alt-text#center](./figures/mt_cpumem_usage1.png "Figure 8: Using Visual Studio Profiling Tools.")

-Continue to the next section to learn how to improve performance using Arm Performance Libraries.
+Continue learning to find out how you can optimize performance using Arm Performance Libraries.
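To make the rotation step concrete, here is a minimal standalone sketch of the per-vertex math that the multithreaded `applyRotation()` code distributes across threads: each 3D point is multiplied by a 3x3 rotation matrix. The function name `rotatePoints` and its parameters are hypothetical, and this is an illustrative sketch rather than code from the SpinTheCubeInGDI repository.

```c++
#include <vector>

// Multiply each x, y, z triple in a flat vertex array by a row-major
// 3x3 rotation matrix (9 doubles), writing the result back in place.
// The application described above splits a loop like this across threads,
// with each thread handling its own stride of points.
void rotatePoints(std::vector<double>& points, const std::vector<double>& rotMatrix)
{
    for (size_t i = 0; i + 2 < points.size(); i += 3)
    {
        const double x = points[i];
        const double y = points[i + 1];
        const double z = points[i + 2];
        points[i]     = rotMatrix[0] * x + rotMatrix[1] * y + rotMatrix[2] * z;
        points[i + 1] = rotMatrix[3] * x + rotMatrix[4] * y + rotMatrix[5] * z;
        points[i + 2] = rotMatrix[6] * x + rotMatrix[7] * y + rotMatrix[8] * z;
    }
}
```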
diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/2a-get-set-up-git.md b/content/learning-paths/laptops-and-desktops/windows_armpl/2a-get-set-up-git.md new file mode 100644 index 0000000000..23f85cfe6a --- /dev/null +++ b/content/learning-paths/laptops-and-desktops/windows_armpl/2a-get-set-up-git.md @@ -0,0 +1,28 @@ +--- +title: Git setup +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Install Git for Windows on Arm + +This section uses an example application from GitHub to demonstrate the use of Arm Performance Libraries. + +If you don't already have Git installed, start by installing it using the [Git for Windows on Arm](/install-guides/git-woa/) install guide. + +## Clone the Example from GitHub + +The example application renders a rotating 3D cube to perform the calculations using different programming options. + +First, navigate to an empty directory, and clone the repository containing the example from GitHub: + +```cmd +git clone https://github.com/odincodeshen/SpinTheCubeInGDI.git +``` + +{{% notice Note %}} +The repository containing the example is forked from the [original GitHub repository for Spin the Cube](https://github.com/marcpems/SpinTheCubeInGDI) with some modifications for demonstration purposes. +{{% /notice %}} + diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/3-apt-enhancement.md b/content/learning-paths/laptops-and-desktops/windows_armpl/3-apt-enhancement.md index e6dccf8a82..50c0f789a2 100644 --- a/content/learning-paths/laptops-and-desktops/windows_armpl/3-apt-enhancement.md +++ b/content/learning-paths/laptops-and-desktops/windows_armpl/3-apt-enhancement.md @@ -1,6 +1,6 @@ --- -title: Use Arm Performance Libraries to improve performance -weight: 4 +title: Use Arm Performance Libraries to Optimize Performance +weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall @@ -8,11 +8,9 @@ layout: learningpathall ## Introducing Arm Performance Libraries -In the previous section, you gained some understanding of the performance of the first calculation option, multithreading. +Now that you have seen the performance of multithreading, you can move on to deploying Arm Performance Libraries, and you can explore the differences. -Now, use option 2, Arm Performance Libraries, and explore the differences. - -[Arm Performance Libraries](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Libraries) provides optimized standard core math libraries for numerical applications on 64-bit Arm-based processors. The libraries are built with OpenMP across many BLAS, LAPACK, FFT, and sparse routines in order to maximize your performance in multi-processor environments. +[Arm Performance Libraries](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Libraries) provides optimized standard core math libraries for numerical applications on 64-bit Arm-based processors. The libraries are built with OpenMP across many Basic Linear Algebra Subprograms (BLAS), LAPACK, FFT, and sparse routines in order to maximize performance in multi-processor environments. Use the [Arm Performance Libraries install guide](/install-guides/armpl/) to install Arm Performance Libraries on Windows 11. @@ -24,37 +22,37 @@ The `include` and `lib` are the directories containing header files and library Take note of the location of these two directories, as you will need them for configuring Visual Studio. 
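Before wiring these paths into Visual Studio, it can help to see what a call into the libraries looks like. Below is a minimal, hypothetical sketch of the `cblas_dgemm` BLAS routine that the Learning Path uses later; it assumes the `armpl.h` header from the `include` directory noted above, and that the linker is pointed at the `lib` directory, so it is an illustrative example rather than code from the example project.

```c++
#include <armpl.h>  // assumed umbrella header from the Arm Performance Libraries include directory

int main()
{
    // Compute C = alpha * A * B + beta * C for 2x2 row-major matrices
    // using the CBLAS interface provided by Arm Performance Libraries.
    const double A[4] = { 1.0, 2.0, 3.0, 4.0 };
    const double B[4] = { 5.0, 6.0, 7.0, 8.0 };
    double C[4] = { 0.0, 0.0, 0.0, 0.0 };

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,    // M, N, K
                1.0, A, 2,  // alpha, A, leading dimension of A
                B, 2,       // B, leading dimension of B
                0.0, C, 2); // beta, C, leading dimension of C

    return 0;
}
```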
- ![img9](./figures/apl_directory.png)
+ ![img9 alt-text#center](./figures/apl_directory.png "Figure 9: Arm Performance Libraries Directory.")

-## Include Arm Performance Libraries into Visual Studio
+## Add Arm Performance Libraries to Visual Studio

To use Arm Performance Libraries in the application, you need to manually add the paths into Visual Studio.

-You need to configure two places in your Visual Studio project:
+In your Visual Studio project, you need to configure two places:

### External Include Directories:

-1. In the Solution Explorer, right-click on your project and select `Properties`.
-2. In the left pane of the Property Pages, expand `Configuration Properties`. Select `VC++ Directories`
+1. In the Solution Explorer, right-click on your project and select **Properties**.
+2. In the left pane of the Property Pages, expand `Configuration Properties`. Select `VC++ Directories`.
3. In the right pane, find the `Additional Include Directories` setting.
-4. Click on the dropdown menu. Select `<Edit...>`
+4. Click on the dropdown menu. Select `<Edit...>`.
5. In the dialog that opens, click the `New Line` icon to add Arm Performance Libraries `include` path.

-![img10](./figures/ext_include.png)
+![img10 alt-text#center](./figures/ext_include.png "Figure 10: External Include Directories.")

### Additional Library Directories:

-1. In the Solution Explorer, right-click on your project and select `Properties`.
-2. In the left pane of the Property Pages, expand `Configuration Properties`. Select `Linker`
-3. In the right pane, find the `Additional Library Directories` setting.
+1. In the Solution Explorer, right-click on your project, and select **Properties**.
+2. In the left pane of the Property Pages, expand **Configuration Properties**. Select **Linker**.
+3. In the right pane, find the **Additional Library Directories** setting.
4. Click on the dropdown menu. Select `<Edit...>`
5. In the dialog that opens, click the `New Line` icon to add Arm Performance Libraries `library` path.

-![img11](./figures/linker_lib.png)
+![img11 alt-text#center](./figures/linker_lib.png "Figure 11: Linker Library.")

{{% notice Note %}}
-Visual Studio allows users to set the above two paths for each individual configuration. To apply the settings to all configurations in your project, select `All Configurations` in the `Configuration` dropdown menu.
+Visual Studio allows users to set the above two paths for each individual configuration. To apply the settings to all configurations in your project, select **All Configurations** in the **Configuration** drop-down menu.
{{% /notice %}}

@@ -64,14 +62,14 @@
You are now ready to use Arm Performance Libraries in your project.

Open the source code file `SpinTheCubeInGDI.cpp` and search for the `_USE_ARMPL_DEFINES` definition.

-You should see a commented-out definition on line 13 of the program. Removing the comment will enable the Arm Performance Libraries feature when you re-build the application.
+You will see a commented-out definition on line 13 of the program. Removing the comment enables the Arm Performance Libraries feature when you rebuild the application.

- ![img12](./figures/apl_define.png)
+ ![img12 alt-text#center](./figures/apl_define.png "Figure 12: Arm Performance Libraries Definition.")

-When variable useAPL is True, the application will call `applyRotationBLAS()` instead of the multithreading code to apply the rotation matrix to the 3D vertices.
+When the variable `useAPL` is true, the application calls `applyRotationBLAS()` instead of the multithreading code to apply the rotation matrix to the 3D vertices.

-The code is below:
+The code is shown below:

```c++
void RotateCube(int numCores)
{
@@ -97,9 +95,9 @@
}
```

-The `applyRotationBLAS()` function adopts a BLAS matrix multiplier instead of multithreading multiplication for calculating rotation.
+The `applyRotationBLAS()` function uses a BLAS matrix multiplication instead of multithreaded multiplication to calculate the rotation.
+
+Basic Linear Algebra Subprograms (BLAS) are a set of well-defined basic linear algebra operations in Arm Performance Libraries.

-Basic Linear Algebra Subprograms (BLAS) are a set of well defined basic linear algebra operations in Arm Performance Libraries, check [cblas_dgemm](https://developer.arm.com/documentation/101004/2410/BLAS-Basic-Linear-Algebra-Subprograms/CBLAS-functions/cblas-dgemm?lang=en) to learn more about the function.
+See [cblas_dgemm](https://developer.arm.com/documentation/101004/2410/BLAS-Basic-Linear-Algebra-Subprograms/CBLAS-functions/cblas-dgemm?lang=en) to learn more about the function.

Here is the code used to compute rotation with BLAS:

```c++
void applyRotationBLAS(std::vector<double>& shape, const std::vector<double>& ro
@@ -124,19 +124,19 @@
Rebuild the code and run `SpinTheCubeInGDI.exe` again.

-Click on the "Options" menu in the top left corner of the program, then select "Use APL" to utilize Option 2.
+Click on the **Options** menu in the top-left corner of the program, then select **Use APL** to utilize Option 2.

- ![img13](./figures/use_apl.png)
+ ![img13 alt-text#center](./figures/use_apl.png "Figure 13: Selecting Arm Performance Libraries.")

On the Lenovo X13s, the performance is between 11k and 12k FPS.

-![gif2](./figures/apl_enable.gif)
+![gif2 alt-text#center](./figures/apl_enable.gif "Figure 14: Spinning Geometry Demonstration: Arm64.")

Re-run the profiling tools.

-You see that the CPU usage has decreased significantly. There is no difference in memory usage.
+You will see that the CPU usage has decreased significantly. There is no difference in memory usage.

- ![img14](./figures/apl_on_cpu_mem_usage.png)
+ ![img14 alt-text#center](./figures/apl_on_cpu_mem_usage.png "Figure 15: Improved CPU Performance.")

-You have learned how to improve application performance using Arm Performance Libraries.
+You have learned how to optimize application performance using Arm Performance Libraries.

diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md b/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
index 1b47586d0b..bd756e23f6 100644
--- a/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
+++ b/content/learning-paths/laptops-and-desktops/windows_armpl/_index.md
@@ -1,20 +1,16 @@
---
title: Optimize Windows applications using Arm Performance Libraries

-draft: true
-cascade:
- draft: true
-
minutes_to_complete: 60

-who_is_this_for: This is an introductory topic for software developers who want to improve computation performance of Windows on Arm applications using Arm Performance Libraries.
+who_is_this_for: This is an introductory topic for software developers who want to improve the performance of Windows on Arm applications using Arm Performance Libraries.

learning_objectives:
- - Develop Windows on Arm applications using Microsoft Visual Studio.
- - Utilize Arm Performance Libraries to increase application performance.
+ - Develop a Windows on Arm application using Microsoft Visual Studio. + - Utilize Arm Performance Libraries to optimize the performance of an application. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11. + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11. author_primary: Odin Shen diff --git a/content/learning-paths/laptops-and-desktops/wsl2/_index.md b/content/learning-paths/laptops-and-desktops/wsl2/_index.md index ee1fdd8797..6cf131e247 100644 --- a/content/learning-paths/laptops-and-desktops/wsl2/_index.md +++ b/content/learning-paths/laptops-and-desktops/wsl2/_index.md @@ -15,7 +15,7 @@ learning_objectives: - Export the WSL file system as a backup prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or Lenovo Thinkpad X13s running Windows 11. + - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11. author_primary: Jason Andrews diff --git a/content/learning-paths/laptops-and-desktops/wsl2/setup.md b/content/learning-paths/laptops-and-desktops/wsl2/setup.md index f1a96fb493..7fab780174 100644 --- a/content/learning-paths/laptops-and-desktops/wsl2/setup.md +++ b/content/learning-paths/laptops-and-desktops/wsl2/setup.md @@ -9,7 +9,7 @@ layout: "learningpathall" ## Before you begin -This Learning Path assumes you have a Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit) or a Lenovo Thinkpad X13s laptop running Windows 11. +This Learning Path assumes you have a Windows on Arm computer such as the Lenovo Thinkpad X13s laptop running Windows 11. WSL is useful if you are developing on Arm virtual machine instances in the cloud. It is also useful if you are developing with embedded Linux on Arm on single board computers. diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/_index.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/_index.md index e31e5ab61b..6bbdc37020 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/_index.md +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/_index.md @@ -12,7 +12,7 @@ learning_objectives: prerequisites: - An Arm-powered Android smartphone, and a USB cable to connect to it. - - For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases). + - For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases) or [ExecuTorch](https://github.com/pytorch/executorch). - For profiling the application, [Arm Performance Studio with Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio). - Android Studio Profiler. 
diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md index c72893edb1..118c9176e6 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/app-profiling-streamline.md @@ -128,8 +128,8 @@ Now add the code below to the `build.gradle` file of the Module you wish to prof ```gradle externalNativeBuild { cmake { - path file('src/main/cpp/CMakeLists.txt') - version '3.22.1' + path = file("src/main/cpp/CMakeLists.txt") + version = "3.22.1" } } ``` diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executenetwork.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executenetwork.md index d200e02272..4bc284297a 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executenetwork.md +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executenetwork.md @@ -7,7 +7,7 @@ layout: learningpathall --- ## Arm NN Network Profiler -One way of running LiteRT models is to use Arm NN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model. +One way of running LiteRT models is to use Arm NN, which is open-source machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model. If you are using LiteRT without Arm NN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems. diff --git a/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md new file mode 100644 index 0000000000..2c35a45492 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executorch.md @@ -0,0 +1,101 @@ +--- +title: ML Profiling of an ExecuTorch model +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## ExecuTorch Profiling Tools +You can use [ExecuTorch](https://pytorch.org/executorch/stable/index.html) for running PyTorch models on constrained devices like mobile. As so many models are developed in PyTorch, this is a useful way to quickly deploy them to mobile devices, without the requirement for conversion tools such as Google's [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch) to convert them into tflite. + +To get started on ExecuTorch, you can follow the instructions on the [PyTorch website](https://pytorch.org/executorch/stable/getting-started-setup). 
To then deploy on Android, you can also find instructions on the [PyTorch website](https://pytorch.org/executorch/stable/demo-apps-android.html). If you do not already have ExecuTorch running on Android, follow these instructions first.
+
+ExecuTorch comes with a set of profiling tools, but currently they are aimed at Linux, not Android. The instructions to profile on Linux are [here](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), and you can adapt them for use on Android.
+
+## Profiling on Android
+
+To profile on Android, the steps are the same as for [Linux](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html), except that you need to generate the ETDump file on an Android device.
+
+To start, generate the ETRecord in exactly the same way as described for the Linux instructions.
+
+Next, follow the instructions to create the ExecuTorch bundled program that you will need to generate the ETDump. You will copy this to your Android device together with the runner program that you are about to compile.
+
+To compile the runner program, you will need to adapt the `build_example_runner.sh` script from the instructions, located in the `examples/devtools` subfolder of the ExecuTorch repository, so that it compiles for Android. Copy the script and rename the file to `build_android_example_runner.sh`, ready for editing. Remove all lines with `coreml` in them, and the options dependent on them, as these are not needed for Android.
+
+You then need to set the `ANDROID_NDK` environment variable to point to your Android NDK installation.
+
+At the top of the `main()` function add:
+
+```bash
+ export ANDROID_NDK=~/Android/Sdk/ndk/28.0.12674087 # replace this with the correct path for your NDK installation
+ export ANDROID_ABI=arm64-v8a
+```
+
+Next, add Android options to the first `cmake` configuration line in `main()`, which configures the build of the ExecuTorch library.
+
+Change it to:
+
+```bash
+ cmake -DCMAKE_INSTALL_PREFIX=cmake-out \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
+ -DANDROID_ABI="${ANDROID_ABI}" \
+ -DEXECUTORCH_BUILD_XNNPACK=ON \
+ -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+ -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+ -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
+ -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
+ -DEXECUTORCH_BUILD_DEVTOOLS=ON \
+ -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
+ -Bcmake-out .
+```
+
+The `cmake` build step for the ExecuTorch library stays the same, as do the next lines setting up local variables.
+
+Next, adapt the options for Android in the second `cmake` configuration line, which configures the build of the runner.
+
+Change it to:
+
+```bash
+ cmake -DCMAKE_PREFIX_PATH="${cmake_prefix_path}" \
+ -Dexecutorch_DIR="${PWD}/cmake-out/lib/cmake/ExecuTorch" -Dgflags_DIR="${PWD}/cmake-out/third-party/gflags" \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
+ -DANDROID_ABI="${ANDROID_ABI}" \
+ -B"${build_dir}" \
+ "${example_dir}"
+```
+
+Once you have changed the configuration lines, you can run the script `./build_android_example_runner.sh` to build the runner program.
+
+Once compiled, find the executable `example_runner` in `cmake-out/examples/devtools/`.
+
+Copy `example_runner` and the ExecuTorch bundled program to your Android device.
+
+Do this with adb:
+
+```bash
+adb push example_runner /data/local/tmp/
+adb push bundled_program.bp /data/local/tmp/
+adb shell
+chmod 777 /data/local/tmp/example_runner
+./example_runner --bundled_program_path="bundled_program.bp"
+exit
+adb pull /data/local/tmp/etdump.etdp .
+```
+
+You now have the ETDump file ready to analyze with an ExecuTorch Inspector, in line with the Linux instructions.
+
+To get a full display of the operators and their timings, use the following:
+
+```python
+from executorch.devtools import Inspector
+
+etrecord_path = "etrecord.bin"
+etdump_path = "etdump.etdp"
+inspector = Inspector(etdump_path=etdump_path, etrecord=etrecord_path)
+inspector.print_data_tabular()
+```
+
+However, as the [ExecuTorch profiling page](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html) explains, there are data analysis options available. These enable you to quickly find specific information, such as the slowest layer, or to group operators. Both the `EventBlock` and `DataFrame` approaches work well. However, at the time of writing, the `find_total_for_module()` function has a [bug](https://github.com/pytorch/executorch/issues/7200) and returns incorrect values - hopefully this will soon be fixed.
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
index 7bbc358fd5..1a34b417a5 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
@@ -11,7 +11,7 @@ learning_objectives:
- Modify code on a Windows on Arm development machine.
- Deploy a .NET Aspire application to Arm-powered virtual machines in the Cloud.

prerequisites:
- - A Windows on Arm machine, for example the [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project.
+ - A Windows on Arm machine, for example the Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project.
- An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP.
- Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is an example of a suitable editor.

diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_demo.md b/content/learning-paths/servers-and-cloud-computing/rag/_demo.md
index 19b3a9c2e5..ca62fbf8e4 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/_demo.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/_demo.md
@@ -1,18 +1,21 @@
---
title: Run a llama.cpp chatbot powered by Arm Kleidi technology
+weight: 2

overview: |
- This Arm learning path shows how to use a single c4a-highcpu-72 Google Axion instance -- powered by an Arm Neoverse CPU -- to build a simple "Token as a Service" RAG-enabled server, used below to provide a chatbot to serve a small number of concurrent users.
+ This Learning Path shows you how to use a c4a-highcpu-72 Google Axion instance powered by an Arm Neoverse CPU to build a simple Token-as-a-Service (TaaS) RAG-enabled server that you can then use to provide a chatbot to serve a small number of concurrent users.

- This architecture would be suitable for businesses looking to deploy the latest Generative AI technologies with RAG capabilities using their existing CPU compute capacity and deployment pipelines.
It enables semantic search over chunked documents using FAISS vector store. The demo uses the open source llama.cpp framework, which Arm has enhanced by contributing the latest Arm Kleidi technologies. Further optimizations are achieved by using the smaller 8 billion parameter Llama 3.1 model, which has been quantized to optimize memory usage. + This architecture is suitable for businesses looking to deploy the latest Generative AI technologies with RAG capabilities using their existing CPU compute capacity and deployment pipelines. + + It enables semantic search over chunked documents using the FAISS vector store. The demo uses the open source llama.cpp framework, which Arm has enhanced with its own Kleidi technologies. Further optimizations are achieved by using the smaller 8 billion parameter Llama 3.1 model, which has been quantized to optimize memory usage. - Chat with the Llama-3.1-8B RAG-enabled LLM below to see the performance for yourself, then follow the learning path to build your own Generative AI service on Arm Neoverse. + Chat with the Llama-3.1-8B RAG-enabled LLM below to see the performance for yourself, and then follow the Learning Path to build your own Generative AI service on Arm Neoverse. demo_steps: - - Type & send a message to the chatbot. + - Type and send a message to the chatbot. - Receive the chatbot's reply, including references from RAG data. - - View stats showing how well Google Axion runs LLMs. + - View performance statistics demonstrating how well Google Axion runs LLMs. diagram: config-diagram-dark.png diagram_blowup: config-diagram.png diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_index.md b/content/learning-paths/servers-and-cloud-computing/rag/_index.md index ebfe968750..3a8fca7ce0 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/_index.md @@ -1,5 +1,5 @@ --- -title: Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Arm Servers +title: Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Google Axion processors minutes_to_complete: 45 @@ -13,6 +13,7 @@ learning_objectives: - Monitor and analyze inference performance metrics. prerequisites: + - A Google Cloud Axion (or other Arm) compute instance with at least 16 cores, 8GB of RAM, and 32GB disk space. - Basic understanding of Python and ML concepts. - Familiarity with REST APIs and web services. - Basic knowledge of vector databases. 
@@ -20,10 +21,6 @@ prerequisites:

 author_primary: Nobel Chowdary Mandepudi

-draft: true
-cascade:
-  draft: true
-
 ### Tags
 skilllevels: Advanced
 armips:
@@ -34,6 +31,7 @@ operatingsystems:
 tools_software_languages:
     - Python
     - Streamlit
+    - Google Axion

 ### FIXED, DO NOT MODIFY
 # ================================================================================
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/backend.md b/content/learning-paths/servers-and-cloud-computing/rag/backend.md
index de50065fc6..220a5b3356 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/backend.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/backend.md
@@ -1,6 +1,6 @@
 ---
 title: Deploy a RAG-based LLM backend server
-weight: 3
+weight: 4
 layout: learningpathall
 ---
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
index fbd872adf5..1cd6eb3488 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md
@@ -1,21 +1,33 @@
 ---
 title: The RAG Chatbot and its Performance
-weight: 5
+weight: 6
 layout: learningpathall
 ---

 ## Access the Web Application

-Open the web application in your browser using either the local URL or the external URL:
+Open the web application in your browser using the external URL:

 ```bash
-http://localhost:8501 or http://75.101.253.177:8501
+http://[your instance ip]:8501
 ```

 {{% notice Note %}}
-To access the links you may need to allow inbound TCP traffic in your instance's security rules. Always review these permissions with caution as they may introduce security vulnerabilities.
+To access the links you might need to allow inbound TCP traffic in your instance's security rules. Always review these permissions with caution as they might introduce security vulnerabilities.
+
+For an Axion instance, you can do this using the gcloud CLI:
+
+gcloud compute firewall-rules create allow-my-ip \
+    --direction=INGRESS \
+    --network=default \
+    --action=ALLOW \
+    --rules=tcp:8501 \
+    --source-ranges=[your IP]/32 \
+    --target-tags=allow-my-ip
+
+For this to work, you must ensure that the allow-my-ip tag is present on your Axion instance.
 {{% /notice %}}

 ## Upload a PDF File and Create a New Index
@@ -31,7 +43,7 @@ Follow these steps to create a new index:
 5. Enter a name for your vector index.
 6. Click the **Create Index** button.

-Upload the Cortex-M processor comparison document, which can be downloaded from [this website](https://developer.arm.com/documentation/102787/latest/).
+Upload the Cortex-M processor comparison document, which can be downloaded from [the Arm developer website](https://developer.arm.com/documentation/102787/latest/).

 You should see a confirmation message indicating that the vector index has been created successfully. Refer to the image below for guidance:

@@ -44,15 +56,15 @@ After creating the index, you can switch to the **Load Existing Store** option a

 Follow these steps:

 1. Switch to the **Load Existing Store** option in the sidebar.
-2. Select the index you created. It should be auto-selected if it's the only one available.
+2. Select the index you created. It should be auto-selected if it is the only one available.

-This will allow you to use the uploaded document for generating contextually-relevant responses. Refer to the image below for guidance:
+This allows you to use the uploaded document for generating contextually relevant responses.
Refer to the image below for guidance:

 ![RAG_IMG2](rag_img2.png)

 ## Interact with the LLM

-You can now start asking various queries to the LLM using the prompt in the web application. The responses will be streamed both to the frontend and the backend server terminal.
+You can now start issuing various queries to the LLM using the prompt in the web application. The responses will be streamed both to the frontend and the backend server terminal.

 Follow these steps:

@@ -61,7 +73,7 @@ Follow these steps:

 ![RAG_IMG3](rag_img3.png)

-While the response is streamed to the frontend for immediate viewing, you can monitor the performance metrics on the backend server terminal. This gives you insights into the processing speed and efficiency of the LLM.
+While the response is streamed to the frontend for immediate viewing, you can monitor the performance metrics on the backend server terminal. This provides insights into the processing speed and efficiency of the LLM.

 ![RAG_IMG4](rag_img4.png)
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/frontend.md b/content/learning-paths/servers-and-cloud-computing/rag/frontend.md
index 51cb4eb33a..fd72eaa099 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/frontend.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/frontend.md
@@ -1,6 +1,6 @@
 ---
 title: Deploy RAG-based LLM frontend server
-weight: 4
+weight: 5
 layout: learningpathall
 ---
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md
index 7725d7658e..b375428528 100644
--- a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md
+++ b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md
@@ -2,7 +2,7 @@

 # User change
 title: "Set up a RAG based LLM Chatbot"

-weight: 2 # 1 is first, 2 is second, etc.
+weight: 3

 # Do not modify these elements
 layout: "learningpathall"

@@ -10,7 +10,7 @@

 ## Before you begin

-This learning path demonstrates how to build and deploy a Retrieval Augmented Generation (RAG) enabled chatbot using open-source Large Language Models (LLMs) optimized for Arm architecture. The chatbot processes documents, stores them in a vector database, and generates contextually-relevant responses by combining the LLM's capabilities with retrieved information. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 22.04 LTS. You need an Arm server instance with at least 16 cores and 8GB of RAM to run this example. Configure disk storage up to at least 32GB. The instructions have been tested on an AWS Graviton4 r8g.16xlarge instance.
+This Learning Path demonstrates how to build and deploy a Retrieval Augmented Generation (RAG) enabled chatbot using open-source Large Language Models (LLMs) optimized for Arm architecture. The chatbot processes documents, stores them in a vector database, and generates contextually relevant responses by combining the LLM's capabilities with retrieved information. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 22.04 LTS. You need an Arm server instance with at least 16 cores, 8GB of RAM, and a 32GB disk to run this example. The instructions have been tested on a GCP c4a-standard-64 instance.
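If you do not already have a suitable instance, the following is a minimal sketch of creating one with the gcloud CLI. The instance name, zone, image, and disk settings here are illustrative assumptions rather than values taken from this Learning Path, so adjust them to your own project:

```bash
# Illustrative sketch only: the name, zone, image, and disk settings are assumptions.
# c4a-standard-64 is an Arm-based Google Axion machine type.
gcloud compute instances create rag-chatbot-demo \
    --machine-type=c4a-standard-64 \
    --zone=us-central1-a \
    --image-family=ubuntu-2204-lts-arm64 \
    --image-project=ubuntu-os-cloud \
    --boot-disk-type=hyperdisk-balanced \
    --boot-disk-size=32GB
```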
## Overview
@@ -100,7 +100,7 @@ Download the Hugging Face model:
 wget https://huggingface.co/chatpdflocal/llama3.1-8b-gguf/resolve/main/ggml-model-Q4_K_M.gguf
 ```

-## Build llama.cpp & Quantize the Model
+## Build llama.cpp and Quantize the Model

 Navigate to your home directory:
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md
index 54966bcf90..c0a2fbd26e 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/_index.md
@@ -11,9 +11,11 @@ learning_objectives:
   - Test the reference firmware stack.

 prerequisites:
-  - Some understanding of the Reference Design software stack architecture.
+  - Some understanding of the [Reference Design software stack architecture](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html).
+  - Some understanding of the Linux command line.
+  - Optionally, a basic understanding of Docker and containers.

-author_primary: Tom Pilar
+author_primary: Tom Pilar, Daniel Nguyen

 ### Tags
 skilllevels: Introductory
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md
index e3917bd5e3..94a5ae8661 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/build-2.md
@@ -79,5 +79,5 @@ lrwxrwxrwx 1 ubuntu ubuntu 30 Jan 12 15:35 tf-bl31.bin -> ../components/rdn
 lrwxrwxrwx 1 ubuntu ubuntu 33 Jan 12 15:35 uefi.bin -> ../components/css-common/uefi.bin
 ```

-The `fip-uefi.bin` firmware image will contain the `TF-A BL2` boot loader image which is responsible for unpacking the rest of the firmware as well as the firmware that TF-A BL2 unpacks. This includes the `SCP BL2` (`scp_ramfw.bin`) image that is unpacked by the AP firmware and transferred over to the SCP TCMs using the SCP shared data store module. Along with the FIP image, the FVP also needs the `TF-A BL1` image and the `SCP BL1` (`scp_romfw.bin`) image files.
+The `fip-uefi.bin` [firmware image package](https://trustedfirmware-a.readthedocs.io/en/v2.5/getting_started/tools-build.html) contains the `TF-A BL2` boot loader image, which is responsible for unpacking the rest of the firmware, as well as the firmware that TF-A BL2 unpacks. This includes the `SCP BL2` (`scp_ramfw.bin`) image that is unpacked by the AP firmware and transferred over to the SCP TCMs using the SCP shared data store module. Along with the FIP image, the FVP also needs the `TF-A BL1` image and the `SCP BL1` (`scp_romfw.bin`) image files.
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
index 3b96c15e36..0228a2b61e 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/environment-setup-1.md
@@ -12,7 +12,7 @@ This learning path is based on the `Neoverse N2` Reference Design (`RD-N2`).

 ## Before you begin

-You can use either an AArch64 or x86_64 host machine running Ubuntu Linux 22.04.
64GB of free disk space and 32GB of RAM is minimum requirement to sync and build the platform software stack. 48GB of RAM is recommended.
+You can use either an AArch64 or x86_64 host machine running **Ubuntu Linux 22.04**. 64GB of free disk space and 32GB of RAM are the minimum requirements to sync and build the platform software stack. 48GB of RAM is recommended.

 Follow the instructions to set up your environment using the information found at the [Neoverse RD-N2 documentation site](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdn2.html).

@@ -53,7 +53,7 @@ Bug reports: https://bugs.chromium.org/p/gerrit/issues/entry?template=Repo+tool+

 Create a new directory in to which you can download the source code, build the stack, and then obtain the manifest file.

-To obtain the manifest, choose a tag of the platform reference firmware. [RD-INFRA-2023.09.29](https://neoverse-reference-design.docs.arm.com/en/latest/releases/RD-INFRA-2023.09.29/release_note.html) is used here. See the [release notes](https://neoverse-reference-design.docs.arm.com/en/latest/) for more information.
+To obtain the manifest, choose a tag of the platform reference firmware. [RD-INFRA-2023.09.29](https://neoverse-reference-design.docs.arm.com/en/latest/releases/RD-INFRA-2023.09.29/release_note.html) is used here, although it is recommended to use the latest version available. See the [release notes](https://neoverse-reference-design.docs.arm.com/en/latest/) for more information.

 Specify the platform you would like with the manifest. In the [manifest repo](https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests) there are a number of available platforms. In this case, select `pinned-rdn2.xml`.
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
index 5adbb52e71..f8c0ae9b2d 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
@@ -127,7 +127,7 @@ In your original terminal, launch the FVP using the supplied script:
 Observe the platform is running successfully:
 ![fvp terminals alt-text#center](images/uefi.png "Figure 2. FVP Terminals")

-To boot into `busy-box`, use:
+You can also boot into `busy-box` using the command:
 ```bash
 ./boot.sh -p rdn2
 ```
diff --git a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/_index.md b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/_index.md
index 7ae2fd32f4..525990dacd 100644
--- a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/_index.md
@@ -10,7 +10,7 @@ learning_objectives:
   - Create a project and deploy AWS Lambda function.

 prerequisites:
-  - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+  - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
   - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable.
author_primary: Dawid Borycki
diff --git a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-lambda-dynamodb/_index.md b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-lambda-dynamodb/_index.md
index 451f94dc8a..ca05939ab1 100644
--- a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-lambda-dynamodb/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-lambda-dynamodb/_index.md
@@ -10,7 +10,7 @@ learning_objectives:
   - Automate deployment of AWS Lambda function consuming data from DynamoDB.

 prerequisites:
-  - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+  - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
   - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable.
   - Completion of this [Learning Path](/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/).
diff --git a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-s3/_index.md b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-s3/_index.md
index bffaae3d9c..879e5f1cae 100644
--- a/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-s3/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/serverless-framework-aws-s3/_index.md
@@ -10,7 +10,7 @@ learning_objectives:
   - Automate deployment of a static website to Amazon S3.

 prerequisites:
-  - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
+  - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).
   - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable.
   - Completion of the Learning Path that shows you how to [Deploy AWS services using the Serverless Framework](/learning-paths/servers-and-cloud-computing/serverless-framework-aws-intro/).
diff --git a/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md b/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md
index baed8f4b5b..a8e769c260 100644
--- a/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md
+++ b/content/learning-paths/servers-and-cloud-computing/sve/sve_armie.md
@@ -80,10 +80,11 @@ Install `qemu-user` to run the example on processors which do not support SVE:
 ```bash { command_line="user@localhost" }
 sudo apt install qemu-user -y
 ```
-Run the example application with a vector length of 256 bits:
+
+Run the example application with a vector length of 256 bits. Note that the `sve-default-vector-length` option is specified in bytes rather than bits, so 256 bits corresponds to the value 32:

 ```bash { command_line="user@localhost | 2" }
-qemu-aarch64 -cpu max,sve-default-vector-length=256 ./sve_add.exe
+qemu-aarch64 -cpu max,sve-default-vector-length=32 ./sve_add.exe
 Done.
``` diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml index e6d3546c97..d4b8ae9b85 100644 --- a/data/stats_weekly_data.yml +++ b/data/stats_weekly_data.yml @@ -4716,3 +4716,98 @@ avg_close_time_hrs: 0 num_issues: 16 percent_closed_vs_total: 0.0 +- a_date: '2025-01-27' + content: + automotive: 1 + cross-platform: 28 + embedded-and-microcontrollers: 40 + install-guides: 93 + iot: 5 + laptops-and-desktops: 33 + mobile-graphics-and-gaming: 26 + servers-and-cloud-computing: 97 + total: 323 + contributions: + external: 45 + internal: 373 + github_engagement: + num_forks: 30 + num_prs: 12 + individual_authors: + alaaeddine-chakroun: 2 + alexandros-lamprineas: 1 + annie-tallund: 1 + arm: 3 + arnaud-de-grandmaison: 1 + arnaud-de-grandmaison,-paul-howard,-and-pareena-verma: 1 + basma-el-gaabouri: 1 + ben-clark: 1 + bolt-liu: 2 + brenda-strech: 1 + chaodong-gong,-alex-su,-kieran-hejmadi: 1 + chen-zhang: 1 + christopher-seidl: 7 + cyril-rohr: 1 + daniel-gubay: 1 + daniel-nguyen: 1 + david-spickett: 2 + dawid-borycki: 31 + diego-russo: 1 + diego-russo-and-leandro-nunes: 1 + elham-harirpoush: 2 + florent-lebeau: 5 + "fr\xE9d\xE9ric--lefred--descamps": 2 + gabriel-peterson: 5 + gayathri-narayana-yegna-narayanan: 1 + georgios-mermigkis-and-konstantinos-margaritis,-vectorcamp: 1 + graham-woodward: 1 + han-yin: 1 + iago-calvo-lista,-arm: 1 + james-whitaker,-arm: 1 + jason-andrews: 95 + joe-stech: 1 + johanna-skinnider: 2 + jonathan-davies: 2 + jose-emilio-munoz-lopez,-arm: 1 + julie-gaskin: 4 + julio-suarez: 5 + kasper-mecklenburg: 1 + kieran-hejmadi: 1 + koki-mitsunami: 2 + konstantinos-margaritis: 7 + kristof-beyls: 1 + liliya-wu: 1 + mathias-brossard: 1 + michael-hall: 5 + nikhil-gupta,-pareena-verma,-nobel-chowdary-mandepudi,-ravi-malhotra: 1 + nobel-chowdary-mandepudi: 1 + odin-shen: 1 + owen-wu,-arm: 2 + pareena-verma: 34 + pareena-verma,-annie-tallund: 1 + pareena-verma,-jason-andrews,-and-zach-lasiuk: 1 + pareena-verma,-joe-stech,-adnan-alsinan: 1 + paul-howard: 1 + pranay-bakre: 4 + pranay-bakre,-masoud-koleini,-nobel-chowdary-mandepudi,-na-li: 1 + preema-merlin-dsouza: 1 + przemyslaw-wirkus: 2 + rin-dobrescu: 1 + roberto-lopez-mendez: 2 + ronan-synnott: 46 + thirdai: 1 + tianyu-li: 1 + tom-pilar: 1 + uma-ramalingam: 1 + varun-chari,-albin-bernhardsson: 1 + varun-chari,-pareena-verma: 1 + visualsilicon: 1 + willen-yang: 1 + ying-yu: 1 + ying-yu,-arm: 1 + zach-lasiuk: 1 + zhengjun-xing: 2 + issues: + avg_close_time_hrs: 0 + num_issues: 15 + percent_closed_vs_total: 0.0 diff --git a/themes/arm-design-system-hugo-theme/layouts/index.html b/themes/arm-design-system-hugo-theme/layouts/index.html index 8c54d7d7e4..ba36f8a326 100644 --- a/themes/arm-design-system-hugo-theme/layouts/index.html +++ b/themes/arm-design-system-hugo-theme/layouts/index.html @@ -93,6 +93,7 @@

Install Guides

All content is covered by the Creative Commons License{{partial "general-formatting/external-link.html"}}.

+

Learning Paths may contain AI-generated content.

diff --git a/themes/arm-design-system-hugo-theme/layouts/learning-paths/learningpathall.html b/themes/arm-design-system-hugo-theme/layouts/learning-paths/learningpathall.html index 8d0db8fd44..52a19db71f 100644 --- a/themes/arm-design-system-hugo-theme/layouts/learning-paths/learningpathall.html +++ b/themes/arm-design-system-hugo-theme/layouts/learning-paths/learningpathall.html @@ -37,6 +37,8 @@ {{ with .Site.GetPage $thisdir}}

{{ .Params.Title }}
+ + {{ end }} diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html index 0f266dce4a..369b63bc5f 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/config-rag.html @@ -22,7 +22,7 @@

RAG Vector Store Details

-

This application uses all data on learn.arm.com +

This application uses all data on learn.arm.com as the RAG dataset. The content across Learning Paths and Install Guides is segmented into labeled chunks, and vector embeddings are generated. This LLM demo references the FAISS vector store to answer your query.
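To make the flow described above concrete, here is a minimal sketch of the chunk, embed, and retrieve steps, assuming the `faiss-cpu` and `sentence-transformers` packages; the model name and chunk text are placeholders, not the demo's actual pipeline:

```python
import faiss
from sentence_transformers import SentenceTransformer

# Placeholder chunks standing in for labeled segments of site content.
chunks = [
    "Arm Compiler for Linux can be installed with apt on Ubuntu.",
    "Kleidi technologies accelerate llama.cpp inference on Neoverse CPUs.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(chunks, normalize_embeddings=True)

# Inner product over normalized vectors is cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

query = model.encode(["How do I speed up llama.cpp on Arm?"], normalize_embeddings=True)
scores, ids = index.search(query, 1)
print(chunks[ids[0][0]], float(scores[0][0]))
```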

diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html index 5c821e9ceb..c3b1263278 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html @@ -55,36 +55,36 @@

Share

Share what you've learned.