diff --git a/.wordlist.txt b/.wordlist.txt
index 5afe320b18..a8093169d0 100644
--- a/.wordlist.txt
+++ b/.wordlist.txt
@@ -4559,7 +4559,7 @@ qdisc
ras
rcu
regmap
-rgerganov’s
+rgerganov's
rotocol
rpcgss
rpmh
@@ -4588,3 +4588,6 @@ vmscan
workqueue
xdp
xhci
+JFR
+conv
+servlet
\ No newline at end of file
diff --git a/assets/contributors.csv b/assets/contributors.csv
index 1317f49b31..ca9d8d4a27 100644
--- a/assets/contributors.csv
+++ b/assets/contributors.csv
@@ -92,6 +92,6 @@ Aude Vuilliomenet,Arm,,,,
Andrew Kilroy,Arm,,,,
Peter Harris,Arm,,,,
Chenying Kuo,Adlink,evshary,evshary,,
-William Liang,,wyliang,,,
+William Liang,,,wyliang,,
Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/_index.md b/content/learning-paths/embedded-and-microcontrollers/_index.md
index 8ee2672ec5..dc4f325370 100644
--- a/content/learning-paths/embedded-and-microcontrollers/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/_index.md
@@ -49,7 +49,7 @@ tools_software_languages_filter:
- Coding: 26
- Containerd: 1
- DetectNet: 1
-- Docker: 9
+- Docker: 10
- DSTREAM: 2
- Edge AI: 1
- Edge Impulse: 1
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
deleted file mode 100644
index 7345c0c727..0000000000
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Overview
-weight: 2
-
-### FIXED, DO NOT MODIFY
-layout: learningpathall
----
-
-## Visualizing ML on Embedded Devices
-
-Selecting the best hardware for machine learning (ML) models depends on effective tools. You can visualize ML performance early in the development cycle by using Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs).
-
-## TinyML
-
-This Learning Path uses TinyML. TinyML is machine learning tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.
-
-For a learning path focused on creating and deploying your own TinyML models, please see [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/)
-
-## Benefits and applications
-
-New products, like Arm's [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU are available on FVPs earlier than on physical devices. FVPs also have a graphical user interface (GUI), which is useful for for ML performance visualization due to:
-- visual confirmation that your ML model is running on the desired device,
-- clearly indicated instruction counts,
-- confirmation of total execution time and
-- visually appealing output for prototypes and demos.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
deleted file mode 100644
index 2787107f19..0000000000
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-# User change
-title: "Install ExecuTorch"
-
-weight: 3
-
-# Do not modify these elements
-layout: "learningpathall"
----
-
-In this section, you will prepare a development environment to compile a machine learning model.
-
-## Introduction to ExecuTorch
-
-ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
-
-## Install dependencies
-
-These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).
-
-Python3 is required and comes installed with Ubuntu, but some additional packages are needed:
-
-```bash
-sudo apt update
-sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
-```
-
-## Create a virtual environment
-
-Create a Python virtual environment using `python venv`:
-
-```console
-python3 -m venv $HOME/executorch-venv
-source $HOME/executorch-venv/bin/activate
-```
-The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
-
-
-## Install Executorch
-
-From within the Python virtual environment, run the commands below to download the ExecuTorch repository and install the required packages:
-
-``` bash
-cd $HOME
-git clone https://github.com/pytorch/executorch.git
-cd executorch
-```
-
-Run the commands below to set up the ExecuTorch internal dependencies:
-
-```bash
-git submodule sync
-git submodule update --init --recursive
-./install_executorch.sh
-```
-
-{{% notice Note %}}
-If you run into an issue of `buck` running in a stale environment, reset it by running the following instructions:
-
-```bash
-ps aux | grep buck
-pkill -f buck
-```
-{{% /notice %}}
-
-After running the commands, `executorch` should be listed upon running `pip list`:
-
-```bash
-pip list | grep executorch
-```
-
-```output
-executorch 0.8.0a0+92fb0cc
-```
-
-## Next Steps
-
-Proceed to the next section to learn about and set up the virtualized hardware.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
new file mode 100644
index 0000000000..b087e70934
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
@@ -0,0 +1,61 @@
+---
+title: Overview
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+## Simulate and evaluate TinyML performance on Arm virtual hardware
+
+In this section, you’ll learn how TinyML, ExecuTorch, and Arm Fixed Virtual Platforms work together to simulate embedded AI workloads before hardware is available.
+
+Choosing the right hardware for your machine learning (ML) model starts with having the right tools. In many cases, you need to test and iterate before your target hardware is even available, especially when working with cutting-edge accelerators like the Ethos-U NPU.
+
+Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.
+
+By simulating hardware behavior at the system level, FVPs allow you to:
+
+- Benchmark inference speed and measure operator-level performance
+- Identify which operations are delegated to the NPU and which execute on the CPU
+- Validate end-to-end integration between components like ExecuTorch and Arm NN
+- Iterate faster by debugging and optimizing your workload without relying on hardware
+
+This makes FVPs a crucial tool for embedded ML workflows where precision, portability, and early validation matter.
+
+## What is TinyML?
+
+TinyML is machine learning optimized to run on low-power, resource-constrained devices such as Arm Cortex-M microcontrollers and NPUs like the Ethos-U. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.
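
To get a feel for these budgets, here is a rough footprint calculation in plain Python. The 3.4M parameter count for MobileNet V2 (the model used later in this Learning Path) is the commonly cited figure; treat the numbers as approximate:

```python
# Rough model-footprint arithmetic: why quantization matters on
# microcontroller-class memory budgets. The 3.4M parameter count for
# MobileNet V2 is the commonly cited figure; treat it as approximate.

def footprint_mb(num_params, bytes_per_param):
    """Weight storage in MiB for a model with the given parameter count."""
    return num_params * bytes_per_param / (1024 * 1024)

params = 3_400_000
fp32 = footprint_mb(params, 4)   # float32 weights
int8 = footprint_mb(params, 1)   # int8 quantized weights

print(f"float32: {fp32:.1f} MB, int8: {int8:.1f} MB")
```

Even after int8 quantization, a model of this size dwarfs the SRAM of a typical Cortex-M device, which is why operator delegation and careful memory planning matter.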
+
+This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.
+
+If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
+
+## What is ExecuTorch?
+
+ExecuTorch is a lightweight runtime for running PyTorch models on embedded and edge devices. It supports efficient model inference on Arm processors, from Cortex-M CPUs to Ethos-U NPUs, with support for hybrid CPU+accelerator execution.
+
+ExecuTorch provides:
+
+- Ahead-of-time (AOT) compilation for faster inference
+- Delegation of selected operators to accelerators like Ethos-U
+- Tight integration with Arm compute libraries
+
+## Why use Arm Fixed Virtual Platforms?
+
+Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
+
+These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
+
+- Confirm your model is running on the intended virtual hardware
+- Visualize instruction counts
+- Review total execution time
+- Capture clear outputs for demos and prototypes
+
+## What is Corstone-320?
+
+The Corstone-320 FVP is a virtual model of an Arm-based microcontroller system optimized for AI and TinyML workloads. It supports Cortex-M CPUs and the Ethos-U NPU, making it ideal for early testing, performance tuning, and validation of embedded AI applications, all before physical hardware is available.
+
+The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation. For more information, see the [Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
+
+## What's next?
+In the next section, you'll explore how ExecuTorch compiles and deploys models to run efficiently on simulated hardware.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
deleted file mode 100644
index bc80217465..0000000000
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-# User change
-title: "Set up the Corstone-320 FVP on Linux"
-
-weight: 4 # 1 is first, 2 is second, etc.
-
-# Do not modify these elements
-layout: "learningpathall"
----
-
-In this section, you will run scripts to set up the Corstone-320 reference package.
-
-The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
-
-The Corstone reference system is provided free of charge, although you will have to accept the license in the next step. For more information on Corstone-320, check out the [official documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
-
-## Corstone-320 FVP Setup for ExecuTorch
-
-{{% notice macOS %}}
-
-Setting up FVPs on MacOS requires some extra steps, outlined in GitHub repo [FVPs-on-Mac](https://github.com/Arm-Examples/FVPs-on-Mac/). macOS users must do this first, before setting up the Corstone-320 FVP.
-
-{{% /notice %}}
-
-Navigate to the Arm examples directory in the ExecuTorch repository. Run the following command.
-
-```bash
-cd $HOME/executorch/examples/arm
-./setup.sh --i-agree-to-the-contained-eula
-```
-
-After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.
-
-```bash
-source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
-```
-
-Test that the setup was successful by running the `run.sh` script for Ethos-U85, which is the target device for Corstone-320:
-
-{{% notice macOS %}}
-
-**Start Docker:** on macOS, FVPs run inside a Docker container.
-
-{{% /notice %}}
-
-```bash
- ./examples/arm/run.sh --target=ethos-u85-256
-```
-
-You will see a number of examples run on the FVP.
-
-This confirms the installation, so you can now proceed to the next section.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
new file mode 100644
index 0000000000..6ae810e4b0
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
@@ -0,0 +1,51 @@
+---
+# User change
+title: "Understand the ExecuTorch workflow"
+
+weight: 3
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Overview
+
+Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware. ExecuTorch uses ahead-of-time (AOT) compilation to transform PyTorch models into optimized operator graphs that run efficiently on resource-constrained systems. The workflow supports hybrid execution across CPU and NPU cores, allowing you to profile, debug, and deploy TinyML workloads with low runtime overhead and high portability across Arm microcontrollers.
+
+## ExecuTorch in three steps
+
+ExecuTorch works in three main steps:
+
+**Step 1: Export the model**
+
+ - Convert a trained PyTorch model into an operator graph
+ - Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, and quantize)
+
+**Step 2: Compile with the AOT compiler**
+
+ - Translate the operator graph into an optimized, quantized format
+ - Use `--delegate` to move eligible operations to the Ethos-U accelerator
+ - Save the compiled output as a `.pte` file
+
+**Step 3: Deploy and run**
+
+ - Execute the compiled model on an FVP or physical target
+ - The Ethos-U NPU runs delegated operators; all others run on the Cortex-M CPU
+
+For more detail, see the [ExecuTorch documentation](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html).
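
The partitioning idea behind delegation can be sketched in a few lines of plain Python. The operator names and the supported set below are illustrative assumptions, not the real Ethos-U compatibility rules:

```python
# Toy sketch of operator delegation: split a model's operator graph into
# ops that a hypothetical NPU supports and ops that fall back to the CPU.
# The supported set below is illustrative only, not the real Ethos-U rules.

NPU_SUPPORTED = {"conv2d", "relu", "quantize", "dequantize", "add"}

def partition(op_graph):
    """Return (delegated, cpu_fallback) lists, preserving graph order."""
    delegated, cpu_fallback = [], []
    for op in op_graph:
        (delegated if op in NPU_SUPPORTED else cpu_fallback).append(op)
    return delegated, cpu_fallback

# A simplified MobileNet-style operator sequence:
graph = ["conv2d", "relu", "conv2d", "softmax", "quantize"]
delegated, cpu = partition(graph)
print(delegated)  # ['conv2d', 'relu', 'conv2d', 'quantize']
print(cpu)        # ['softmax']
```

In the real flow, the AOT compiler makes this decision per operator when you pass `--delegate`, and the delegated subgraphs are embedded in the `.pte` file.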
+
+## A visual overview
+
+The diagram below summarizes the ExecuTorch workflow from model export to deployment. It shows how a trained PyTorch model is transformed into an optimized, quantized format and deployed to a target system such as an Arm Fixed Virtual Platform (FVP).
+
+- On the left, the model is exported into a graph of operators, with eligible layers flagged for NPU acceleration.
+- In the center, the AOT compiler optimizes and delegates operations, producing a `.pte` file ready for deployment.
+- On the right, the model is executed on embedded Arm hardware, where delegated operators run on the Ethos-U NPU, and the rest are handled by the Cortex-M CPU.
+
+This three-step workflow ensures your TinyML models are performance-tuned and hardware-aware before deployment, even without access to physical silicon.
+
+## What's next?
+
+Now that you understand how ExecuTorch works, you're ready to set up your environment and install the tools.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
new file mode 100644
index 0000000000..fa02f06a35
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
@@ -0,0 +1,84 @@
+---
+# User change
+title: "Set up your ExecuTorch environment"
+
+weight: 4
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Set up overview
+
+Before you can deploy and test models with ExecuTorch, you need to set up your local development environment. This section walks you through installing system dependencies, creating a virtual environment, and cloning the ExecuTorch repository on Ubuntu or WSL. Once complete, you'll be ready to run TinyML models on a virtual Arm platform.
+
+## Install system dependencies
+
+{{% notice Note %}}
+Make sure Python 3 is installed. It comes pre-installed on most versions of Ubuntu.
+{{% /notice %}}
+
+These instructions have been tested on:
+
+- Ubuntu 22.04 and 24.04
+- Windows Subsystem for Linux (WSL)
+
+Run the following commands to install the dependencies:
+
+```bash
+sudo apt update
+sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
+```
+
+## Create a virtual environment
+
+Create and activate a Python virtual environment:
+
+```console
+python3 -m venv $HOME/executorch-venv
+source $HOME/executorch-venv/bin/activate
+```
+Your shell prompt should now start with `(executorch)` to indicate the environment is active.
+
+## Install ExecuTorch
+
+Clone the ExecuTorch repository and install dependencies:
+
+```bash
+cd $HOME
+git clone https://github.com/pytorch/executorch.git
+cd executorch
+```
+
+Set up internal submodules:
+
+```bash
+git submodule sync
+git submodule update --init --recursive
+./install_executorch.sh
+```
+
+{{% notice Tip %}}
+If you encounter a stale `buck` environment, reset it using:
+
+```bash
+ps aux | grep buck
+pkill -f buck
+```
+{{% /notice %}}
+
+## Verify the installation
+
+Check that ExecuTorch is correctly installed:
+
+```bash
+pip list | grep executorch
+```
+Expected output:
+
+```output
+executorch 0.8.0a0+92fb0cc
+```
+
+## What's next?
+
+Now that ExecuTorch is installed, you're ready to simulate your TinyML model on an Arm Fixed Virtual Platform (FVP). In the next section, you'll configure and launch a Fixed Virtual Platform.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
deleted file mode 100644
index e2061aa1e2..0000000000
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-# User change
-title: "How ExecuTorch Works"
-
-weight: 5 # 1 is first, 2 is second, etc.
-
-# Do not modify these elements
-layout: "learningpathall"
----
-
-To get a better understanding of [How ExecuTorch Works](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html) refer to the official PyTorch Documentation. A summary is provided here for your reference:
-
-1. **Export the model:**
- * Generate a Graph
- * A graph is series of operators (ReLU, quantize, etc.) eligible for delegation to an accelerator
- * Your goal is to identify operators for acceleration on the Ethos-U NPU
-2. **Compile to ExecuTorch:**
- * This is the ahead-of-time compiler
- * This is why ExecuTorch inference is faster than PyTorch inference
- * Delegate operators to an accelerator, like the Ethos-U NPU
-3. **Run on targeted device:**
- * Deploy the ML model to the Fixed Virtual Platform (FVP) or physical device
- * Execute operators on the CPU and delegated operators on the Ethos-U NPU
-
-**Diagram of How ExecuTorch Works**
-
-
-## Deploy a TinyML Model
-
-With your development environment set up, you can deploy a simple PyTorch model.
-
-This example deploys the [MobileNet V2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) computer vision model. The model is a convolutional neural network (CNN) that extracts visual features from an image. It is used for image classification and object detection.
-
-The actual Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
-
-{{% notice macOS %}}
-
-**Start Docker:** on macOS, FVPs run inside a Docker container.
-
-{{% /notice %}}
-
-```bash
-./examples/arm/run.sh \
---aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
---output=mv2_u85 \
---target=ethos-u85-128 \
---model_name=mv2
-```
-
-**Explanation of run.sh Parameters**
-|run.sh Parameter|Meaning / Context|
-|--------------|-----------------|
-|--aot_arm_compiler_flags|Passes a string of compiler options to the ExecuTorch Ahead-of-Time (AOT) compiler|
-|--delegate|Enables backend delegation|
-|--quantize|Converts the floating-point model to int8 quantized format using post-training quantization **Essential for running on NPUs**|
-|--intermediates mv2_u85/|Directory where intermediate files (e.g., TOSA, YAMLs, debug graphs) will be saved Useful output files for **manual debugging**|
-|--debug|Verbose debugging logging|
-|--evaluate|Validates model output, provides timing estimates|
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
new file mode 100644
index 0000000000..1208f92761
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
@@ -0,0 +1,76 @@
+---
+# User change
+title: "Set up the Corstone-320 Fixed Virtual Platform"
+
+weight: 5
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+
+## Get started with the Corstone-320 FVP
+
+In this section, you’ll install and configure the Corstone-320 FVP to simulate an Arm-based embedded system. This lets you run ExecuTorch-compiled models in a virtual environment without any hardware required.
+
+## Install the Corstone-320 FVP
+
+Before you begin, make sure you’ve completed the steps in the previous section to install ExecuTorch.
+
+{{% notice Note %}}
+If you're using macOS, you need to perform additional setup to support FVP execution.
+
+See the [FVPs-on-Mac](https://github.com/Arm-Examples/FVPs-on-Mac/) GitHub repository for instructions before continuing.
+{{% /notice %}}
+
+Run the setup script provided in the ExecuTorch examples directory:
+
+```bash
+cd $HOME/executorch/examples/arm
+./setup.sh --i-agree-to-the-contained-eula
+```
+
+The `--i-agree-to-the-contained-eula` flag is required to run the script. It indicates your acceptance of Arm’s licensing terms for using the FVP.
+
+This installs the FVP and extracts all necessary components. It also prints a command to configure your shell environment.
+
+## Add the FVP to your system PATH
+
+Run the following command to update your environment:
+
+```bash
+source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
+```
+
+This ensures the FVP binaries are available in your terminal session.
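
You can confirm the binaries are reachable from the current shell with a short, self-contained check. The binary name `FVP_Corstone_SSE-320` is an assumption based on the Corstone-320 package naming; substitute whatever name your installation provides:

```python
# Check whether a given executable is reachable on the current PATH.
# "FVP_Corstone_SSE-320" is an assumed binary name for the Corstone-320 FVP;
# substitute the name your installation actually provides.
import shutil

def on_path(binary):
    """Return the resolved path of `binary` if found on PATH, else None."""
    return shutil.which(binary)

if on_path("FVP_Corstone_SSE-320"):
    print("FVP found on PATH")
else:
    print("FVP not found: re-run setup_path.sh in this shell")
```

Remember that `setup_path.sh` only modifies the current shell session, so you need to source it again in each new terminal.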
+
+## Verify your setup
+
+Run a quick test to check that the FVP is working:
+
+```bash
+./examples/arm/run.sh --target=ethos-u85-256
+```
+
+This executes a built-in example on the Ethos-U85 configuration of the Corstone-320 platform.
+
+{{% notice macOS %}}
+
+On macOS, make sure Docker is running. FVPs execute inside a Docker container on macOS systems.
+
+{{% /notice %}}
+
+If you see example output from the platform, the setup is complete.
+
+## Next steps
+You’re now ready to deploy and run your own TinyML model using ExecuTorch on the Corstone-320 FVP.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
new file mode 100644
index 0000000000..cc6b3d8e17
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
@@ -0,0 +1,64 @@
+---
+# User change
+title: "Deploy and run MobileNet V2 on the Corstone-320 FVP"
+
+weight: 6 # 1 is first, 2 is second, etc.
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+## Deploy MobileNet V2 with ExecuTorch
+
+With your environment and FVP now set up, you're ready to deploy and run a real TinyML model using ExecuTorch.
+
+This example deploys the [MobileNet V2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) computer vision model. The model is a convolutional neural network (CNN) that extracts visual features from an image. It is used for image classification and object detection.
+
+The Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
+
+{{% notice Tip %}}
+
+On macOS, make sure Docker is running. FVPs execute inside a Docker container.
+
+{{% /notice %}}
+
+```bash
+./examples/arm/run.sh \
+--aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
+--output=mv2_u85 \
+--target=ethos-u85-128 \
+--model_name=mv2
+```
+
+The `--model_name=mv2` flag tells `run.sh` to use the MobileNet V2 model defined in `examples/models/mobilenet_v2/model.py`.
+
+**Explanation of run.sh Parameters**
+|run.sh Parameter|Meaning / Context|
+|--------------|-----------------|
+|--aot_arm_compiler_flags|Passes a string of compiler options to the ExecuTorch Ahead-of-Time (AOT) compiler|
+|--delegate|Enables backend delegation|
+|--quantize|Converts the floating-point model to an int8 quantized format using post-training quantization. **Essential for running on NPUs**|
+|--intermediates mv2_u85/|Directory where intermediate files (for example, TOSA, YAMLs, and debug graphs) are saved. Useful output files for **manual debugging**|
+|--debug|Enables verbose debug logging|
+|--evaluate|Validates model output, provides timing estimates|
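
To see what `--quantize` does conceptually, here is a minimal sketch of post-training int8 quantization using a single scale and zero-point. This is a deliberate simplification; the scheme ExecuTorch actually applies uses per-channel scales and calibration data:

```python
# Minimal sketch of post-training int8 quantization: map float values to
# int8 with a scale and zero-point, as the --quantize flag does conceptually.
# Real quantization (per-channel scales, calibration) is more involved.

def quantize_int8(values):
    """Quantize floats to int8; returns (quantized, scale, zero_point)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # avoid a zero scale for constant inputs
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# The round-trip error stays within one quantization step (the scale):
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The trade-off is a small round-trip error in exchange for weights that are a quarter the size of float32, which is what makes NPU execution practical.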
+
+## What to expect
+
+ExecuTorch will:
+
+- Compile the PyTorch model to `.pte` format
+- Generate intermediate files (YAMLs, graphs, etc.)
+- Run the compiled model on the FVP
+- Output execution timing, operator delegation, and performance stats
+
+You should see output like:
+
+```output
+Batch Inference time 4.94 ms, 202.34 inferences/s
+Total delegated subgraphs: 1
+Number of delegated nodes: 419
+```
+
+A high number of delegated nodes means that most of the model's execution was offloaded to the Ethos-U NPU. This confirms the model was successfully compiled, deployed, and run with NPU acceleration.
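
If you are scripting benchmark runs, the headline figures can be pulled out of the `run.sh` output with a small parser. The log lines below are the sample output shown above; real logs may vary slightly in format, so treat the patterns as a starting point:

```python
# Extract headline metrics from run.sh output using regular expressions.
# The log format matches the sample above; adjust the patterns if it differs.
import re

LOG = """\
Batch Inference time 4.94 ms, 202.34 inferences/s
Total delegated subgraphs: 1
Number of delegated nodes: 419
"""

def parse_metrics(log):
    """Return a dict of the metrics found in the log (missing ones omitted)."""
    metrics = {}
    m = re.search(r"Inference time ([\d.]+) ms, ([\d.]+) inferences/s", log)
    if m:
        metrics["inference_ms"] = float(m.group(1))
        metrics["inferences_per_s"] = float(m.group(2))
    m = re.search(r"Number of delegated nodes: (\d+)", log)
    if m:
        metrics["delegated_nodes"] = int(m.group(1))
    return metrics

print(parse_metrics(LOG))
```

A helper like this makes it easy to compare runs, for example before and after changing the target from `ethos-u85-128` to `ethos-u85-256`.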
+
+## Next steps
+If you’d like to visualize instruction counts and performance using the GUI, continue to the next (optional) section.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
similarity index 67%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
index e3902cafd4..6e6ac5cf91 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
@@ -1,24 +1,28 @@
---
# User change
-title: "Configure the FVP GUI (optional)"
+title: "Enable GUI and deploy a model on Corstone-320 FVP"
-weight: 6 # 1 is first, 2 is second, etc.
+weight: 7 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
---
+## Visualize model execution using the FVP GUI
+
+You’ve successfully deployed a model on the Corstone-320 FVP from the command line. In this step, you’ll enable the platform’s built-in graphical output and re-run the model to observe instruction-level execution metrics in a windowed display.
+
## Find your IP address
Note down your computer's IP address:
```bash
ip addr show
```
-Note down the IP address of your active network interface (inet) which you will use later to pass as an argument to the FVP.
+You'll pass the IP address of your active network interface (inet) to the FVP as an argument later.
-{{% notice macOS %}}
+{{% notice Note %}}
-Note down your `en0` IP address (or whichever network adapter is active):
+For macOS, note down your `en0` IP address (or whichever network adapter is active):
```bash
ipconfig getifaddr en0 # Returns your Mac's WiFi IP address
@@ -26,7 +30,7 @@ ipconfig getifaddr en0 # Returns your Mac's WiFi IP address
{{% /notice %}}
-## Enable the FVP's GUI
+## Configure the FVP for GUI output
Edit the following parameters in your locally checked out [executorch/backends/arm/scripts/run_fvp.sh](https://github.com/pytorch/executorch/blob/d5fe5faadb8a46375d925b18827493cd65ec84ce/backends/arm/scripts/run_fvp.sh#L97-L102) file, to enable the Mobilenet V2 output on the FVP's GUI:
@@ -55,14 +59,26 @@ Edit the following parameters in your locally checked out [executorch/backends/a
## Deploy the model
-{{% notice macOS %}}
+Now run the Mobilenet V2 computer vision model, using [executorch/examples/arm/run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh):
+```bash
+./examples/arm/run.sh \
+--aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
+--output=mv2_u85 \
+--target=ethos-u85-128 \
+--model_name=mv2
+```
+
+Observe that the FVP loads the model file, compiles the PyTorch model to ExecuTorch `.pte` format and then shows an instruction count in the top right of the GUI:
+
+
-- **Start Docker:** on macOS, FVPs run inside a Docker container.
+{{% notice Note %}}
- **Do not use Colima Docker!**
+For macOS users, follow these instructions:
- - Make sure to use an [official version of Docker](https://www.docker.com/products/docker-desktop/) and not a free version like the [Colima](https://github.com/abiosoft/colima?tab=readme-ov-file) Docker container runtime
- - `run.sh` assumes Docker Desktop style networking (`host.docker.internal`) which breaks with Colima
+- Start Docker. FVPs run inside a Docker container.
+- Make sure to use an [official version of Docker](https://www.docker.com/products/docker-desktop/) and not a free version like the [Colima](https://github.com/abiosoft/colima?tab=readme-ov-file) Docker container runtime
+ - `run.sh` assumes Docker Desktop style networking (`host.docker.internal`) which breaks with Colima
- Colima then breaks the FVP GUI
- **Start XQuartz:** on macOS, the FVP GUI runs using XQuartz.
@@ -73,16 +89,3 @@ Edit the following parameters in your locally checked out [executorch/backends/a
xhost + 127.0.0.1 # The Docker container seems to proxy through localhost
```
{{% /notice %}}
-
-Now run the Mobilenet V2 computer vision model, using [executorch/examples/arm/run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh):
-```bash
-./examples/arm/run.sh \
---aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
---output=mv2_u85 \
---target=ethos-u85-128 \
---model_name=mv2
-```
-
-Observe that the FVP loads the model file, compiles the PyTorch model to ExecuTorch `.pte` format and then shows an instruction count in the top right of the GUI:
-
-
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
similarity index 93%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
index 2ab22dbdf2..1034282f82 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
@@ -2,20 +2,24 @@
# User change
title: "Evaluate Ethos-U Performance"
-weight: 7 # 1 is first, 2 is second, etc.
+weight: 8 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
---
+## Interpreting the results
+
+Now that you've successfully deployed and executed the MobileNet V2 model on the Corstone-320 FVP, you're ready to interpret the resulting performance data. This section covers inference time, operator delegation, and hardware-level metrics from the Ethos-U NPU.
+
## Observe Ahead-of-Time Compilation
-- The below output snippet from [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh) is how you can confirm ahead-of-time compilation
-- Specifically you want to see that the original PyTorch model was converted to an ExecuTorch `.pte` file
+- The following output from [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh) confirms that Ahead-of-Time (AOT) compilation was successful.
+- Specifically, you want to confirm that the original PyTorch model was compiled into an ExecuTorch `.pte` file.
- For the MobileNet V2 example, the compiled ExecuTorch file will be output as `mv2_arm_delegate_ethos-u85-128.pte`
{{% notice Note %}}
-In the below sample outputs, the `executorch` directory path is indicated as `/path/to/executorch`. Your actual path will depend on where you cloned your local copy of the [executorch repo](https://github.com/pytorch/executorch/tree/main).
+In the examples below, `/path/to/executorch` represents the directory where you cloned your local copy of the [ExecuTorch repo](https://github.com/pytorch/executorch/tree/main). Replace it with your actual path when running commands or reviewing output.
{{% /notice %}}
@@ -162,4 +166,6 @@ I [executorch:arm_perf_monitor.cpp:184] ethosu_pmu_cntr4 : 130
|ethosu_pmu_cntr3|External DRAM write beats(ETHOSU_PMU_EXT_WR_DATA_BEAT_WRITTEN)|Number of write data beats to external memory.|Helps detect offloading or insufficient SRAM.|
|ethosu_pmu_cntr4|Idle cycles(ETHOSU_PMU_NPU_IDLE)|Number of cycles where the NPU had no work scheduled (i.e., idle).|High idle count = possible pipeline stalls or bad scheduling.|
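The counters in the table above can be combined into simple derived metrics. As an illustrative sketch (the helper function and the total-cycle value are hypothetical, not part of the `run.sh` output), you could estimate NPU utilization from the idle-cycle count:

```python
# Illustrative sketch: turn raw Ethos-U PMU counter values into a simple
# utilization metric. The total-cycle value below is hypothetical; the
# idle-cycle value mirrors the ethosu_pmu_cntr4 reading from the log above.

def npu_utilization(total_cycles: int, idle_cycles: int) -> float:
    """Fraction of NPU cycles spent doing useful work (not idle)."""
    if total_cycles <= 0:
        return 0.0
    return (total_cycles - idle_cycles) / total_cycles

total_cycles = 1_000_000  # assumed total NPU cycle count for one inference
idle_cycles = 130         # ETHOSU_PMU_NPU_IDLE (ethosu_pmu_cntr4)

print(f"NPU utilization: {npu_utilization(total_cycles, idle_cycles):.2%}")
```

A high idle fraction alongside high DRAM read/write beats usually points at memory-bound layers rather than compute limits.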
-In this learning path you have successfully learned how to deploy a MobileNet V2 Model using ExecuTorch on Arm's Corstone-320 FVP.
+## Review
+
+In this Learning Path, you have learned how to deploy a MobileNet V2 model using ExecuTorch on Arm's Corstone-320 FVP. You're now ready to apply these skills to other models and configurations.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
index 613a355ee5..0127cde363 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
@@ -1,23 +1,19 @@
---
-title: Visualizing Ethos-U Performance on Arm FVPs
-
-draft: true
-cascade:
- draft: true
+title: Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs
minutes_to_complete: 120
-who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML), who want to understand and visualize ExecuTorch performance on a virtual device.
+who_is_this_for: This is an introductory topic for developers and data scientists who are new to TinyML and want to visualize ExecuTorch model performance on virtual Arm hardware.
learning_objectives:
- - Identify suitable Arm-based devices for TinyML applications.
- - Install Fixed Virtual Platforms (FVPs).
- - Deploy a TinyML ExecuTorch model to a Corstone-320 FVP.
- - Observe model execution on the FVP's graphical user interface (GUI).
+ - Identify Arm-based targets suitable for TinyML workloads
+ - Install and configure Fixed Virtual Platforms (FVPs)
+ - Deploy a TinyML model using ExecuTorch on a Corstone-320 FVP
+ - Visualize model execution using the FVP graphical interface
prerequisites:
- - Basic knowledge of Machine Learning concepts.
- - A computer running Linux or macOS.
+ - Familiarity with basic machine learning concepts
+ - A Linux or macOS computer with Python 3 installed
author: Waheed Brown
@@ -42,6 +38,7 @@ tools_software_languages:
- ExecuTorch
- Arm Compute Library
- GCC
+ - Docker
further_reading:
- resource:
diff --git a/content/learning-paths/servers-and-cloud-computing/_index.md b/content/learning-paths/servers-and-cloud-computing/_index.md
index 878d7bd782..792fa14883 100644
--- a/content/learning-paths/servers-and-cloud-computing/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/_index.md
@@ -47,7 +47,7 @@ tools_software_languages_filter:
- ASP.NET Core: 2
- Assembly: 4
- assembly: 1
-- Async-profiler: 1
+- async-profiler: 1
- AWS: 1
- AWS CDK: 2
- AWS CodeBuild: 1
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
index 357a8bdcd5..3ee259c09d 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
@@ -1,23 +1,22 @@
---
title: Create an Azure Linux 3.0 virtual machine with Cobalt 100 processors
-draft: true
-cascade:
- draft: true
+minutes_to_complete: 120
-minutes_to_complete: 120
-
-who_is_this_for: This Learning Path explains how to create a virtual machine on Azure running Azure Linux 3.0 on Cobalt 100 processors.
+who_is_this_for: This is an advanced topic for developers who want to run Azure Linux 3.0 on Arm-based Cobalt 100 processors in a custom virtual machine.
learning_objectives:
- - Use QEMU to create a raw disk image, boot a VM using an Aarch64 ISO, install the OS, and convert the raw disk image to VHD format.
- - Upload the VHD file to Azure and use the Azure Shared Image Gallery (SIG) to create a custom image.
- - Use the Azure CLI to create an Azure Linux 3.0 VM for Arm, using the custom image from the Azure SIG.
+ - Use QEMU to create a raw disk image
+ - Boot a virtual machine using an AArch64 ISO and install Azure Linux 3.0
+ - Convert the raw disk image to VHD format
+ - Upload the VHD file to Azure
+ - Use Azure Shared Image Gallery (SIG) to create a custom image
+ - Create an Azure Linux 3.0 virtual machine on Arm using the Azure CLI and the custom image
prerequisites:
- - A [Microsoft Azure](https://azure.microsoft.com/) account with permission to create resources, including instances using Cobalt 100 processors.
- - A Linux machine with [QEMU](https://www.qemu.org/download/) and the [Azure CLI](/install-guides/azure-cli/) installed and authenticated.
+ - A [Microsoft Azure](https://azure.microsoft.com/) account with permission to create resources, including instances using Cobalt 100 processors
+ - A Linux machine with [QEMU](https://www.qemu.org/download/) and the [Azure CLI](/install-guides/azure-cli/) installed and authenticated
author: Jason Andrews
@@ -38,19 +37,19 @@ operatingsystems:
further_reading:
- resource:
- title: Azure Virtual Machines documentation
+ title: Virtual machines in Azure
link: https://learn.microsoft.com/en-us/azure/virtual-machines/
type: documentation
- resource:
- title: Azure Shared Image Gallery documentation
+ title: Store and share images in an Azure Compute Gallery
link: https://learn.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries
type: documentation
- resource:
- title: QEMU User Documentation
+ title: QEMU Documentation
link: https://wiki.qemu.org/Documentation
type: documentation
- resource:
- title: Upload a VHD to Azure and create an image
+ title: Upload a VHD to Azure or copy a managed disk to another region - Azure CLI
link: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/upload-vhd
type: documentation
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
index 0159ccae7d..58b6f28d74 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
@@ -6,74 +6,74 @@ weight: 3
layout: learningpathall
---
-You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). There are links to the ISO downloads in the project README.
+## How do I create an Azure Linux image for Arm?
-Using QEMU, you can create a raw disk image and boot a virtual machine with the ISO to install the OS on the disk.
+You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). The project README includes links to ISO downloads.
-Once the installation is complete, you can convert the raw disk to a fixed-size VHD, upload it to Azure Blob Storage, and then use the Azure CLI to create a custom Arm image.
+Using [QEMU](https://www.qemu.org/), you can create a raw disk image, boot a virtual machine with the ISO, and install the operating system. After installation is complete, you'll convert the image to a fixed-size VHD, upload it to Azure Blob Storage, and use the Azure CLI to create a custom Arm image.
-## Download and create a virtual disk file
+## How do I download the Azure Linux ISO and create a raw disk image?
-Use `wget` to download the Azure Linux ISO image file.
+Use `wget` to download the Azure Linux ISO image file:
```bash
wget https://aka.ms/azurelinux-3.0-aarch64.iso
```
-Use `qemu-img` to create a 32 GB empty raw disk image to install the OS.
-
-You can increase the disk size by modifying the value passed to `qemu-img`.
+Create a 32 GB empty raw disk image to install the OS:
```bash
qemu-img create -f raw azurelinux-arm64.raw 34359738368
```
-## Boot and install the OS
+{{% notice Note %}}
+You can change the disk size by adjusting the value passed to `qemu-img`. Ensure it meets the minimum disk size requirements for Azure (typically at least 30 GB).
+{{% /notice %}}
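The byte count passed to `qemu-img` above is simply 32 GiB expressed in bytes. As a quick sanity check (and a reminder that `qemu-img` also accepts human-readable size suffixes), you can compute the value in the shell:

```shell
# 32 GiB in bytes: 32 * 1024 * 1024 * 1024
echo $(( 32 * 1024 * 1024 * 1024 ))

# Equivalent create command using a size suffix:
# qemu-img create -f raw azurelinux-arm64.raw 32G
```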
+
+
+## How do I install Azure Linux on a raw disk image using QEMU?
Use QEMU to boot the operating system in an emulated Arm VM.
```bash
-qemu-system-aarch64 \
- -machine virt \
- -cpu cortex-a72 \
- -m 4096 \
- -nographic \
- -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
- -drive if=none,file=azurelinux-arm64.raw,format=raw,id=hd0 \
- -device virtio-blk-device,drive=hd0 \
- -cdrom azurelinux-3.0-aarch64.iso \
- -netdev user,id=net0 \
+qemu-system-aarch64 \
+ -machine virt \
+ -cpu cortex-a72 \
+ -m 4096 \
+ -nographic \
+ -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
+ -drive if=none,file=azurelinux-arm64.raw,format=raw,id=hd0 \
+ -device virtio-blk-device,drive=hd0 \
+ -cdrom azurelinux-3.0-aarch64.iso \
+ -netdev user,id=net0 \
-device virtio-net-device,netdev=net0
```
-Navigate through the installer by entering the hostname, username, and password for the custom image.
-You should use the username of `azureuser` if you want match the instructions on the following pages.
-
-Be patient, it takes some time to complete the full installation.
+Follow the installer prompts to enter the hostname, username, and password. Use `azureuser` as the username to ensure compatibility with later steps.
-At the end of installation you are prompted for confirmation to reboot the system.
+{{% notice Note %}}The installation process takes several minutes.{{% /notice %}}
-Once the newly installed OS boots successfully, install the Azure Linux Agent for VM provisioning, and power off the VM.
+At the end of installation, confirm the reboot prompt. After rebooting into the newly installed OS, install and enable the Azure Linux Agent:
```bash
-sudo dnf install WALinuxAgent -y
-sudo systemctl enable waagent
-sudo systemctl start waagent
+sudo dnf install WALinuxAgent -y
+sudo systemctl enable waagent
+sudo systemctl start waagent
sudo poweroff
```
-Be patient, it takes some time to install the packages and power off.
+{{% notice Note %}} It can take a few minutes to install the agent and power off the VM.{{% /notice %}}
-## Convert the raw disk to VHD Format
+## How do I convert a raw disk image to a fixed-size VHD for Azure?
-Now that the raw disk image is ready to be used, convert the image to fixed-size VHD, making it compatible with Azure.
+Now that the raw disk image is ready, convert it to a fixed-size VHD to make it compatible with Azure.
```bash
qemu-img convert -f raw -o subformat=fixed,force_size -O vpc azurelinux-arm64.raw azurelinux-arm64.vhd
```
{{% notice Note %}}
-VHD files have 512 bytes of footer attached at the end. The `force_size` flag ensures that the exact virtual size specified is used for the final VHD file. Without this, QEMU may round the size or adjust for footer overhead (especially when converting from raw to VHD). The `force_size` flag forces the final image to match the original size. This flag makes the final VHD size a whole number in MB or GB, which is required for Azure.
+VHD files include a 512-byte footer at the end. The `force_size` flag ensures the final image size matches the requested virtual size; without it, QEMU might round the size or adjust for footer overhead (especially when converting from raw to VHD). This is required for Azure compatibility, because Azure expects the virtual size to be a whole number of MB or GB.
{{% /notice %}}
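To sanity-check the conversion, you can verify that the VHD's virtual size (the file size minus the 512-byte footer) is a whole number of mebibytes. The helper below is an illustrative sketch, not part of the official workflow:

```shell
# check_vhd_alignment: verify that a fixed VHD's virtual size
# (file size minus the 512-byte footer) is a whole number of MiB,
# as Azure requires for uploaded images.
check_vhd_alignment() {
  local size virtual
  size=$(stat -c %s "$1" 2>/dev/null || stat -f %z "$1")  # GNU or BSD stat
  virtual=$(( size - 512 ))
  if [ $(( virtual % (1024 * 1024) )) -eq 0 ]; then
    echo "aligned"
  else
    echo "misaligned"
  fi
}

# Usage: check_vhd_alignment azurelinux-arm64.vhd
```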
-Next, you can save the image in your Azure account.
+In the next step, you'll upload the VHD image to Azure and register it as a custom image for use with Arm-based virtual machines.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
index fa9b4854f7..65b0b4c00e 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
@@ -1,38 +1,56 @@
---
-title: "About Azure Linux"
+title: "Build and run Azure Linux 3.0 on an Arm-based Azure virtual machine"
weight: 2
layout: "learningpathall"
---
-## What is Azure Linux 3.0?
+## What is Azure Linux 3.0 and how can I use it?
-Azure Linux 3.0 is a Linux distribution developed and maintained by Microsoft, specifically designed for use on the Azure cloud platform. It is optimized for running cloud-native workloads, such as containers, microservices, and Kubernetes clusters, and emphasizes performance, security, and reliability. Azure Linux 3.0 provides native support for the Arm (AArch64) architecture, enabling efficient, scalable, and cost-effective deployments on Arm-based infrastructure within Azure.
+Azure Linux 3.0 is a Microsoft-developed Linux distribution designed for cloud-native workloads on the Azure platform. It is optimized for running containers, microservices, and Kubernetes clusters, with a focus on performance, security, and reliability.
-Currently, Azure Linux 3.0 is not available as a ready-made virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images, published by Ntegral Inc., are offered. This means you cannot directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
+Azure Linux 3.0 includes native support for the Arm architecture (AArch64), enabling efficient, scalable, and cost-effective deployments on Arm-based Azure infrastructure.
-However, you can still run Azure Linux 3.0 on Arm-based Azure VMs by creating your own disk image. Using QEMU, an open-source machine emulator and virtualizer, you can build a custom Azure Linux 3.0 Arm image locally. After building the image, you can upload it to your Azure account as a managed disk or custom image. This process allows you to deploy and manage Azure Linux 3.0 VMs on Arm infrastructure, even before official images are available.
+## Can I run Azure Linux 3.0 on Arm-based Azure virtual machines?
-This Learning Path guides you through the steps to build an Azure Linux 3.0 disk image with QEMU, upload it to Azure, and prepare it for use in creating virtual machines.
+At the time of writing, Azure Linux 3.0 isn't available as a prebuilt virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images (published by Ntegral Inc.) are available. This means you can't directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
-Following this process, you'll be able to create and run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
+## How can I create and use a custom Azure Linux image for Arm?
-To get started install the dependencies on your local Linux machine. The instructions work for both Arm or x86 running Ubuntu.
+To run Azure Linux 3.0 on an Arm-based VM, you'll need to build a custom image manually. Using [QEMU](https://www.qemu.org/), an open-source machine emulator and virtualizer, you can build the image locally. After the build completes, upload the resulting image to your Azure account as either a managed disk or a custom image resource. This process lets you deploy and manage Azure Linux 3.0 VMs on Arm-based Azure infrastructure, even before official images are published in the Marketplace. This gives you full control over image configuration and early access to Arm-native workloads.
+
+This Learning Path guides you through the steps to:
+
+- Build an Azure Linux 3.0 disk image with QEMU
+- Upload the image to Azure
+- Create a virtual machine from the custom image
+
+By the end of this process, you'll be able to run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
+
+## What tools do I need to build the Azure Linux image locally?
+
+You can build the image on either an Arm or x86 Ubuntu system. First, install QEMU and related tools:
```bash
sudo apt update && sudo apt install qemu-system-arm qemu-system-aarch64 qemu-efi-aarch64 qemu-utils ovmf -y
```
-You also need to install the Azure CLI. Refer to [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). You can also use the [Azure CLI install guide](/install-guides/azure-cli/) for Arm Linux systems.
+You'll also need the Azure CLI. To install it, follow the [Azure CLI install guide](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
+
+If you're using an Arm Linux machine, see the [Azure CLI install guide](/install-guides/azure-cli/).
+
+## How do I verify the Azure CLI installation?
-Make sure the CLI is working by running the version command and confirm the version is printed.
+After installing the CLI, verify it's working by running the following command:
```bash
az version
```
-You should see an output similar to:
+You should see an output similar to the following:
```output
{
@@ -43,4 +61,4 @@ You should see an output similar to:
}
```
-Continue to learn how to prepare the Azure Linux disk image.
\ No newline at end of file
+In the next section, you'll learn how to build the Azure Linux 3.0 disk image using QEMU.
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
index ab66336077..8bae3a4507 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
@@ -6,94 +6,109 @@ weight: 4
layout: learningpathall
---
-You can now use the Azure CLI to create a disk image in Azure and copy the local image to Azure.
+## How do I upload and register a VHD image in Azure?
-## Prepare Azure resources for the image
+You're now ready to use the Azure CLI to create and upload a custom disk image to Azure. In this section, you'll configure environment variables, provision the necessary Azure resources, and upload a `.vhd` file. Then, you'll use the Shared Image Gallery to register the image for use with custom virtual machines.
-Before uploading the VHD file to Azure storage, set the environment variables for the Azure CLI.
+## How do I set up environment variables for the Azure CLI?
+
+Before uploading your VHD file, set the environment variables for the Azure CLI:
```bash
-RESOURCE_GROUP="MyCustomARM64Group"
-LOCATION="centralindia"
-STORAGE_ACCOUNT="mycustomarm64storage"
-CONTAINER_NAME="mycustomarm64container"
-VHD_NAME="azurelinux-arm64.vhd"
-GALLERY_NAME="MyCustomARM64Gallery"
-IMAGE_DEF_NAME="MyAzureLinuxARM64Def"
-IMAGE_VERSION="1.0.0"
-PUBLISHER="custom"
-OFFER="custom-offer"
-SKU="custom-sku"
-OS_TYPE="Linux"
-ARCHITECTURE="Arm64"
-HYPERV_GEN="V2"
-STORAGE_ACCOUNT_TYPE="Standard_LRS"
-VM_NAME="MyAzureLinuxARMVM"
-ADMIN_USER="azureuser"
+RESOURCE_GROUP="MyCustomARM64Group"
+LOCATION="centralindia"
+STORAGE_ACCOUNT="mycustomarm64storage"
+CONTAINER_NAME="mycustomarm64container"
+VHD_NAME="azurelinux-arm64.vhd"
+GALLERY_NAME="MyCustomARM64Gallery"
+IMAGE_DEF_NAME="MyAzureLinuxARM64Def"
+IMAGE_VERSION="1.0.0"
+PUBLISHER="custom"
+OFFER="custom-offer"
+SKU="custom-sku"
+OS_TYPE="Linux"
+ARCHITECTURE="Arm64"
+HYPERV_GEN="V2"
+STORAGE_ACCOUNT_TYPE="Standard_LRS"
+VM_NAME="MyAzureLinuxARMVM"
+ADMIN_USER="azureuser"
VM_SIZE="Standard_D4ps_v6"
```
{{% notice Note %}}
-You can modify the environment variables such as RESOURCE_GROUP, VM_NAME, and LOCATION based on your naming preferences, region, and resource requirements.
+Modify the environment variables `RESOURCE_GROUP`, `VM_NAME`, and `LOCATION` to suit your naming preferences, region, and resource requirements.
{{% /notice %}}
-Make sure to login to Azure using the CLI.
+## How do I log in and create Azure resources?
+
+First, log in to Azure using the CLI:
```bash
az login
```
-If a link is printed, open it in a browser and enter the provided code to authenticate.
+If prompted, open the browser link and enter the verification code to authenticate.
-Create a new resource group. If you are using an existing resource group for the RESOURCE_GROUP environment variable you can skip this step.
+Then, create a new resource group. If you are using an existing resource group for the RESOURCE_GROUP environment variable, you can skip this step:
```bash
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
```
-Create Azure blob storage.
+Create a new storage account to store your image:
```bash
-az storage account create \
- --name "$STORAGE_ACCOUNT" \
- --resource-group "$RESOURCE_GROUP" \
- --location "$LOCATION" \
- --sku Standard_LRS \
+az storage account create \
+ --name "$STORAGE_ACCOUNT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION" \
+ --sku Standard_LRS \
--kind StorageV2
```
-Create a blob container in the blob storage account.
+Next, create a blob container in the storage account:
```bash
-az storage container create \
- --name "$CONTAINER_NAME" \
+az storage container create \
+ --name "$CONTAINER_NAME" \
--account-name "$STORAGE_ACCOUNT"
```
-## Upload and save the image in Azure
+## How do I upload a VHD image to Azure Blob Storage?
+
+First, retrieve the storage account key:
+
+```bash
+STORAGE_KEY=$(az storage account keys list \
+ --resource-group "$RESOURCE_GROUP" \
+ --account-name "$STORAGE_ACCOUNT" \
+ --query '[0].value' --output tsv)
+```
-Upload the VHD file to Azure.
+Then upload your VHD file to Azure Blob Storage:
```bash
-az storage blob upload \
- --account-name "$STORAGE_ACCOUNT" \
- --container-name "$CONTAINER_NAME" \
- --name "$VHD_NAME" \
+az storage blob upload \
+ --account-name "$STORAGE_ACCOUNT" \
+ --container-name "$CONTAINER_NAME" \
+ --name "$VHD_NAME" \
--file ./azurelinux-arm64.vhd
```
-You can now use the Azure console to see the image in your Azure account.
+You can now use the Azure portal to view the image in your Azure account.
-Next, create a custom VM image from this VHD, using Azure Shared Image Gallery (SIG).
+## How do I register a custom image in the Azure Shared Image Gallery?
+
+Create a custom VM image from the VHD, using the Azure Shared Image Gallery (SIG):
```bash
-az sig create \
- --resource-group "$RESOURCE_GROUP" \
- --gallery-name "$GALLERY_NAME" \
+az sig create \
+ --resource-group "$RESOURCE_GROUP" \
+ --gallery-name "$GALLERY_NAME" \
--location "$LOCATION"
```
-Create the image definition.
+Create the image definition:
```bash
az sig image-definition create \
@@ -108,7 +123,7 @@ az sig image-definition create \
--hyper-v-generation "$HYPERV_GEN"
```
-Create the image version to register the VHD as a version of the custom image.
+Create the image version from the uploaded VHD:
```bash
az sig image-version create \
@@ -119,18 +134,22 @@ az sig image-version create \
--location "$LOCATION" \
--os-vhd-uri "https://${STORAGE_ACCOUNT}.blob.core.windows.net/${CONTAINER_NAME}/${VHD_NAME}" \
--os-vhd-storage-account "$STORAGE_ACCOUNT" \
- --storage-account-type "$STORAGE_ACCOUNT_TYPE"
+ --storage-account-type "$STORAGE_ACCOUNT_TYPE"
```
-Once the image has been versioned, you can retrieve the unique image ID for use in VM creation.
+## How do I retrieve the image ID for VM creation?
+
+Once the image has been versioned, you can retrieve the unique image ID for use in VM creation:
```bash
-IMAGE_ID=$(az sig image-version show \
- --resource-group "$RESOURCE_GROUP" \
- --gallery-name "$GALLERY_NAME" \
- --gallery-image-definition "$IMAGE_DEF_NAME" \
+IMAGE_ID=$(az sig image-version show \
+ --resource-group "$RESOURCE_GROUP" \
+ --gallery-name "$GALLERY_NAME" \
+ --gallery-image-definition "$IMAGE_DEF_NAME" \
--gallery-image-version "$IMAGE_VERSION" \
--query "id" -o tsv)
```
-Next, you can create a virtual machine with the new image using the image ID.
\ No newline at end of file
+You'll use this ID to deploy a new virtual machine based on your custom image.
+
+You've successfully uploaded and registered a custom Arm64 VM image in Azure. In the next section, you'll learn how to create a virtual machine using this image.
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
index c8592c1f96..67d19f2655 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
@@ -6,29 +6,33 @@ weight: 5
layout: learningpathall
---
-## Create a virtual machine using the new image
+## How do I launch a virtual machine using my custom Azure image?
-You can now use the newly created Azure Linux image to create a virtual machine in Azure with Cobalt 100 processors. Confirm the VM is created by looking in your Azure account in the “Virtual Machines” section.
+Now that your image is registered, you can launch a new VM using the Azure CLI and the custom image ID. This example creates a Linux VM on Arm-based Cobalt 100 processors.
+
+Use the following command to create the virtual machine:
```bash
-az vm create \
- --resource-group "$RESOURCE_GROUP" \
- --name "$VM_NAME" \
- --image "$IMAGE_ID" \
- --size "$VM_SIZE" \
- --admin-username "$ADMIN_USER" \
- --generate-ssh-keys \
+az vm create \
+ --resource-group "$RESOURCE_GROUP" \
+ --name "$VM_NAME" \
+ --image "$IMAGE_ID" \
+ --size "$VM_SIZE" \
+ --admin-username "$ADMIN_USER" \
+ --generate-ssh-keys \
--public-ip-sku Standard
```
After the VM is successfully created, retrieve the public IP address.
```bash
-az vm show \
- --resource-group "$RESOURCE_GROUP" \
- --name "$VM_NAME" \
- --show-details \
- --query "publicIps" \
+az vm show \
+ --resource-group "$RESOURCE_GROUP" \
+ --name "$VM_NAME" \
+ --show-details \
+ --query "publicIps" \
-o tsv
```
@@ -38,7 +42,7 @@ Use the public IP address to SSH to the VM. Replace `<public-ip>` with the public IP address of your VM.
ssh azureuser@<public-ip>
```
-After you login, print the machine information.
+After connecting, print the machine information:
```bash
uname -a
diff --git a/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md b/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
index 6838a42e06..51791d684e 100644
--- a/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
+++ b/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
@@ -10,7 +10,7 @@ layout: learningpathall
The instructions in this Learning Path are for any Arm server running Ubuntu 24.04.2 LTS. You will need at least three Arm server instances with at least 64 cores and 128GB of RAM to run this example. The instructions have been tested on an AWS Graviton4 c8g.16xlarge instance
## Overview
-llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year ago from its publication date, rgerganov’s RPC code was merged into llama.cpp, enabling distributed inference of large LLMs across multiple CPU-based machines—even when the models don’t fit into the memory of a single machine. In this learning path, we’ll explore how to run a 405B parameter model on Arm-based CPUs.
+llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year before this Learning Path was published, rgerganov's RPC code was merged into llama.cpp, enabling distributed inference of large LLMs across multiple CPU-based machines, even when a model doesn't fit into the memory of a single machine. In this Learning Path, you'll explore how to run a 405B parameter model on Arm-based CPUs.
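The distributed pattern described above uses llama.cpp's RPC backend: each worker machine runs an `rpc-server` process, and the main host points the inference binary at those workers. The commands below are an illustrative sketch only; the hostnames, port, and model path are placeholders, and exact binary names and flags depend on your llama.cpp build:

```shell
# On each worker instance (llama.cpp built with -DGGML_RPC=ON),
# start an RPC server listening for work from the main host:
#   ./rpc-server --host 0.0.0.0 --port 50052
#
# On the main instance, point inference at the workers so layers
# are distributed across their combined memory:
#   ./llama-cli -m model.gguf \
#       --rpc "worker1:50052,worker2:50052" \
#       -p "Hello"
```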
For the purposes of this demonstration, the following experimental setup will be used:
- Total number of instances: 3
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index 6fbe8aeb81..5bdd5fa0ca 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -1,5 +1,5 @@
---
-title: Setup Tomcat Benchmark Environment
+title: Set up Tomcat benchmark environment
weight: 2
### FIXED, DO NOT MODIFY
@@ -8,43 +8,58 @@ layout: learningpathall
## Overview
-There are numerous performance analysis methods and tools for Java applications, among which the call stack flame graph method is regarded as a conventional entry-level approach. Therefore, generating flame graphs is considered a basic operation.
-Various methods and tools are available for generating Java flame graphs, including `async-profiler`, `Java Agent`, `jstack`, `JFR` (Java Flight Recorder), etc.
-This Learning Path focuses on introducing two simple and easy-to-use methods: `async-profiler` and `Java Agent`.
+Flame graphs are a widely used entry point for analyzing Java application performance. Tools for generating flame graphs include `async-profiler`, Java agents, `jstack`, and Java Flight Recorder (JFR). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
-## Setup Benchmark Server - Tomcat
-- [Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that enables running Java web applications, handling HTTP requests and serving dynamic content.
-- As a core component in Java web development, Apache Tomcat supports Servlet, JSP, and WebSocket technologies, providing a lightweight runtime environment for web apps.
+In this section, you'll set up a benchmark environment using Apache Tomcat and `wrk2` to simulate HTTP load and evaluate performance on an Arm-based server.
+
+## Set up the Tomcat benchmark server
+[Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that runs Java web applications, handles HTTP requests, and serves dynamic content. It supports technologies such as Servlet, JSP, and WebSocket.
+
+## Install the Java Development Kit (JDK)
+
+Install OpenJDK 21 on your Arm-based Ubuntu server:
-1. Start by installing Java Development Kit (JDK) on your Arm-based server running Ubuntu:
```bash
sudo apt update
sudo apt install -y openjdk-21-jdk
```
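Verify the installation before continuing:

```shell
java -version
```

The output should report an OpenJDK 21 runtime (assuming no other JDK is selected as the default).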
-2. Next, you can install Tomcat by either [building it from source](https://github.com/apache/tomcat) or downloading the pre-built package simply from [the official website](https://tomcat.apache.org/whichversion.html)
+## Install Tomcat
+
+Download and extract Tomcat:
+
```bash
wget -c https://dlcdn.apache.org/tomcat/tomcat-11/v11.0.9/bin/apache-tomcat-11.0.9.tar.gz
tar xzf apache-tomcat-11.0.9.tar.gz
```
+Alternatively, you can build Tomcat [from source](https://github.com/apache/tomcat).
+
+## Enable access to Tomcat examples
+
+To access the built-in examples from your local network or external IP, use a text editor to modify the `context.xml` file by updating the `RemoteAddrValve` configuration to allow all IP addresses.
+
+The file is at:
-3. If you intend to access the built-in examples of Tomcat via an intranet IP or even an external IP, you need to modify a configuration file as shown:
```bash
-vi apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
+apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
```
+Then change the `allow` attribute so that all addresses are permitted, and save the file. The default `RemoteAddrValve` entry looks similar to the following (the exact pattern can vary between Tomcat versions):
+
```xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
```
+Change the `allow` value to match all addresses:
+
```xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow=".*" />
```
-Now you can start Tomcat Server:
+
+## Start the Tomcat server
+
+Start the server:
+
```bash
./apache-tomcat-11.0.9/bin/startup.sh
```
-The output from starting the server should look like:
+You should see output like:
```output
Using CATALINA_BASE: /home/ubuntu/apache-tomcat-11.0.9
@@ -56,42 +71,58 @@ Using CATALINA_OPTS:
Tomcat started.
```
-4. If you can access the page at "http://${tomcat_ip}:8080/examples" via a browser, you can proceed to the next benchmarking step.
+## Confirm server access
+
+In your browser, open: `http://${tomcat_ip}:8080/examples`.
+
+You should see the Tomcat welcome page and examples, as shown below:
+
+
-
+
-
+{{% notice Note %}}Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.{{% /notice%}}
-Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.
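You can also check reachability from the command line. Assuming `tomcat_ip` is set in your shell, an HTTP status code of `200` confirms the examples app is being served:

```shell
curl -s -o /dev/null -w '%{http_code}\n' "http://${tomcat_ip}:8080/examples/"
```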
+## Set up the benchmarking client using wrk2
+[wrk2](https://github.com/giltene/wrk2) is a high-performance HTTP benchmarking tool that generates constant-throughput load and measures latency percentiles for web services. It is an enhanced version of `wrk` that provides accurate latency statistics under controlled request rates, making it well suited to performance testing of HTTP servers.
-## Setup Benchmark Client - [wrk2](https://github.com/giltene/wrk2)
-`wrk2` is a high-performance HTTP benchmarking tool specialized in generating constant throughput loads and measuring latency percentiles for web services. `wrk2` is an enhanced version of `wrk` that provides accurate latency statistics under controlled request rates, ideal for performance testing of HTTP servers.
+{{% notice Note %}}
+Currently `wrk2` is only supported on x86 machines. Run the benchmark client steps below on an `x86_64` server running Ubuntu.
+{{%/notice%}}
-Currently `wrk2` is only supported on x86 machines. You will run the Benchmark Client steps shown below on an x86_64 server running Ubuntu.
+## Install dependencies
+Install the required packages:
-1. To use `wrk2`, you will need to install some essential tools before you can build it:
```bash
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev git zlib1g-dev
```
-2. Now you can clone and build it from source:
+## Clone and build wrk2
+
+Clone the repository and compile the tool:
+
```bash
sudo git clone https://github.com/giltene/wrk2.git
cd wrk2
sudo make
```
-Move the executable to somewhere in your PATH:
+
+Move the binary to a directory in your system’s PATH:
+
```bash
sudo cp wrk /usr/local/bin
```
-3. Finally, you can run the benchmark of Tomcat through wrk2.
+## Run the benchmark
+
+Use the following command to benchmark the HelloWorld servlet running on Tomcat:
+
```bash
wrk -c32 -t16 -R50000 -d60 http://${tomcat_ip}:8080/examples/servlets/servlet/HelloWorldExample
```
* `-c32` keeps 32 HTTP connections open

* `-t16` uses 16 client threads

* `-R50000` targets a constant rate of 50,000 requests per second

* `-d60` runs the test for 60 seconds

-Shown below is the output of wrk2:
+You should see output similar to:
```console
Running 1m test @ http://172.26.203.139:8080/examples/servlets/servlet/HelloWorldExample
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
index 5346d45fac..cd1f236620 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
@@ -1,32 +1,54 @@
---
-title: Java FlameGraph - Async-profiler
+title: Generate Java flame graphs using async-profiler
weight: 3
### FIXED, DO NOT MODIFY
layout: learningpathall
---
-## Java Flame Graph Generation using [async-profiler](https://github.com/async-profiler/async-profiler)
-`async-profiler` is a low-overhead sampling profiler for JVM applications, capable of capturing CPU, allocation, and lock events to generate actionable performance insights.
-A lightweight tool for Java performance analysis, `async-profiler` produces flame graphs and detailed stack traces with minimal runtime impact, suitable for production environments. In this section, you will learn how to install and use it to profile your Tomcat instance being benchmarked.
+## Overview
+
+[Async-profiler](https://github.com/async-profiler/async-profiler) is a low-overhead sampling profiler for JVM applications. It can capture CPU usage, memory allocations, and lock events to generate flame graphs and detailed stack traces.
+
+
+This tool is well-suited for production environments due to its minimal runtime impact. In this section, you'll install and run `async-profiler` to analyze performance on your Tomcat instance under benchmark load.
+
+{{%notice Note%}}
+Install and run `async-profiler` on the same Arm-based Linux machine where Tomcat is running to ensure accurate profiling.
+{{%/notice%}}
+
+## Install async-profiler
+
+Download and extract the latest release:
-You should deploy `async-profiler` on the same Arm Linux machine where Tomcat is running to ensure accurate performance profiling.
-1. Download async-profiler-4.0 and uncompress
```bash
wget -c https://github.com/async-profiler/async-profiler/releases/download/v4.0/async-profiler-4.0-linux-arm64.tar.gz
tar xzf async-profiler-4.0-linux-arm64.tar.gz
```
-2. Run async-profiler to profile the Tomcat instance under benchmarking
+## Run the profiler
+
+Navigate to the profiler binary directory:
+
```bash
cd async-profiler-4.0-linux-arm64/bin
-./asprof -d 10 -f profile.html $(jps | awk /Bootstrap/'{print $1}')
```
-You can also run:
+Run async-profiler against the Tomcat process:
+
+```bash
+./asprof -d 10 -f profile.html $(jps | awk '/Bootstrap/{print $1}')
```
+Alternatively, if you already know the process ID (PID):
+
+```bash
./asprof -d 10 -f profile.html ${tomcat_process_id}
```
+* `-d 10` sets the profiling duration to 10 seconds
+
+* `-f profile.html` specifies the output file
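The `$(jps | awk ...)` command substitution used earlier resolves Tomcat's PID automatically: `jps` prints one `PID MainClass` pair per line, and `awk` prints the first field of the line whose class is `Bootstrap`, Tomcat's launcher. A stand-alone sketch of that filtering, using canned `jps`-style output:

```shell
# Simulated `jps` output: one "PID MainClass" pair per line.
jps_output='12345 Bootstrap
23456 Jps'

# Select the Bootstrap line and print its first field (the PID).
pid=$(printf '%s\n' "$jps_output" | awk '/Bootstrap/{print $1}')
echo "$pid"    # prints 12345
```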
+
+## View the flame graph
-3. Now launch `profile.html` in a browser to analyse your profiling result
+Open the generated `profile.html` file in a browser to view your Java flame graph:
-
+
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
index 96ff1ea117..c2de6d84a4 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
@@ -1,5 +1,5 @@
---
-title: Java FlameGraph - Java Agent
+title: Generate Java flame graphs using a Java agent
weight: 4
@@ -7,42 +7,66 @@ weight: 4
layout: learningpathall
---
-## Java Flame Graph Generation using Java agent and perf
-To profile a Java application with perf and ensure proper symbol resolution, you must include `libperf-jvmti.so` when launching the Java application.
-- `libperf-jvmti.so` is a JVM TI agent library enabling perf to resolve Java symbols, facilitating accurate profiling of Java applications.
-- A specialized shared library, `libperf-jvmti.so` bridges perf and the JVM, enabling proper translation of memory addresses to Java method names during profiling.
+## Overview
+
+You can profile a Java application using `perf` by including a Java agent that enables symbol resolution. This allows `perf` to capture meaningful method names instead of raw memory addresses.
+
+The required library is `libperf-jvmti.so`, a JVM Tool Interface (JVMTI) agent that bridges `perf` and the JVM. It ensures that stack traces collected during profiling can be accurately resolved to Java methods.
+
+In this section, you'll configure Tomcat to use this Java agent and generate a flame graph using the FlameGraph toolkit.
+
+## Locate the Java agent
+
+Locate the `libperf-jvmti.so` library:
-1. Find where `libperf-jvmti.so` is installed on your Arm-based Linux server:
```bash
pushd /usr/lib
find . -name libperf-jvmti.so
```
-The output will show the path of the library that you will then include in your Tomcat setup file:
+The output will show the path to the shared object file:
+
+## Modify Tomcat configuration
+
+Open the Tomcat launch script:
+
```bash
vi apache-tomcat-11.0.9/bin/catalina.sh
```
-Add JAVA_OPTS="$JAVA_OPTS -agentpath:/usr/lib/linux-tools-6.8.0-63/libperf-jvmti.so -XX:+PreserveFramePointer" to `catalina.sh`. Make sure the path matches the location on your machine from the previous step.
+Add the following line (replace the path if different on your system):
+```bash
+JAVA_OPTS="$JAVA_OPTS -agentpath:/usr/lib/linux-tools-6.8.0-63/libperf-jvmti.so -XX:+PreserveFramePointer"
+```
Now shut down and restart Tomcat:
+
```bash
cd apache-tomcat-11.0.9/bin
./shutdown.sh
./startup.sh
```
-2. Use perf to profile Tomcat, and restart wrk that running on your x86 instance if necessary:
+## Run perf to record profiling data
+
+Run the following command to record a 10-second profile of the Tomcat process:
+
```bash
sudo perf record -g -k1 -p $(jps | awk '/Bootstrap/{print $1}') -- sleep 10
```
-This command will record the collected data in a file named `perf.data`
+This generates a file named `perf.data`.
+
+If needed, restart `wrk` on your x86 client to generate load during profiling.
+
+## Generate a flame graph
+
+Clone the FlameGraph repository and add it to your PATH:
-3. Convert the collected `perf.data` into a Java flame graph using FlameGraph
```bash
git clone https://github.com/brendangregg/FlameGraph.git
export PATH=$PATH:$(pwd)/FlameGraph
sudo perf inject -j -i perf.data -o perf.data.jit
sudo perf script -i perf.data.jit | stackcollapse-perf.pl | flamegraph.pl > profile.svg
```
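This pipeline works because `stackcollapse-perf.pl` folds `perf script` output into one line per unique stack: semicolon-separated frames followed by a sample count, which `flamegraph.pl` renders as box widths. A small sketch of that intermediate format (the frame names here are invented for illustration):

```shell
# Collapsed-stack format: "frame1;frame2;... COUNT" per line.
cat > collapsed.txt <<'EOF'
java;Bootstrap.main;HttpServlet.service;HelloWorldExample.doGet 42
java;Bootstrap.main;HttpServlet.service 7
EOF

# flamegraph.pl sizes each box by these counts; the total number
# of samples here is 42 + 7 = 49.
awk '{total += $NF} END {print total}' collapsed.txt    # prints 49
```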
+## View the result
-4. You can now successfully launch `profile.svg` in a browser to analyse the profiling result
+You can now open `profile.svg` in a browser to analyze the profiling result:
-
+
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
index 06a3c9281c..b675188fa3 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
@@ -1,57 +1,49 @@
---
-title: Analyze Java Performance on Arm servers using FlameGraphs
-
-draft: true
-cascade:
- draft: true
-
+title: Analyze Java performance on Arm servers using flame graphs
minutes_to_complete: 30
-who_is_this_for: This is an introductory topic for software developers looking to analyze the performance of their Java applications on the Arm Neoverse based servers using flame graphs.
+who_is_this_for: This is an introductory topic for developers who want to analyze the performance of Java applications on Arm Neoverse-based servers using flame graphs.
learning_objectives:
- - How to set up tomcat benchmark environment
- - How to generate flame graphs for Java applications using async-profiler
- - How to generate flame graphs for Java applications using Java agent
+ - Set up a benchmarking environment using Tomcat and wrk2
+ - Generate flame graphs using async-profiler
+ - Generate flame graphs using a Java agent
prerequisites:
- - An Arm-based and x86 computer running Ubuntu. You can use a server instance from a cloud service provider of your choice.
- - Basic familiarity with Java applications and flame graphs
+ - Access to both Arm-based and x86-based computers running Ubuntu (you can use cloud-based server instances)
+ - Basic familiarity with Java applications and performance profiling using flame graphs
-author: Ying Yu, Martin Ma
+author:
+ - Ying Yu
+ - Martin Ma
-### Tags
+# Tags
skilllevels: Introductory
subjects: Performance and Architecture
armips:
- - Neoverse
-
+ - Neoverse
+
tools_software_languages:
- - OpenJDK-21
- - Tomcat
- - Async-profiler
- - FlameGraph
- - wrk2
-operatingsystems:
- - Linux
+ - OpenJDK-21
+ - Tomcat
+ - async-profiler
+ - FlameGraph
+ - wrk2
+operatingsystems:
+ - Linux
further_reading:
- - resource:
- title: OpenJDK Wiki
- link: https://wiki.openjdk.org/
- type: documentation
- - resource:
- title: Java FlameGraphs
- link: https://www.brendangregg.com/flamegraphs.html
- type: website
-
-
-
-
-### FIXED, DO NOT MODIFY
-# ================================================================================
-weight: 1 # _index.md always has weight of 1 to order correctly
-layout: "learningpathall" # All files under learning paths have this same wrapper
-learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+ - resource:
+ title: OpenJDK Wiki
+ link: https://wiki.openjdk.org/
+ type: documentation
+ - resource:
+ title: Java FlameGraphs
+ link: https://www.brendangregg.com/flamegraphs.html
+ type: website
+
+weight: 1
+layout: "learningpathall"
+learning_path_main_page: "yes"
---
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
index d8467f5081..f077d98dac 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
@@ -22,7 +22,7 @@ wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastru
Unpack the tarball and run the install script:
```bash
-tar -xf FVP_RD_N2_11.24_12_Linux64.tgz
+tar -xf FVP_RD_N2_11.25_23_Linux64.tgz
./FVP_RD_N2.sh --i-agree-to-the-contained-eula --no-interactive
```
diff --git a/data/stats_current_test_info.yml b/data/stats_current_test_info.yml
index 5e020828c9..95124a0643 100644
--- a/data/stats_current_test_info.yml
+++ b/data/stats_current_test_info.yml
@@ -1,5 +1,5 @@
summary:
- content_total: 391
+ content_total: 393
content_with_all_tests_passing: 0
content_with_tests_enabled: 61
sw_categories:
@@ -196,4 +196,3 @@ sw_categories:
zlib:
readable_title: Learn how to build and use Cloudflare zlib on Arm servers
tests_and_status: []
-
diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml
index 12463a4ab0..f396902423 100644
--- a/data/stats_weekly_data.yml
+++ b/data/stats_weekly_data.yml
@@ -7011,4 +7011,123 @@
issues:
avg_close_time_hrs: 0
num_issues: 21
- percent_closed_vs_total: 0.0
\ No newline at end of file
+ percent_closed_vs_total: 0.0
+- a_date: '2025-08-04'
+ content:
+ automotive: 3
+ cross-platform: 34
+ embedded-and-microcontrollers: 43
+ install-guides: 105
+ iot: 6
+ laptops-and-desktops: 38
+ mobile-graphics-and-gaming: 35
+ servers-and-cloud-computing: 129
+ total: 393
+ contributions:
+ external: 98
+ internal: 519
+ github_engagement:
+ num_forks: 30
+ num_prs: 18
+ individual_authors:
+ adnan-alsinan: 2
+ alaaeddine-chakroun: 2
+ albin-bernhardsson: 1
+ albin-bernhardsson,-julie-gaskin: 1
+ alex-su: 1
+ alexandros-lamprineas: 1
+ andrew-choi: 2
+ andrew-kilroy: 1
+ annie-tallund: 4
+ arm: 3
+ arnaud-de-grandmaison: 5
+ aude-vuilliomenet: 1
+ avin-zarlez: 1
+ barbara-corriero: 1
+ basma-el-gaabouri: 1
+ ben-clark: 1
+ bolt-liu: 2
+ brenda-strech: 1
+ bright-edudzi-gershon-kordorwu: 1
+ chaodong-gong: 1
+ chen-zhang: 1
+ chenying-kuo: 1
+ christophe-favergeon: 1
+ christopher-seidl: 7
+ cyril-rohr: 1
+ daniel-gubay: 1
+ daniel-nguyen: 2
+ david-spickett: 2
+ dawid-borycki: 33
+ diego-russo: 2
+ dominica-abena-o.-amanfo: 1
+ elham-harirpoush: 2
+ florent-lebeau: 5
+ "fr\xE9d\xE9ric--lefred--descamps": 2
+ gabriel-peterson: 5
+ gayathri-narayana-yegna-narayanan: 2
+ georgios-mermigkis: 1
+ geremy-cohen: 3
+ gian-marco-iodice: 1
+ graham-woodward: 1
+ han-yin: 1
+ iago-calvo-lista: 1
+ james-whitaker: 1
+ jason-andrews: 105
+ jeff-young: 1
+ joana-cruz: 1
+ joe-stech: 6
+ johanna-skinnider: 2
+ jonathan-davies: 2
+ jose-emilio-munoz-lopez: 1
+ julie-gaskin: 5
+ julien-jayat: 1
+ julien-simon: 1
+ julio-suarez: 6
+ jun-he: 1
+ kasper-mecklenburg: 1
+ kieran-hejmadi: 12
+ koki-mitsunami: 2
+ konstantinos-margaritis: 8
+ kristof-beyls: 1
+ leandro-nunes: 1
+ liliya-wu: 1
+ mark-thurman: 1
+ masoud-koleini: 1
+ mathias-brossard: 1
+ michael-hall: 5
+ na-li: 1
+ nader-zouaoui: 2
+ nikhil-gupta: 1
+ nina-drozd: 1
+ nobel-chowdary-mandepudi: 6
+ odin-shen: 9
+ owen-wu: 2
+ pareena-verma: 46
+ paul-howard: 3
+ peter-harris: 1
+ pranay-bakre: 5
+ preema-merlin-dsouza: 1
+ przemyslaw-wirkus: 2
+ qixiang-xu: 1
+ rani-chowdary-mandepudi: 1
+ rin-dobrescu: 1
+ roberto-lopez-mendez: 2
+ ronan-synnott: 45
+ shuheng-deng: 1
+ thirdai: 1
+ tianyu-li: 2
+ tom-pilar: 1
+ uma-ramalingam: 1
+ varun-chari: 2
+ visualsilicon: 1
+ willen-yang: 1
+ william-liang: 1
+ ying-yu: 2
+ yiyang-fan: 1
+ zach-lasiuk: 2
+ zhengjun-xing: 2
+ issues:
+ avg_close_time_hrs: 0
+ num_issues: 26
+ percent_closed_vs_total: 0.0
diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/head/head.html b/themes/arm-design-system-hugo-theme/layouts/partials/head/head.html
index 3b0f7a0b11..f7e96d3007 100644
--- a/themes/arm-design-system-hugo-theme/layouts/partials/head/head.html
+++ b/themes/arm-design-system-hugo-theme/layouts/partials/head/head.html
@@ -26,6 +26,8 @@
{{ $title }}
+{{ partial "head/jsonld.html" . }}
+
diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
new file mode 100644
index 0000000000..49f858bb4a
--- /dev/null
+++ b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
@@ -0,0 +1,59 @@
+{{/* layouts/partials/head/jsonld.html */}}
+{{/* ---------------------------------------------------------------
+ Render JSON‑LD only for Learning‑Path _index.md main pages
+---------------------------------------------------------------- */}}
+{{- if and .IsSection (eq .Params.learning_path_main_page "yes") -}}
+ {{/* -------- Helper : Build ISO‑8601 duration (PT30M, PT2H, …) */}}
+ {{- $duration := "" -}}
+ {{- with .Params.minutes_to_complete -}}
+ {{- $duration = printf "PT%dM" (int .) -}}
+ {{- end -}}
+ {{/* -------- Learning objectives & prerequisites */}}
+ {{- $objectives := slice -}}
+ {{- with .Params.learning_objectives -}}
+ {{- range . }}{{ $objectives = $objectives | append ( . | plainify ) }}{{ end -}}
+ {{- end -}}
+ {{- $prereqs := slice -}}
+ {{- with .Params.prerequisites -}}
+ {{- range . }}{{ $prereqs = $prereqs | append ( . | plainify ) }}{{ end -}}
+ {{- end -}}
+ {{/* -------- Collect tag‑style params into one keywords list */}}
+ {{- $keywords := slice -}}
+ {{- $tagParams := slice
+ "skilllevels"
+ "cloud_service_providers"
+ "armips"
+ "subjects"
+ "operatingsystems"
+ "tools_software_languages"
+ -}}
+ {{- range $tagParams -}}
+ {{- $v := index $.Params . -}}
+ {{- with $v -}}
+ {{- if reflect.IsSlice $v -}}
+ {{- range $v }}{{ $keywords = $keywords | append ( . | plainify ) }}{{ end -}}
+ {{- else -}}
+ {{- $keywords = $keywords | append ( $v | plainify ) -}}
+ {{- end -}}
+ {{- end -}}
+ {{- end -}}
+ {{/* -------- Assemble JSON‑LD dict */}}
+ {{- $j := dict
+ "@context" "https://schema.org"
+ "@type" "Course"
+ "name" .Title
+ -}}
+ {{- with .Params.who_is_this_for }}{{ $j = merge $j (dict "description" ( . | plainify )) }}{{ end -}}
+ {{- if $duration }}{{ $j = merge $j (dict "timeRequired" $duration) }}{{ end -}}
+ {{- with .Params.skilllevels }}{{ $j = merge $j (dict "educationalLevel" .) }}{{ end -}}
+ {{- with $objectives }}{{ if gt (len .) 0 }}{{ $j = merge $j (dict "teaches" .) }}{{ end }}{{ end -}}
+ {{- with $prereqs }}{{ if gt (len .) 0 }}{{ $j = merge $j (dict "competencyRequired" .) }}{{ end }}{{ end -}}
+ {{- with .Params.author }}{{ $j = merge $j (dict "author" (dict "@type" "Person" "name" .)) }}{{ end -}}
+ {{- if $keywords }}{{ $j = merge $j (dict "keywords" (delimit (uniq $keywords) ", ")) }}{{ end -}}
+ {{- with .Site.Title }}{{ $j = merge $j (dict "provider" (dict "@type" "Organization" "name" .)) }}{{ end -}}
  {{/* -------- Emit the JSON-LD script tag into the page head */}}
  <script type="application/ld+json">{{ $j | jsonify | safeJS }}</script>
+{{- end -}}
\ No newline at end of file
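The duration helper at the top of the partial formats `minutes_to_complete` as an ISO-8601 duration, so a 30-minute Learning Path is emitted with a `timeRequired` of `PT30M`. The same formatting can be reproduced in the shell:

```shell
minutes=30
printf 'PT%dM\n' "$minutes"    # prints PT30M
```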