---
title: Learn about the Arm RD‑V3 Platform
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Introduction to the Arm RD‑V3 Platform

In this section, you will learn about the Arm [Neoverse CSS V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) subsystem and the RD‑V3 [Reference Design Platform Software](https://neoverse-reference-design.docs.arm.com/en/latest/index.html) that implements it. You'll learn how these components enable scalable, server-class system design, and how to simulate and validate the full firmware stack using Fixed Virtual Platforms (FVP), well before hardware is available.

Arm Neoverse is designed to meet the demanding requirements of data center and edge computing, delivering high performance and efficiency. Widely adopted in servers, networking, and edge devices, the Neoverse architecture provides a solid foundation for modern infrastructure.

Here is the Neoverse Reference Design Platform Software Stack:

### Develop and Validate Without Hardware

In traditional development workflows, system validation cannot begin until silicon is available, often introducing risk and delay.

To address this, Arm provides Fixed Virtual Platforms ([FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms)): complete simulation models that emulate Arm SoC behavior on a host machine. The CSS‑V3 platform is available in multiple FVP configurations, allowing developers to select the model that best fits their specific development and validation needs.


Key Capabilities of FVP:
* Compatible with TF‑A, UEFI, GRUB, and Linux kernel images
* Provides boot logs, trace outputs, and interrupt event visibility for debugging
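Because the FVP exposes boot logs, you can script simple milestone checks over a captured UART log. The sketch below uses a stand-in log file and illustrative marker strings; real TF‑A and UEFI messages will differ by platform and build:

```shell
# Create a stand-in UART log; actual RD-V3 output will differ.
cat > uart0.log <<'EOF'
NOTICE:  BL1: Booting BL2
NOTICE:  BL2: Booting BL31
UEFI firmware starting
EOF

# Check that each expected boot milestone appears in the log.
for marker in "BL1: Booting BL2" "BL2: Booting BL31" "UEFI firmware"; do
  if grep -q "$marker" uart0.log; then
    echo "found: $marker"
  else
    echo "missing: $marker"
  fi
done
```

The same pattern works on a log captured from a real simulation run, once you substitute the marker strings your firmware actually prints.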

FVP enables developers to verify boot sequences, debug firmware handoffs, and even simulate RSE (Runtime Security Engine) behaviors, all pre-silicon.

### Comparing different versions of the RD-V3 FVP

To support different use cases and levels of platform complexity, Arm offers three virtual models based on the CSS V3 architecture: RD‑V3, RD-V3-Cfg1, and RD‑V3‑R1. While they share a common foundation, they differ in chip count, system topology, and simulation flexibility.

| Model | Description | Recommended Use Cases |
|-------------|------------------------------------------------------------------|--------------------------------------------------------------------|
| CFG2 | Quad-chip platform with 4×32-core Poseidon-V CPUs connected via CCG links | Designed for advanced multi-chip validation, CML-based coherence, and high-performance platform scaling |
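As a rough illustration, a model choice like this can be encoded in tooling. The helper below is purely hypothetical: the scenario names and the mapping are illustrative, not an official recommendation:

```shell
# Hypothetical helper: map a development scenario to a model name.
pick_model() {
  case "$1" in
    single-chip) echo "RD-V3" ;;    # single-chip foundational exercises
    multi-chip)  echo "RD-V3-R1" ;; # multi-node bring-up
    quad-chip)   echo "CFG2" ;;     # quad-chip platform validation
    *)           echo "unknown" ;;
  esac
}

pick_model single-chip   # prints: RD-V3
```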


In this Learning Path you will use RD‑V3 as the primary platform for foundational exercises, guiding you through the process of building the software stack and simulating it on an FVP to verify the boot sequence.
In later sections, you’ll transition to RD‑V3‑R1 for more advanced system simulation, multi-node bring-up, and firmware coordination across components like the MCP and SCP.

## Firmware Stack Overview and Boot Sequence Coordination

To ensure the platform transitions securely and reliably from power-on to operating system launch, this section introduces the roles and interactions of each firmware component within the RD‑V3 boot process.
You’ll learn how each component contributes to system initialization and how control is systematically handed off across the boot chain.


## How the System Boots Up

In the RD‑V3 platform, each firmware component—such as TF‑A, RSE, SCP, LCP, and UEFI—operates independently but functions together through a well-defined sequence.
Each component is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.

The following diagram from the [Neoverse Reference Design Documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff:
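As a rough sketch, the major participants can be listed in the order they come up. This ordering is simplified and illustrative; consult the boot-flow diagram for the authoritative sequence and any parallelism between components:

```shell
# Simplified, illustrative ordering of the major RD-V3 boot participants.
stages=(RSE SCP LCP "TF-A" UEFI GRUB Linux)
for i in "${!stages[@]}"; do
  echo "$((i + 1)). ${stages[$i]}"
done
```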

GRUB then selects and boots the Linux kernel.
This layered approach supports modular testing, independent debugging, and early-stage simulation—all essential for secure and robust platform bring-up.


In this section, you have:

* Explored the full boot sequence of the RD‑V3 platform, from power-on to Linux login
* Understood the responsibilities of key firmware components such as TF‑A, RSE, SCP, LCP, and UEFI
* Learned how secure boot is enforced and how each component hands off control to the next
* Interpreted boot dependencies using FVP simulation and UART logs

With an understanding of the full boot sequence and firmware responsibilities, you're ready to apply these insights.
In the next section, you'll fetch the RD‑V3 codebase and start building the firmware stack for simulation.
---
title: Build the RD‑V3 Reference Platform Software Stack
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Building the RD‑V3 Reference Platform Software Stack

In this section, you’ll set up your development environment on an Arm-based server and build the firmware stack required to simulate the RD‑V3 platform. This Learning Path was tested on an AWS `m7g.4xlarge` instance running Ubuntu 22.04.

### Step 1: Prepare the Development Environment

For this Learning Path, you will use `pinned-rdv3.xml` and `RD-INFRA-2025.07.03`.

```bash
cd ~
mkdir rdv3
cd rdv3
```
Initialize and sync the source code tree:
```bash
repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1

# Sync the full source code
repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle --retry-fetches=5
```

Once synced, the output should look like:
```output
Syncing: 95% (19/20), done in 2m36.453s
Syncing: 100% (83/83) 2:52 | 1 job | 0:01 platsw/edk2-platforms @ uefi/edk2/edk2-platformsrepo sync has finished successfully.
```
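After syncing, a quick sanity check can confirm the workspace looks usable. The check below is illustrative: it only verifies that the hidden `.repo` directory and the `build-scripts` tree (used in later steps) are present:

```shell
# Illustrative sanity check for a synced repo workspace.
check_workspace() {
  if [ -d "$1/.repo" ] && [ -d "$1/build-scripts" ]; then
    echo "workspace OK"
  else
    echo "workspace incomplete"
  fi
}

check_workspace ~/rdv3
```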
There are two supported methods for building the reference firmware stack: **host-based** and **container-based**.
- The **host-based** build installs all required dependencies directly on your local system and executes the build natively.
- The **container-based** build runs the compilation process inside a pre-configured Docker image, ensuring consistent results and isolation from host environment issues.

In this Learning Path, you will use the **container-based** approach.

The container image is designed to use the source directory from the host (`~/rdv3`) and perform the build process inside the container. Make sure Docker is installed on your Linux machine. You can follow this [installation guide](https://learn.arm.com/install-guides/docker/).
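Before building the image, it can help to confirm Docker is actually available on the host. This is an optional, illustrative pre-flight check:

```shell
# Illustrative pre-flight check before building the container image.
if command -v docker >/dev/null 2>&1; then
  echo "docker found: $(docker --version)"
else
  echo "docker not found - install it using the guide above"
fi
```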

To build the container image:

```bash
./container.sh build
```

The build procedure may take a few minutes, depending on network bandwidth and CPU performance. This Learning Path was tested on an AWS `m7g.4xlarge` instance, and the build took 250 seconds. The output from the build looks like:

```output
Building docker image: rdinfra-builder ...
[+] Building 239.7s (19/19) FINISHED docker:default
=> [internal] load build definition from rd-infra-arm64 0.0s
=> => naming to docker.io/library/rdinfra-builder 0.0s
```


Verify the docker image build completed successfully:

```bash
docker images
```

You should see a docker image called `rdinfra-builder`:

```output
REPOSITORY TAG IMAGE ID CREATED SIZE
rdinfra-builder latest 3a395c5a0b60 4 minutes ago 8.12GB
```
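You can also script this check, for example in CI. The snippet below is an illustrative guard that looks for the `rdinfra-builder` image by repository name:

```shell
# Illustrative check that the builder image exists before running it.
if docker images --format '{{.Repository}}' 2>/dev/null | grep -qx rdinfra-builder; then
  echo "rdinfra-builder image found"
else
  echo "image missing - rerun ./container.sh build"
fi
```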

To quickly test the Docker image you just built, run the following command to enter the container interactively:

```bash
cd ~/rdv3/container-scripts
./container.sh -v ~/rdv3 run
```

This script mounts your source directory (~/rdv3) into the container and opens a shell session at that location.
Inside the container, you should see a prompt like this:

```output
Running docker image: rdinfra-builder ...
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
your-username:hostname:/home/your-username/rdv3$
```
You can explore the container environment if you wish, then type `exit` to return to the host system.


### Step 4: Build Firmware

Building the full firmware stack involves compiling several components and preparing them for simulation. Rather than running each step manually, you can use a single Docker command to automate the build and package phases.

Expand All @@ -193,13 +199,16 @@ docker run --rm \
./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package"
```

The build artifacts will be placed under `~/rdv3/output/rdv3/rdv3/`, where the last `rdv3` in the directory path corresponds to the selected platform name.

After a successful build, inspect the artifacts generated under `~/rdv3/output/rdv3/rdv3/`:

```bash
ls ~/rdv3/output/rdv3/rdv3 -al
```

The directory contents should look like:
```output
total 7092
drwxr-xr-x 2 ubuntu ubuntu 4096 Aug 12 13:15 .
drwxr-xr-x 4 ubuntu ubuntu 4096 Aug 12 13:15 ..
lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm0_0.bin -> ../components/
lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm1_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm1_0.bin
lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 12 13:15 uefi.bin -> ../components/css-common/uefi.bin
```
Here's a reference of what each file refers to:

| Component | Output Files | Description |
|----------------------|----------------------------------------------|-----------------------------|

You can also perform the build manually after entering the container:

Start the Docker container as described above, then run the following in the container shell:
```bash
cd ~/rdv3
./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 build
```
This manual workflow is useful for debugging, partial builds, or making custom modifications to individual components.


You’ve now successfully prepared and built the full RD‑V3 firmware stack. In the next section, you’ll install the appropriate FVP and simulate the full boot sequence, bringing the firmware to life on a virtual platform.
---
weight: 5
layout: learningpathall
---

## Simulating RD‑V3 with an Arm FVP

In the previous section, you built the complete CSS‑V3 firmware stack.
Now, you’ll use an Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon.
This simulation brings up the full stack from BL1 to Linux shell using Buildroot.

### Step 1: Download and Install the FVP Model
For example, the **RD‑INFRA‑2025.07.03** release tag is designed to work with a matching FVP version.

You can refer to the [RD-V3 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) for a full list of release tags, corresponding FVP versions, and their associated release notes, which summarize changes and validated test cases.
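The FVP installer file name encodes its version, so matching a release tag to an FVP can be scripted. The parsing below is illustrative, using the installer file name from the download step in this section:

```shell
# Illustrative: extract the FVP version from the installer tarball name.
tarball="FVP_RD_V3_11.29_35_Linux64_armv8l.tgz"
version=$(echo "$tarball" | sed -E 's/^FVP_RD_V3_([0-9]+\.[0-9]+)_([0-9]+).*/\1.\2/')
echo "FVP version: $version"   # prints: FVP version: 11.29.35
```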

Download the matching FVP binary for your selected release tag using the link provided:

```bash
mkdir -p ~/fvp
tar -xvf FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
./FVP_RD_V3.sh
```

The FVP installation may prompt you with a few questions; choosing the default options is sufficient for this Learning Path. By default, the FVP will be installed in `/home/ubuntu/FVP_RD_V3`.

### Step 2: Set Up Remote Desktop

The RD‑V3 FVP model launches multiple UART consoles—each mapped to a separate terminal window for different subsystems (e.g., Neoverse V3, Cortex‑M55, Cortex‑M7, panel).

If you’re accessing the platform over SSH, these UART consoles can still be displayed, but network latency and graphical forwarding can severely degrade performance.

To interact with the different UARTs more efficiently, it is recommended to install a remote desktop environment using `XRDP`. This provides a smoother user experience when dealing with multiple terminal windows and system interactions.

You will need to install the required packages:


```bash
sudo apt install -y ubuntu-desktop xrdp xfce4 xfce4-goodies pv xterm sshpass socat
sudo systemctl enable --now xrdp
```

To allow remote desktop connections, you need to open port 3389 (RDP) in your AWS EC2 security group:
- Go to the EC2 Dashboard → Security Groups
- Select the security group associated with your instance
- Under the Inbound rules tab, click Edit inbound rules
To enable XRDP remote sessions, you need to switch to Xorg by modifying the GDM configuration:
Open `/etc/gdm3/custom.conf` in a text editor.
Find the line:

```output
#WaylandEnable=false
```

Uncomment it by removing the `#` so it becomes:

```output
WaylandEnable=false
```
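If you prefer a non-interactive edit, a `sed` one-liner can make the same change. The sketch below operates on a demo copy; to apply it for real, run the `sed` command with `sudo` against `/etc/gdm3/custom.conf`:

```shell
# Illustrative non-interactive edit, shown on a demo copy of the setting.
printf '#WaylandEnable=false\n' > custom.conf.demo
sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' custom.conf.demo
cat custom.conf.demo   # prints: WaylandEnable=false
```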
