From 527e66664772f726429534eebf7288798c183d66 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 9 Sep 2025 12:20:41 +0000
Subject: [PATCH 1/7] Content development

---
 .../1_introduction_rdv3.md                    |  87 +++++------
 .../neoverse-rdv3-swstack/2_rdv3_bootseq.md   | 138 ++++++++----------
 .../neoverse-rdv3-swstack/_index.md           |  16 +-
 3 files changed, 104 insertions(+), 137 deletions(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md
index c36adf0b1a..73da1b429d 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md
@@ -1,85 +1,78 @@
 ---
-title: Learn about the Arm RD‑V3 Platform
+title: Learn about the Arm RD-V3 Platform
 weight: 2
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Introduction to the Arm RD‑V3 Platform
+## Introduction to the Arm RD-V3 Platform
 
-In this section, you will learn about the Arm [Neoverse CSS V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) subsystem and the RD‑V3 [Reference Design Platform Software](https://neoverse-reference-design.docs.arm.com/en/latest/index.html) that implements it. You'll learn how these components enable scalable, server-class system design, and how to simulate and validate the full firmware stack using Fixed Virtual Platforms (FVP), well before hardware is available.
+In this section, you will learn about the Arm [Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) subsystem and the RD-V3 [Reference Design Platform Software](https://neoverse-reference-design.docs.arm.com/en/latest/index.html) that implements it.
+You’ll learn how these components enable scalable, server-class system design, and how to simulate and validate the full firmware stack using Fixed Virtual Platforms (FVPs) before hardware is available.
 
-Arm Neoverse is designed to meet the demanding requirements of data center and edge computing, delivering high performance and efficiency. Widely adopted in servers, networking, and edge devices, the Neoverse architecture provides a solid foundation for modern infrastructure.
+Arm Neoverse is designed for the demanding requirements of data-center and edge computing, delivering high performance and efficiency. Widely adopted in servers, networking, and edge devices, the Neoverse architecture provides a solid foundation for modern infrastructure.
 
 Using Arm Fixed Virtual Platforms (FVPs), you can explore system bring-up, boot flow, and firmware customization well before physical silicon becomes available.
 
 This module also introduces the key components involved, from Neoverse V3 cores to secure subsystem controllers, and shows how these elements work together in a fully virtualized system simulation.
 
-### Neoverse CSS-V3 Platform Overview
+### Neoverse CSS-V3 platform overview
 
-[Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) (Compute Subsystem Version 3) is the core subsystem architecture underpinning the Arm RD-V3 platform. It is specifically optimized for high-performance server and data center applications, providing a highly integrated solution combining processing cores, memory management, and interconnect technology.
+[Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) (Compute Subsystem Version 3) is the core subsystem architecture underpinning the Arm RD-V3 platform. It is optimized for high-performance server and data-center applications, providing an integrated solution that combines processing cores, memory management, and interconnect technology.
 
-CSS V3 forms the key building block for specialized computing systems. It reduces design and validation costs for the general-purpose compute subsystem, allowing partners to focus on their specialization and acceleration while reducing risk and accelerating time to deployment.
+CSS-V3 forms the key building block for specialized computing systems. It reduces design and validation costs for the general-purpose compute subsystem, allowing partners to focus on specialization and acceleration while reducing risk and time to deployment.
 
-CSS‑V3 is available in configurable subsystems, supporting up to 64 Neoverse V3 cores per die. It also enables integration of high-bandwidth DDR5/LPDDR5 memory (up to 12 channels), PCIe Gen5 or CXL I/O (up to 64 lanes), and high-speed die-to-die links with support for UCIe 1.1 or custom PHYs. Designs can be scaled down to smaller core-count configurations, such as 32-core SoCs, or expanded through multi-die integration.
+CSS-V3 is available in configurable subsystems, supporting up to 64 Neoverse V3 cores per die. It also enables integration of high-bandwidth DDR5/LPDDR5 memory (up to 12 channels), PCIe Gen5 or CXL I/O (up to 64 lanes), and high-speed die-to-die links with support for UCIe 1.1 or custom PHYs. Designs can scale down to smaller core-count configurations, such as 32-core SoCs, or expand through multi-die integration.
 
 Key features of CSS-V3 include:
-* High-performance CPU clusters: Optimized for server workloads and data throughput.
-
-* Advanced memory management: Efficient handling of data across multiple processing cores.
-
-* Interconnect technology: Enabling high-speed, low-latency communication within the subsystem.
+- High-performance CPU clusters optimized for server workloads and data throughput
+- Advanced memory management for efficient handling across multiple processing cores
+- High-speed, low-latency interconnect within the subsystem
 
-The CSS‑V3 subsystem is fully supported by Arm's Fixed Virtual Platform, enabling pre-silicon testing of these capabilities.
+The CSS-V3 subsystem is fully supported by Arm’s Fixed Virtual Platforms (FVPs), enabling pre-silicon testing of these capabilities.
 
-### RD‑V3 Platform Introduction
+### RD-V3 platform introduction
 
-The RD‑V3 platform is a comprehensive reference design built around Arm’s [Neoverse V3](https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-v3) CPUs, along with [Cortex-M55](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and [Cortex-M7](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m7) microcontrollers. This platform enables efficient high-performance computing and robust platform management:
+The RD-V3 platform is a comprehensive reference design built around Arm’s [Neoverse V3](https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-v3) CPUs, along with [Cortex-M55](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and [Cortex-M7](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m7) microcontrollers. This platform enables efficient high-performance computing and robust platform management:
 
-
-| Component        | Description                                                                                      |
-|------------------|--------------------------------------------------------------------------------------------------|
-| Neoverse V3      | The primary application processor responsible for executing OS and payloads                      |
-| Cortex M7        | Implements the System Control Processor (SCP) for power, clocks, and init                        |
-| Cortex M55       | Hosts the Runtime Security Engine (RSE), providing secure boot and runtime integrity             |
-| Cortex M55 (LCP) | Acts as the Local Control Processor, enabling per-core power and reset management for AP cores   |
-
-
-These subsystems work together in a coordinated architecture, communicating through shared memory regions, control buses, and platform protocols. This enables multi-stage boot processes and robust secure boot implementations.
+| Component         | Description                                                                                        |
+|-------------------|----------------------------------------------------------------------------------------------------|
+| Neoverse V3       | Primary application processor responsible for executing the OS and payloads                        |
+| Cortex-M7         | Implements the System Control Processor (SCP) for power, clocks, and initialization                |
+| Cortex-M55        | Hosts the Runtime Security Engine (RSE), providing secure boot and runtime integrity               |
+| Cortex-M55 (LCP)  | Acts as the Local Control Processor, enabling per-core power and reset management for AP cores     |
 
-Here is the Neoverse Reference Design Platform [Software Stack](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html#sw-stack) for your reference.
+These subsystems work together in a coordinated architecture, communicating through shared memory regions, control buses, and platform protocols. This enables multi-stage boot processes and robust secure-boot implementations.
+
+Here is the Neoverse Reference Design Platform [software stack](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html#sw-stack) for reference.
 
 ![img1 alt-text#center](rdinfra_sw_stack.jpg "Neoverse Reference Design Software Stack")
 
-### Develop and Validate Without Hardware
+## Develop and validate without hardware
 
-In traditional development workflows, system validation cannot begin until silicon is available, often introducing risk and delay.
-
-To address this, Arm provides Fixed Virtual Platforms ([FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms)), complete simulations model that emulates Arm SoC behavior on a host machine. The CSS‑V3 platform is available in multiple FVP configurations, allowing developers to select the model that best fits their specific development and validation needs.
+In traditional workflows, system validation often cannot begin until silicon is available, introducing risk and delay.
+To address this, Arm provides Fixed Virtual Platforms ([FVPs](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms)), a set of simulation models that emulate Arm SoC behavior on a host machine. The CSS-V3 platform is available in multiple FVP configurations, allowing you to select the model that best fits specific development and validation needs.
 
-Key Capabilities of FVP:
-* Multi-core CPU simulation with SMP boot
-* Multiple UART interfaces for serial debug and monitoring
-* Compatible with TF‑A, UEFI, GRUB, and Linux kernel images
-* Provides boot logs, trace outputs, and interrupt event visibility for debugging
+Key capabilities of FVPs:
+
+- Multi-core CPU simulation with SMP boot
+- Multiple UART interfaces for serial debug and monitoring
+- Compatibility with TF-A, UEFI, GRUB, and Linux kernel images
+- Boot logs, trace outputs, and interrupt event visibility for debugging
 
-FVP enables developers to verify boot sequences, debug firmware handoffs, and even simulate RSE (Runtime Security Engine) behaviors, all pre-silicon.
+FVPs enable developers to verify boot sequences, debug firmware handoffs, and even simulate RSE (Runtime Security Engine) behaviors, all pre-silicon.
 
-### Comparing different version of RD-V3 FVP
+### Comparing RD-V3 FVP variants
 
-To support different use cases and levels of platform complexity, Arm offers three virtual models based on the CSS V3 architecture: RD‑V3, RD-V3-Cfg1, and RD‑V3‑R1. While they share a common foundation, they differ in chip count, system topology, and simulation flexibility.
+To support different use cases and levels of platform complexity, Arm offers several virtual models based on the CSS-V3 architecture: RD-V3, RD-V3-R1, RD-V3-Cfg1 (CFG1), and RD-V3-Cfg2 (CFG2). While they share a common foundation, they differ in chip count, system topology, and simulation flexibility.
 
-| Model       | Description                                                        | Recommended Use Cases                                              |
-|-------------|------------------------------------------------------------------|--------------------------------------------------------------------|
-| RD‑V3       | Standard single-die platform with full processor and security blocks | Ideal for newcomers, firmware bring-up, and basic validation     |
-| RD‑V3‑R1    | Dual-die platform simulating chiplet-based architecture            | Suitable for multi-node, interconnect, and advanced boot tests     |
-| CFG1        | Lightweight model with reduced control complexity for fast startup | Best for CI pipelines, unit testing, and quick validations         |
-| CFG2        | Quad-chip platform with 4×32-core Poseidon-V CPUs connected via CCG links | Designed for advanced multi-chip validation, CML-based coherence, and high-performance platform scaling |
+| Model             | Description                                                                 | Recommended use cases                                                             |
+|-------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| RD-V3             | Standard single-die platform with full processor and security blocks        | Ideal for newcomers, firmware bring-up, and basic validation                      |
+| RD-V3-R1          | Dual-die platform simulating chiplet-based architecture                     | Suitable for multi-node, interconnect, and advanced boot tests                    |
+| RD-V3-Cfg1 (CFG1) | Lightweight model with reduced control complexity for fast startup          | Best for CI pipelines, unit testing, and quick validations                        |
+| RD-V3-Cfg2 (CFG2) | Quad-chip platform with 4×32-core Poseidon-V CPUs connected via CCG links   | Designed for advanced multi-chip validation, CMN-based coherence, and scaling     |
 
-In this Learning Path you will use RD‑V3 as the primary platform for foundational exercises, guiding you through the process of building the software stack and simulating it on an FVP to verify the boot sequence.
-In later modules, you’ll transition to RD‑V3‑R1 to more advanced system simulation, multi-node bring-up, and firmware coordination across components like MCP and SCP.
+In this Learning Path you will use RD-V3 as the primary platform for foundational exercises, guiding you through building the software stack and simulating it on an FVP to verify the boot sequence. In later modules, you’ll transition to RD-V3-R1 for more advanced system simulation, multi-node bring-up, and firmware coordination across components like LCP and SCP.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
index eff3635e51..78dc61881a 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
@@ -1,53 +1,49 @@
 ---
-title: Understanding the CSS V3 Boot Flow and Firmware Stack
+title: Understanding the CSS-V3 Boot Flow and Firmware Stack
 weight: 3
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Firmware Stack Overview and Boot Sequence Coordination
+## Firmware stack overview and boot sequence coordination
 
-To ensure the platform transitions securely and reliably from power-on to operating system launch, this section introduces the roles and interactions of each firmware component within the RD‑V3 boot process.
-You’ll learn how each component contributes to system initialization and how control is systematically handed off across the boot chain.
+To ensure the platform transitions securely and reliably from power-on to operating system launch, this section introduces the roles and interactions of each firmware component within the RD-V3 boot process. You’ll learn how each component contributes to system initialization and how control is systematically handed off across the boot chain.
 
-## How the System Boots Up
+## How the system boots up
 
-In the RD‑V3 platform, each firmware component—such as TF‑A, RSE, SCP, LCP, and UEFI—operates independently but functions together through a well-defined sequence.
-Each component is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.
+In the RD-V3 platform, each firmware component such as TF-A, RSE, SCP, MCP, LCP, and UEFI - operates independently but participates in a well-defined sequence. Each is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.
 
-The following diagram from the [Neoverse Reference Design Documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff:
+The following diagram from the [Neoverse Reference Design documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff:
 
 ![img1 alt-text#center](rdf_single_chip.png "Boot Flow for RD-V3 Single Chip")
 
-### Stage 1. Security Validation Starts First (RSE)
+## Stage 1: Security validation starts (RSE)
 
-The first firmware module triggered after BL2 is the Runtime Security Engine (RSE), executing on Cortex‑M55. RSE authenticates all critical firmware components—including SCP, UEFI, and kernel images—using secure boot mechanisms. It performs cryptographic measurements and builds a Root of Trust before allowing any other processors to start.
+After BL2, the Runtime Security Engine (RSE, Cortex-M55) authenticates critical firmware components—including SCP, UEFI, and kernel images—using secure-boot mechanisms. It performs cryptographic measurements and establishes a Root of Trust (RoT) before allowing other processors to start.
 
 ***RSE acts as the platform’s security gatekeeper.***
 
-### Stage 2. Early Hardware Initialization (SCP / MCP)
+## Stage 2: Early hardware initialization (SCP / MCP)
 
-Once RSE completes verification, the System Control Processor (SCP) and Management Control Processor (MCP) are released from reset.
+Once RSE completes verification, the System Control Processor (SCP, Cortex-M7) and the Management Control Processor (MCP, where present) are released from reset.
 
-These controllers perform essential platform bring-up:
+They perform essential bring-up:
 * Initialize clocks, reset lines, and power domains
 * Prepare DRAM and interconnect
-* Enable the application cores and signal readiness to TF‑A
+* Enable the application processor (AP) cores and signal readiness to TF-A
 
 ***SCP/MCP are the ground crew bringing hardware systems online.***
 
-### Stage 3. Secure Execution Setup (TF‑A)
+## Stage 3: Secure execution setup (TF-A)
 
-Once the AP is released, it begins executing Trusted Firmware‑A (TF‑A) at EL3, starting from the reset vector address programmed during boot image layout.
-TF‑A configures the secure world, sets up exception levels, and prepares for handoff to UEFI.
+When the AP is released, it begins executing Trusted Firmware-A (TF-A) at EL3 from the reset vector address programmed during boot-image layout. TF-A configures the secure world, sets up exception levels, and prepares for handoff to UEFI.
 
-***TF‑A is the ignition controller, launching the next stages securely.***
+***TF-A is the ignition controller, launching the next stages securely.***
 
-### Stage 4. Firmware and Bootloader (EDK2 / GRUB)
+## Stage 4: Firmware and Bootloader (EDK2 / GRUB)
 
-TF‑A hands off control to UEFI firmware (EDK2), which performs device discovery and launches GRUB.
+TF-A hands off control to UEFI firmware (EDK 2), which performs device discovery and launches GRUB.
 
 Responsibilities:
 * Detect and initialize memory, PCIe, and boot devices
@@ -56,28 +52,27 @@ Responsibilities:
 
 ***EDK2 and GRUB are like the first- and second-stage rockets launching the payload.***
 
-### Stage 5. Linux Kernel Boot
+## Stage 5: Linux kernel boot
 
 GRUB loads the Linux kernel and passes full control to the OS.
 
 Responsibilities:
 * Initialize device drivers and kernel subsystems
 * Mount the root filesystem
-* Start user-space processes (e.g., BusyBox)
+* Start user-space processes (for example, BusyBox)
 
-***The Linux kernel is the spacecraft—it takes over and begins its mission.***
+***The Linux kernel is the spacecraft - it takes over and begins its mission.***
 
-## Firmware Module Responsibilities in Detail
-
-Now that we’ve examined the high-level boot stages, let’s break down each firmware module’s role in more detail.
+## Firmware module responsibilities in detail
+Now that you’ve examined the high-level boot stages, you can now break down each firmware module’s role in more detail.
 
-Each stage of the boot chain is backed by a dedicated component—either a secure bootloader, platform controller, or operating system manager—working together to ensure a reliable system bring-up.
+Each stage of the boot chain is backed by a dedicated component - secure bootloader, platform controller, or OS manager - working together to ensure reliable system bring-up.
 
-### RSE: Runtime Security Engine (Cortex‑M55) (Stage 1: Security Validation)
+## RSE: Runtime Security Engine (Cortex-M55) — (Stage 1: Security Validation)
 
 RSE firmware runs on the Cortex‑M55 and plays a critical role in platform attestation and integrity enforcement.
 * Authenticates BL2, SCP, and UEFI firmware images (Secure Boot)
-* Records boot-time measurements (e.g., PCRs, ROT)
+* Records boot-time measurements (for example, PCRs, ROT)
 * Releases boot authorization only after successful validation
 
 RSE acts as the second layer of the chain of trust, maintaining a monitored and secure environment throughout early boot.
@@ -85,76 +80,59 @@ RSE acts as the second layer of the chain of trust, maintaining a monitored and
 
 ### SCP: System Control Processor (Cortex‑M7) (Stage 2: Early Hardware Bring-up)
 
-SCP firmware runs on the Cortex‑M7 core and performs early hardware initialization and power domain control.
 * Initializes clocks, reset controllers, and system interconnect
-* Manages DRAM setup and enables power for the application processor
-* Coordinates boot readiness with RSE via MHU (Message Handling Unit)
-
-SCP is central to bring-up operations and ensures the AP starts in a stable hardware environment.
+* Manages DRAM setup and enables power for the AP
+* Coordinates boot readiness with RSE via the Message Handling Unit (MHU)
 
-### TF-A: Trusted Firmware-A (BL1 / BL2) (Stage 3: Secure Execution Setup)
+### TF-A: Trusted Firmware-A (BL1 / BL2) — Stage 3
 
-TF‑A is the entry point of the boot chain and is responsible for establishing the system’s root of trust.
-* BL1 (Boot Loader Stage 1): Executes from ROM, initializing minimal hardware such as clocks and serial interfaces, and loads BL2.
-* BL2 (Boot Loader Stage 2): Validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages.
+* **BL1** executes from ROM, initializes minimal hardware (clocks, UART), and loads BL2
+* **BL2** validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages
 
-TF‑A ensures all downstream components are authenticated and loaded from trusted sources, laying the foundation for a secure boot.
+TF-A establishes the system’s chain of trust and ensures downstream components are authenticated and loaded from trusted sources.
 
-### UEFI / GRUB / Linux Kernel (Stage 4–5: Bootloader and OS Handoff)
+### UEFI / GRUB / Linux kernel — Stages 4–5
 
-After SCP powers on the application processor, control passes to the main bootloader and operating system:
-* UEFI (EDK2): Provides firmware abstraction, hardware discovery, and ACPI table generation
-* GRUB: Selects and loads the Linux kernel image
-* Linux Kernel: Initializes the OS, drivers, and launches the userland (e.g., BusyBox)
+* **UEFI (EDK II):** firmware abstraction, hardware discovery, ACPI table generation
+* **GRUB:** selects and loads the Linux kernel image
+* **Linux kernel:** initializes the OS, drivers, and launches userland (for example, BusyBox)
 
-On the FVP, you can observe this process via UART logs, helping validate each stage’s success.
+On the FVP you can observe this process via UART logs to validate each stage.
 
-### LCP: Low Power Controller (Optional Component)
+### LCP: Low-Power Controller (optional)
 
-If present in the configuration, LCP handles platform power management at a finer granularity:
+If present, the LCP provides fine-grained platform power management:
 * Implements sleep/wake transitions
 * Controls per-core power gating
-* Manages transitions to ACPI power states (e.g., S3, S5)
-
-LCP support depends on the FVP model and may be omitted in simplified virtual setups.
+* Manages transitions to ACPI power states (for example, S3, S5)
+LCP support depends on the FVP model and may be omitted in simplified setups.
 
-### Coordination and Handoff Logic
+## Coordination and handoff logic
 
-The RD‑V3 boot sequence follows a multi-stage, dependency-driven handshake model, where each firmware module validates, powers, or authorizes the next.
+The RD-V3 boot sequence follows a multi-stage, dependency-driven handshake model, where each firmware module validates, powers, or authorizes the next.
 
-| Stage | Dependency Chain     | Description                                                              |
-|-------|----------------------|-------------------------------------------------------------------------|
-| 1     | RSE ← BL2            | RSE is loaded and triggered by BL2 to begin security validation          |
-| 2     | SCP ← BL2 + RSE      | SCP initialization requires both BL2 and authorization from RSE          |
-| 3     | AP ← SCP + RSE       | The application processor starts only after SCP sets power and RSE permits |
-| 4     | UEFI → GRUB → Linux  | UEFI launches GRUB, which loads the kernel and enters the OS             |
+| Stage | Dependency chain     | Description                                                                    |
+|------:|----------------------|--------------------------------------------------------------------------------|
+| 1     | RSE ← BL2            | RSE is loaded and triggered by BL2 to begin security validation                |
+| 2     | SCP ← BL2 + RSE      | SCP initialization requires BL2 and authorization from RSE                     |
+| 3     | AP ← SCP + RSE       | The AP starts only after SCP sets power and RSE permits                        |
+| 4     | UEFI → GRUB → Linux  | UEFI launches GRUB, which loads the kernel and enters the OS                   |
 
-This handshake model ensures that no firmware stage proceeds unless its dependencies have securely initialized and authorized the next step.
+This handshake ensures no stage proceeds unless its dependencies have securely initialized and authorized the next step.
 
 {{% notice Note %}}
-In the table above, arrows (←) represent **dependency relationships**—the component on the left **depends on** the component(s) on the right to be triggered or authorized.
-For example, `RSE ← BL2` means that RSE is loaded and triggered by BL2;
-`AP ← SCP + RSE` means the application processor can only start after SCP has initialized the hardware and RSE has granted secure boot authorization.
-These arrows do not represent execution order but indicate **which component must be ready for another to begin**.
+In the table, arrows (←) indicate **dependency**—the component on the left depends on the component(s) on the right to be triggered or authorized.
+For example, `RSE ← BL2` means BL2 loads/triggers RSE; `AP ← SCP + RSE` means the AP can start only after SCP has initialized hardware and RSE has granted authorization.
+The right-facing arrows in `UEFI → GRUB → Linux` indicate a **direct execution path**—each stage passes control directly to the next.
 {{% /notice %}}
 
-{{% notice Note %}}
-Once the firmware stack reaches UEFI, it performs hardware discovery and launches GRUB.
-GRUB then selects and boots the Linux kernel. Unlike the previous dependency arrows (←), this is a **direct execution path**—each stage passes control directly to the next.
-{{% /notice %}}
-
-This layered approach supports modular testing, independent debugging, and early-stage simulation—all essential for secure and robust platform bring-up.
-
-In this section, you have:
+This layered approach supports modular testing, independent debugging, and early simulation—essential for secure and robust platform bring-up.
 
-* Explored the full boot sequence of the RD‑V3 platform, from power-on to Linux login
-* Understood the responsibilities of key firmware components such as TF‑A, RSE, SCP, LCP, and UEFI
-* Learned how secure boot is enforced and how each module hands off control to the next
+**In this section, you have:**
+* Explored the full boot sequence of the RD-V3 platform, from power-on to Linux login
+* Understood the responsibilities of TF-A, RSE, SCP, MCP, LCP, and UEFI
+* Learned how secure boot is enforced and how each module hands off control
 * Interpreted boot dependencies using FVP simulation and UART logs
 
-With an understanding of full boot sequence and firmware responsibilities, you're ready to apply these insights.
-In the next section, you'll fetch the RD‑V3 codebase and start building the firmware stack for simulation.
+With an understanding of the full boot sequence and firmware responsibilities, you’re ready to apply these insights.
+In the next section, you’ll fetch the RD-V3 codebase and start building the firmware stack for simulation.
 
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
index 473bd2f67e..13dd130065 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
@@ -1,25 +1,21 @@
 ---
 title: CSS-V3 Pre-Silicon Software Development Using Neoverse Servers
 
-draft: true
-cascade:
-  draft: true
-
 minutes_to_complete: 90
 
-who_is_this_for: This Learning Path is for firmware developers, system architects, and silicon validation engineers building Arm Neoverse CSS platforms. It focuses on pre-silicon development using Fixed Virtual Platforms (FVPs) for the CSS‑V3 reference design. You’ll learn how to build, customize, and validate firmware on the RD‑V3 platform using Fixed Virtual Platforms (FVPs) before hardware is available.
+who_is_this_for: This Learning Path is for firmware developers, system architects, and silicon validation engineers building Arm Neoverse CSS platforms. It focuses on pre-silicon development for the CSS-V3 reference design using Fixed Virtual Platforms (FVPs). You’ll build, customize, and validate firmware on the RD-V3 platform before hardware is available.
learning_objectives: - - Understand the architecture of Arm Neoverse CSS‑V3 as the foundation for scalable server-class platforms - - Build and boot the RD‑V3 firmware stack using TF‑A, SCP, RSE, and UEFI + - Understand the architecture of Arm Neoverse CSS-V3 as the foundation for scalable server-class platforms + - Build and boot the RD-V3 firmware stack using TF-A, SCP, RSE, and UEFI - Simulate multi-core, multi-chip systems with Arm FVP models and interpret boot logs - - Modify platform control firmware to test custom logic and validate it via pre-silicon simulation + - Modify platform control firmware to test custom logic and validate via pre-silicon simulation prerequisites: - - Access to an Arm Neoverse-based Linux machine (cloud or local), with at least 80 GB of storage + - Access to an Arm Neoverse-based Linux machine (cloud or local) with at least 80 GB of free storage - Familiarity with Linux command-line tools and basic scripting - Understanding of firmware boot stages and SoC-level architecture - - Docker installed, or GitHub Codespaces-compatible development environment + - Docker installed, or a GitHub Codespaces-compatible development environment author: - Odin Shen From 415198b2c387db7688a5df01b3d1f689c9f71cc4 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 9 Sep 2025 14:21:56 +0000 Subject: [PATCH 2/7] Continuing content dev --- .../neoverse-rdv3-swstack/2_rdv3_bootseq.md | 22 +++--- .../neoverse-rdv3-swstack/3_rdv3_sw_build.md | 75 ++++++++----------- 2 files changed, 44 insertions(+), 53 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md index 78dc61881a..7ad0d8b4a0 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md +++ 
b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
@@ -12,7 +12,7 @@ To ensure the platform transitions securely and reliably from power-on to operat
## How the system boots up
-In the RD-V3 platform, each firmware component such as TF-A, RSE, SCP, MCP, LCP, and UEFI - operates independently but participates in a well-defined sequence. Each is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.
+In the RD-V3 platform, each firmware component, such as TF-A, RSE, SCP, MCP, LCP, and UEFI, operates independently but participates in a well-defined sequence. Each is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.
The following diagram from the [Neoverse Reference Design documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff:
@@ -20,7 +20,7 @@ The following diagram from the [Neoverse Reference Design documentation](https:/
## Stage 1: Security validation starts (RSE)
-After BL2, the Runtime Security Engine (RSE, Cortex-M55) authenticates critical firmware components—including SCP, UEFI, and kernel images—using secure-boot mechanisms. It performs cryptographic measurements and establishes a Root of Trust (RoT) before allowing other processors to start.
+After BL2, the Runtime Security Engine (RSE, Cortex-M55) authenticates critical firmware components, including SCP, UEFI, and kernel images, using secure-boot mechanisms. It performs cryptographic measurements and establishes a Root of Trust (RoT) before allowing other processors to start.
***RSE acts as the platform’s security gatekeeper.*** @@ -41,7 +41,7 @@ When the AP is released, it begins executing Trusted Firmware-A (TF-A) at EL3 fr ***TF-A is the ignition controller, launching the next stages securely.*** -## Stage 4: Firmware and Bootloader (EDK2 / GRUB) +## Stage 4: Firmware and Bootloader (EDK2/GRUB) TF-A hands off control to UEFI firmware (EDK 2), which performs device discovery and launches GRUB. @@ -63,10 +63,10 @@ Responsibilities: ***The Linux kernel is the spacecraft - it takes over and begins its mission.*** -## Firmware module responsibilities in detail -Now that you’ve examined the high-level boot stages, you can now break down each firmware module’s role in more detail. +## In detail: firmware module responsibilities +Now that you’ve examined the high-level boot stages, you can now examine each firmware module’s role in more detail. -Each stage of the boot chain is backed by a dedicated component - secure bootloader, platform controller, or OS manager - working together to ensure reliable system bring-up. +Each stage of the boot chain is backed by a dedicated component, such as secure bootloader, platform controller, or OS manager, and they work together to ensure reliable system bring-up. ## RSE: Runtime Security Engine (Cortex-M55) — (Stage 1: Security Validation) @@ -84,20 +84,20 @@ RSE acts as the second layer of the chain of trust, maintaining a monitored and * Manages DRAM setup and enables power for the AP * Coordinates boot readiness with RSE via the Message Handling Unit (MHU) -### TF-A: Trusted Firmware-A (BL1 / BL2) — Stage 3 +### TF-A: Trusted Firmware-A (BL1/BL2) - Stage 3 * **BL1** executes from ROM, initializes minimal hardware (clocks, UART), and loads BL2 * **BL2** validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages TF-A establishes the system’s chain of trust and ensures downstream components are authenticated and loaded from trusted sources. 
-### UEFI / GRUB / Linux kernel — Stages 4–5 +### UEFI, GRUB, and the Linux kernel — Stages 4–5 * **UEFI (EDK II):** firmware abstraction, hardware discovery, ACPI table generation * **GRUB:** selects and loads the Linux kernel image * **Linux kernel:** initializes the OS, drivers, and launches userland (for example, BusyBox) -On the FVP you can observe this process via UART logs to validate each stage. +On the FVP you can see this process through UART logs to validate each stage. ### LCP: Low-Power Controller (optional) @@ -106,7 +106,7 @@ If present, the LCP provides fine-grained platform power management: * Controls per-core power gating * Manages transitions to ACPI power states (for example, S3, S5) -LCP support depends on the FVP model and may be omitted in simplified setups. +LCP support depends on the FVP model and can be omitted in simplified setups. ## Coordination and handoff logic @@ -122,7 +122,7 @@ The RD-V3 boot sequence follows a multi-stage, dependency-driven handshake model This handshake ensures no stage proceeds unless its dependencies have securely initialized and authorized the next step. {{% notice Note %}} -In the table, arrows (←) indicate **dependency**—the component on the left depends on the component(s) on the right to be triggered or authorized. +In the table, arrows (←) indicate **dependency** - the component on the left depends on the component(s) on the right to be triggered or authorized. For example, `RSE ← BL2` means BL2 loads/triggers RSE; `AP ← SCP + RSE` means the AP can start only after SCP has initialized hardware and RSE has granted authorization. The right-facing arrows in `UEFI → GRUB → Linux` indicate a **direct execution path**—each stage passes control directly to the next. 
{{% /notice %}}
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
index 75a0c1de08..be5d9f4feb 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
@@ -1,36 +1,33 @@
---
-title: Build the RD‑V3 Reference Platform Software Stack
+title: Build the RD-V3 Reference Platform Software Stack
weight: 4 ### FIXED, DO NOT MODIFY
layout: learningpathall
---
-## Building the RD‑V3 Reference Platform Software Stack
-
-In this module, you’ll set up your development environment on any Arm-based server and build the firmware stack required to simulate the RD‑V3 platform. This Learning Path was tested on an AWS `m7g.4xlarge` Arm-based instance running Ubuntu 22.04
+## Building the RD-V3 Reference Platform Software Stack
+In this module, you’ll set up your development environment on any Arm-based server and build the firmware stack required to simulate the RD-V3 platform. This Learning Path was tested on an AWS `m7g.4xlarge` Arm-based instance running Ubuntu 22.04.
### Step 1: Prepare the Development Environment
-First, ensure your system is up-to-date and install the required tools and libraries:
+First, ensure your system is up to date and install the required tools and libraries:
```bash
sudo apt update
-sudo apt install curl git
+sudo apt install -y curl git
```
-
-Configure git as follows.
-
-```bash
+Configure git as follows:
+```bash
git config --global user.name ""
git config --global user.email ""
```
-### Step 2: Fetch the Source Code
+### Step 2: Fetch the source code
-The RD‑V3 platform firmware stack consists of many independent components—such as TF‑A, SCP, RSE, UEFI, Linux kernel, and Buildroot. Each component is maintained in a separate Git repository.
To manage and synchronize these repositories efficiently, we use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams. +The RD‑V3 platform firmware stack consists of many independent components, such as TF‑A, SCP, RSE, UEFI, Linux kernel, and Buildroot. Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams. -If repo is not installed, you can download it manually: +If `repo` is not installed, you can download it and add it to your `PATH`: ```bash mkdir -p ~/.bin @@ -39,11 +36,9 @@ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo chmod a+rx ~/.bin/repo ``` -Once ready, create a workspace and initialize the repo manifest: +Once ready, create a workspace and initialize the repo manifest. This Learning Path uses a pinned manifest to ensure reproducibility across different environments. This locks all component repositories to known-good commits that are validated and aligned with a specific FVP version. -We use a pinned manifest to ensure reproducibility across different environments. This locks all component repositories to known-good commits that are validated and aligned with a specific FVP version. - -For this session, we will use `pinned-rdv3.xml` and `RD-INFRA-2025.07.03`. +For this session, use `pinned-rdv3.xml` and `RD-INFRA-2025.07.03`: ```bash cd ~ @@ -63,11 +58,10 @@ Syncing: 100% (83/83) 2:52 | 1 job | 0:01 platsw/edk2-platforms @ uefi/edk2/edk2 ``` {{% notice Note %}} -As of the time of writing, the latest official release tag is RD-INFRA-2025.07.03. -Please note that newer tags may be available as future platform updates are published. +As of the time of writing, the latest release tag is `RD-INFRA-2025.07.03`. Newer tags might be available in future updates. 
{{% /notice %}} -This manifest will fetch all required sources including: +This manifest fetches the required sources,including: * TF‑A * SCP / RSE firmware * EDK2 (UEFI) @@ -84,7 +78,7 @@ There are two supported methods for building the reference firmware stack: **hos In this Learning Path, you will use the **container-based** approach. -The container image is designed to use the source directory from the host (`~/rdv3`) and perform the build process inside the container. Make sure Docker is installed on your Linux machine. You can follow this [installation guide](https://learn.arm.com/install-guides/docker/). +The container image uses your host source directory (~/rdv3) and performs the build inside Docker. Ensure Docker is installed on your machine. You can follow this [installation guide](https://learn.arm.com/install-guides/docker/). After Docker is installed, you’re ready to build the container image. @@ -104,7 +98,9 @@ To build the container image: ./container.sh build ``` -The build procedure may take a few minutes, depending on network bandwidth and CPU performance. This Learning Path was tested on an AWS `m7g.4xlarge` instance, and the build took 250 seconds. The output from the build looks like: +The build procedure can take a few minutes, depending on network bandwidth and CPU performance. This Learning Path was tested on an AWS `m7g.4xlarge` instance, and the build took 250 seconds. + +Expected output: ```output Building docker image: rdinfra-builder ... @@ -142,7 +138,7 @@ Building docker image: rdinfra-builder ... 
=> => naming to docker.io/library/rdinfra-builder 0.0s ``` -Verify the docker image build completed successfully: +Verify the image: ```bash docker images @@ -155,14 +151,13 @@ REPOSITORY TAG IMAGE ID CREATED SIZE rdinfra-builder latest 3a395c5a0b60 4 minutes ago 8.12GB ``` -To quickly test the Docker image you just built, run the following command to enter the docker container interactively: +Quick interactive test: ```bash ./container.sh -v ~/rdv3 run ``` -This script mounts your source directory (~/rdv3) into the container and opens a shell session at that location. -Inside the container, you should see a prompt like this: +This script mounts your source directory (~/rdv3) into the container and opens a shell session at that location. Inside the container, you should see a prompt like this: ```output Running docker image: rdinfra-builder ... @@ -172,18 +167,17 @@ See "man sudo_root" for details. your-username:hostname:/home/your-username/rdv3$ ``` -You can explore the container environment if you wish, then type exit to return to the host system. - +You can explore the container environment if you wish, then type `exit` to return to the host. -### Step 4: Build Firmware -Building the full firmware stack involves compiling several components and preparing them for simulation. Rather than running each step manually, you can use a single Docker command to automate the build and package phases. +### Step 4: Build firmware -- **build**: This phase compiles all individual components of the firmware stack, including TF‑A, SCP, RSE, UEFI, Linux kernel, and rootfs. +Building the full firmware stack involves compiling several components and packaging them for simulation. The following command runs build and then package inside the Docker image: -- **package**: This phase consolidates the build outputs into simulation-ready formats and organizes boot artifacts for FVP. 
+- **build** compiles all individual components of the firmware stack, including TF‑A, SCP, RSE, UEFI, Linux kernel, and rootfs
+- **package** consolidates outputs into simulation-ready artifacts for FVP
-Ensure you’re back in the host OS, then run the following command:
+Ensure you’re back in the host OS, then run:
```bash
cd ~/rdv3
@@ -201,13 +195,13 @@ docker run --rm \
The build artifacts will be placed under `~/rdv3/output/rdv3/rdv3/`, where the last `rdv3` in the directory path corresponds to the selected platform name.
-After a successful build, inspect the artifacts generated under `~/rdv3/output/rdv3/rdv3/`
+Inspect the artifacts:
```bash
ls ~/rdv3/output/rdv3/rdv3 -al
```
-The directory contents should look like:
+Expected output:
```output
total 7092
drwxr-xr-x 2 ubuntu ubuntu 4096 Aug 12 13:15 .
@@ -229,7 +223,7 @@ lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm0_0.bin -> ../components/
lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm1_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm1_0.bin
lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 12 13:15 uefi.bin -> ../components/css-common/uefi.bin
```
-Here's a reference of what each file refers to:
+Reference mapping:
| Component | Output Files | Description |
|----------------------|----------------------------------------------|-----------------------------|
@@ -240,9 +234,9 @@ Here's a reference of what each file refers to:
| Initrd | `rootfs.cpio.gz` | Minimal filesystem |
-### Optional: Run the Build Manually from Inside the Container
+### Optional: run the build manually from inside the container
-You can also perform the build manually after entering the container:
+You can also build from within an interactive container session (useful for debugging or partial builds):
Start your docker container.
In your running container shell: ```bash @@ -251,7 +245,4 @@ cd ~/rdv3 ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package ``` -This manual workflow is useful for debugging, partial builds, or making custom modifications to individual components. - - -You’ve now successfully prepared and built the full RD‑V3 firmware stack. In the next section, you’ll install the appropriate FVP and simulate the full boot sequence, bringing the firmware to life on a virtual platform. +You’ve now prepared and built the full RD-V3 firmware stack. In the next section, you’ll install the appropriate FVP and simulate the full boot sequence, bringing the firmware to life on a virtual platform. From 74444927677eb9e3f3072a53119b72cccb3b5e3f Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 9 Sep 2025 20:49:18 +0000 Subject: [PATCH 3/7] Further updates --- .../1_introduction_rdv3.md | 8 +- .../neoverse-rdv3-swstack/2_rdv3_bootseq.md | 16 ++-- .../neoverse-rdv3-swstack/3_rdv3_sw_build.md | 14 ++-- .../neoverse-rdv3-swstack/4_rdv3_on_fvp.md | 80 +++++++++---------- .../neoverse-rdv3-swstack/5_rdv3_modify.md | 10 +-- 5 files changed, 63 insertions(+), 65 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md index 73da1b429d..f4d47bd8d1 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md @@ -14,9 +14,9 @@ Arm Neoverse is designed for the demanding requirements of data-center and edge Using Arm Fixed Virtual Platforms (FVPs), you can explore system bring-up, boot flow, and firmware customization well before physical silicon becomes available. 
-This module also introduces the key components involved, from Neoverse V3 cores to secure subsystem controllers, and shows how these elements work together in a fully virtualized system simulation. +This Learning Path also introduces the key components involved, from Neoverse V3 cores to secure subsystem controllers, and shows how these elements work together in a fully virtualized system simulation. -### Neoverse CSS-V3 platform overview +## Neoverse CSS-V3 platform overview [Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) (Compute Subsystem Version 3) is the core subsystem architecture underpinning the Arm RD-V3 platform. It is optimized for high-performance server and data-center applications, providing an integrated solution that combines processing cores, memory management, and interconnect technology. @@ -32,7 +32,7 @@ Key features of CSS-V3 include: The CSS-V3 subsystem is fully supported by Arm’s Fixed Virtual Platforms (FVPs), enabling pre-silicon testing of these capabilities. -### RD-V3 platform introduction +## RD-V3 platform introduction The RD-V3 platform is a comprehensive reference design built around Arm’s [Neoverse V3](https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-v3) CPUs, along with [Cortex-M55](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and [Cortex-M7](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m7) microcontrollers. This platform enables efficient high-performance computing and robust platform management: @@ -64,7 +64,7 @@ Key capabilities of FVPs: FVPs enable developers to verify boot sequences, debug firmware handoffs, and even simulate RSE (Runtime Security Engine) behaviors, all pre-silicon. -### Comparing RD-V3 FVP variants +## Comparing RD-V3 FVP variants To support different use cases and levels of platform complexity, Arm offers several virtual models based on the CSS-V3 architecture: RD-V3, RD-V3-R1, RD-V3-Cfg1 (CFG1), and RD-V3-Cfg2 (CFG2). 
While they share a common foundation, they differ in chip count, system topology, and simulation flexibility. diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md index 7ad0d8b4a0..ccde5e9e5c 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md @@ -1,5 +1,5 @@ --- -title: Understanding the CSS-V3 Boot Flow and Firmware Stack +title: Understand the CSS-V3 boot flow and firmware stack weight: 3 ### FIXED, DO NOT MODIFY @@ -41,19 +41,19 @@ When the AP is released, it begins executing Trusted Firmware-A (TF-A) at EL3 fr ***TF-A is the ignition controller, launching the next stages securely.*** -## Stage 4: Firmware and Bootloader (EDK2/GRUB) +## Stage 4: Firmware and Bootloader (EDK II/GRUB) -TF-A hands off control to UEFI firmware (EDK 2), which performs device discovery and launches GRUB. +TF-A hands off control to UEFI firmware (EDK II), which performs device discovery and launches GRUB. Responsibilities: * Detect and initialize memory, PCIe, and boot devices * Generate ACPI and platform configuration tables * Locate and launch GRUB from storage or flash -***EDK2 and GRUB are like the first- and second-stage rockets launching the payload.*** +***EDK II and GRUB are like the first- and second-stage rockets launching the payload.*** ## Stage 5: Linux kernel boot - + GRUB loads the Linux kernel and passes full control to the OS. Responsibilities: @@ -68,7 +68,7 @@ Now that you’ve examined the high-level boot stages, you can now examine each Each stage of the boot chain is backed by a dedicated component, such as secure bootloader, platform controller, or OS manager, and they work together to ensure reliable system bring-up. 
-## RSE: Runtime Security Engine (Cortex-M55) — (Stage 1: Security Validation)
+### RSE: Runtime Security Engine (Cortex-M55) (Stage 1: Security Validation)
RSE firmware runs on the Cortex‑M55 and plays a critical role in platform attestation and integrity enforcement.
* Authenticates BL2, SCP, and UEFI firmware images (Secure Boot)
@@ -84,14 +84,14 @@ RSE acts as the second layer of the chain of trust, maintaining a monitored and
* Manages DRAM setup and enables power for the AP
* Coordinates boot readiness with RSE via the Message Handling Unit (MHU)
-### TF-A: Trusted Firmware-A (BL1/BL2) - Stage 3
+### TF-A: Trusted Firmware-A (BL1/BL2) (Stage 3)
* **BL1** executes from ROM, initializes minimal hardware (clocks, UART), and loads BL2
* **BL2** validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages
TF-A establishes the system’s chain of trust and ensures downstream components are authenticated and loaded from trusted sources.
-### UEFI, GRUB, and the Linux kernel — Stages 4–5
+### UEFI, GRUB, and the Linux kernel (Stages 4–5)
* **UEFI (EDK II):** firmware abstraction, hardware discovery, ACPI table generation
* **GRUB:** selects and loads the Linux kernel image
* **Linux kernel:** initializes the OS, drivers, and launches userland (for example, BusyBox)
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
index be5d9f4feb..04f6b9056d 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
@@ -9,7 +9,7 @@ layout: learningpathall
In this module, you’ll set up your development environment on any Arm-based server and build the firmware stack required to simulate the RD-V3 platform. This Learning Path was tested on an AWS `m7g.4xlarge` Arm-based instance running Ubuntu 22.04.
-### Step 1: Prepare the Development Environment +## Step 1: Prepare the Development Environment First, ensure your system is up to date and install the required tools and libraries: @@ -23,7 +23,7 @@ git config --global user.name "" git config --global user.email "" ``` -### Step 2: Fetch the source code +## Step 2: Fetch the source code The RD‑V3 platform firmware stack consists of many independent components, such as TF‑A, SCP, RSE, UEFI, Linux kernel, and Buildroot. Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams. @@ -61,15 +61,15 @@ Syncing: 100% (83/83) 2:52 | 1 job | 0:01 platsw/edk2-platforms @ uefi/edk2/edk2 As of the time of writing, the latest release tag is `RD-INFRA-2025.07.03`. Newer tags might be available in future updates. {{% /notice %}} -This manifest fetches the required sources,including: +This manifest fetches the required sources, including: * TF‑A * SCP / RSE firmware -* EDK2 (UEFI) +* EDK II (UEFI) * Linux kernel * Buildroot and platform scripts -### Step 3: Build the Docker Image +## Step 3: Build the Docker Image There are two supported methods for building the reference firmware stack: **host-based** and **container-based**. @@ -170,7 +170,7 @@ your-username:hostname:/home/your-username/rdv3$ You can explore the container environment if you wish, then type `exit` to return to the host. -### Step 4: Build firmware +## Step 4: Build firmware Building the full firmware stack involves compiling several components and packaging them for simulation. 
The following command runs build and then package inside the Docker image: @@ -234,7 +234,7 @@ Reference mapping: | Initrd | `rootfs.cpio.gz` | Minimal filesystem | -### Optional: run the build manually from inside the container +## Optional: run the build manually from inside the container You can also build from within an interactive container session (useful for debugging or partial builds): diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md index d773322a21..2ef28d520c 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md @@ -1,47 +1,45 @@ --- -title: Simulate RD‑V3 Boot Flow on Arm FVP +title: Simulate RD-V3 Boot Flow on Arm FVP weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Simulating RD‑V3 with an Arm FVP +## Simulating RD-V3 with an Arm FVP -In the previous section, you built the complete CSS‑V3 firmware stack. -Now, you’ll use Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon. -This simulation brings up the full stack from BL1 to Linux shell using Buildroot. +In the previous section, you built the complete CSS-V3 firmware stack. +Now you’ll use an Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon. +This simulation brings up the full stack from BL1 to a Linux shell using Buildroot. -### Step 1: Download and Install the FVP Model +## Step 1: Download and Install the FVP Model -Before downloading the RD‑V3 FVP, it’s important to understand that each reference design release tag corresponds to a specific version of the FVP model. 
+Each reference design release tag corresponds to a specific FVP model version.
+For example, the **RD-INFRA-2025.07.03** tag is designed to work with **FVP version 11.29.35**.
-For example, the **RD‑INFRA‑2025.07.03** release tag is designed to work with **FVP version 11.29.35**.
+See the [RD-V3 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) for a full list of release tags, corresponding FVP versions, and their associated release notes, which summarize changes and validated test cases.
-You can refer to the [RD-V3 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) for a full list of release tags, corresponding FVP versions, and their associated release notes, which summarize changes and validated test cases.
-
-Download the matching FVP binary for your selected release tag using the link provided:
+Download and install the matching FVP:
```bash
mkdir -p ~/fvp
cd ~/fvp
wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3/FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
-
tar -xvf FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
./FVP_RD_V3.sh
```
-The FVP installation may prompt you with a few questions,choosing the default options is sufficient for this learning path. By default, the FVP will be installed in `/home/ubuntu/FVP_RD_V3`.
+The FVP installation might prompt you with a few questions; choosing the defaults is sufficient for this Learning Path. By default, the FVP installs under `/home/ubuntu/FVP_RD_V3`.
-### Step 2: Remote Desktop Set Up
+## Step 2: remote desktop setup
-The RD‑V3 FVP model launches multiple UART consoles—each mapped to a separate terminal window for different subsystems (e.g., Neoverse V3, Cortex‑M55, Cortex‑M7, panel).
+The RD‑V3 FVP model launches multiple UART consoles. Each console is mapped to a separate terminal window for different subsystems (for example, Neoverse V3, Cortex‑M55, Cortex‑M7, panel).
If you’re accessing the platform over SSH, these UART consoles can still be displayed, but network latency and graphical forwarding can severely degrade performance. To interact with different UARTs more efficiently, it is recommend to install a remote desktop environment using `XRDP`. This provides a smoother user experience when dealing with multiple terminal windows and system interactions.
-You will need to install the required packages:
+Install required packages and enable XRDP:
```bash
@@ -52,44 +50,43 @@ sudo systemctl enable --now xrdp
To allow remote desktop connections, you need to open port 3389 (RDP) in your AWS EC2 security group:
- Go to the EC2 Dashboard → Security Groups
-- Select the security group associated with your instance
-- Under the Inbound rules tab, click Edit inbound rules
+- Select the security group associated with your instance
+- In the **Inbound rules** tab, select **Edit inbound rules**
- Add the following rule:
- - Type: RDP
- - Port: 3389
- - Source: your local machine IP
+ - **Type**: RDP
+ - **Port**: 3389
+ - **Source**: your local machine IP
For better security, limit the source to your current public IP instead of 0.0.0.0/0.
-***Switch to Xorg (required on Ubuntu 22.04):***
+## Switch to Xorg (required on Ubuntu 22.04)
Wayland is the default display server on Ubuntu 22.04, but it is not compatible with XRDP.
-To enable XRDP remote sessions, you need to switch to Xorg by modifying the GDM configuration.
+To enable XRDP remote sessions, you must switch to Xorg by modifying the GDM configuration.
-Open the `/etc/gdm3/custom.conf` in a text editor.
-Find the line:
+Open `/etc/gdm3/custom.conf` in a text editor.
Find the line:
```output
#WaylandEnable=false
```
-Uncomment it by removing the # so it becomes:
+Uncomment it:
```output
WaylandEnable=false
```
-Then restart the GDM display manager for the change to take effect:
+Restart the GDM display manager:
```bash
sudo systemctl restart gdm3
```
-After reboot, XRDP will use Xorg and you should be able to connect to the Arm server via Remote Desktop.
+After the restart, XRDP sessions use Xorg and you can connect to the Arm server using Remote Desktop.
-### Step 3: Launch the Simulation
+## Step 3: launch the simulation
-Once connected via Remote Desktop, open a terminal and launch the RD‑V3 FVP simulation:
+Once connected using Remote Desktop, open a terminal and launch the RD‑V3 FVP simulation:
```bash
cd ~/rdv3/model-scripts/rdinfra
@@ -97,26 +94,27 @@ export MODEL=/home/ubuntu/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3
./boot-buildroot.sh -p rdv3 &
```
-The command will launch the simulation and open multiple xterm windows, each corresponding to a different CPU.
-You can start by locating the ***terminal_ns_uart0*** window — in it, you should see the GRUB menu.
+The command launches the simulation and opens multiple xterm windows, each corresponding to a different CPU.
+
+Start by locating the ***terminal_ns_uart0*** window. In it, you should see the GRUB menu.
-From there, select RD-V3 Buildroot in the GRUB menu and press Enter to proceed.
+Select **RD-V3 Buildroot** in the GRUB menu and press **Enter** to proceed.
![img3 alt-text#center](rdv3_sim_run.jpg "GRUB Menu")
-Booting Buildroot will take a little while — you’ll see typical Linux boot messages scrolling through.
+Booting Buildroot takes a short while as Linux messages scroll by.
+
Eventually, the system will stop at the `Welcome to Buildroot` message on the ***terminal_ns_uart0*** window.
-At the `buildroot login:` prompt, type `root` and press Enter to log in.
-![img4 alt-text#center](rdv3_sim_login.jpg "Buildroot login") +Log in at the `buildroot login:` prompt with user `root`. -Congratulations — you’ve successfully simulated the boot process of the RD-V3 software you compiled earlier, all on FVP! +![img4 alt-text#center](rdv3_sim_login.jpg "Buildroot login") -### Step 4: Understand the UART Outputs +Congratulations - you’ve now successfully simulated the boot of the RD-V3 software you built earlier, all on FVP! -When you launch the RD‑V3 FVP model, it opens multiple terminal windows—each connected to a different UART channel. -These UARTs provide console logs from various firmware components across the system. +## Step 4: Understand the UART Outputs -Below is the UART-to-terminal mapping based on the default FVP configuration: +The RD-V3 FVP opens multiple terminals, each connected to a different UART that carries logs from specific firmware components. +UART-to-terminal mapping based on the default FVP configuration: | Terminal Window Title | UART | Output Role | Connected Processor | |----------------------------|------|------------------------------------|-----------------------| diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md index ca1d9d1bb6..2caa8c3750 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md @@ -21,7 +21,7 @@ The RD‑V3‑R1 platform is a dual-chip simulation environment built to model m - Adds MCP (Cortex‑M7) to support cross-die management - More complex power/reset coordination -### Step 1: Clone the RD‑V3‑R1 Firmware Stack +## Step 1: Clone the RD‑V3‑R1 Firmware Stack Initialize and sync the codebase for RD‑V3‑R1: @@ -33,7 +33,7 @@ repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-r 
repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle ``` -### Step 2: Install RD-V3-R1 FVP +## Step 2: Install RD-V3-R1 FVP Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to determine which FVP model version matches your selected release tag. Then download and install the corresponding FVP binary. @@ -46,7 +46,7 @@ tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz ./FVP_RD_V3_R1.sh ``` -### Step 3: Build the Firmware +## Step 3: Build the Firmware Since you have already created the Docker image for firmware building in a previous section, there is no need to rebuild it for RD‑V3‑R1. @@ -66,7 +66,7 @@ docker run --rm \ ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package" ``` -### Step 4: Launch the Simulation +## Step 4: Launch the Simulation Once connected via Remote Desktop, open a terminal and launch the RD‑V3‑R1 FVP simulation: @@ -84,7 +84,7 @@ You’ll observe additional UART consoles for components like the MCP, and you c Similar to the previous session, the terminal logs are stored in `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`. -### Step 5: Customize Firmware and Confirm MCP Execution +## Step 5: Customize Firmware and Confirm MCP Execution To wrap up this learning path, let’s verify that your firmware changes can be compiled and simulated successfully within the RD‑V3‑R1 environment. 
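The "verify your firmware changes" step above ultimately comes down to searching the saved UART logs (stored under `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`, as noted earlier) for a known string. A minimal shell sketch of that check is below; the log-file layout and the marker string are assumptions, so substitute your own log message:

```shell
# Sketch: confirm that a custom firmware log line reached the saved UART logs.
# The default log directory comes from this Learning Path; the marker string
# is a placeholder for whatever message you added to the firmware.
check_uart_marker() {
    # $1 = directory holding the per-terminal log files, $2 = string to find
    local log_dir="$1" marker="$2"
    if grep -R -n -- "$marker" "$log_dir" 2>/dev/null; then
        echo "Marker found: the firmware change executed"
    else
        echo "Marker not found: re-check the MCP console output" >&2
        return 1
    fi
}

# Default log location used by the RD-V3-R1 scripts; adjust to your workspace.
check_uart_marker "$HOME/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1" "My custom log line" || true
```

Running this after each simulation gives a quick pass/fail signal without manually scrolling through every UART console.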
From 26767c2d0ad583a7c5fd7c81fb48a77d8c2e2a48 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 10 Sep 2025 06:09:03 +0000 Subject: [PATCH 4/7] Further enhancements --- .../neoverse-rdv3-swstack/5_rdv3_modify.md | 66 ++++++++----------- 1 file changed, 29 insertions(+), 37 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md index 2caa8c3750..e5abba844f 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md @@ -6,39 +6,39 @@ weight: 6 layout: learningpathall --- -## Build and Run RDV3-R1 Dual Chip Platform +## Build and run the RD-V3-R1 dual-chip platform -The RD‑V3‑R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD‑V3 design by introducing a second application processor and a Management Control Processor (MCP). +The RD-V3-R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD-V3 design by introducing a second application processor and a Management Control Processor (MCP). 
-***Key Use Cases*** +### Key use cases -- Simulate chiplet-style boot flow with two APs -- Observe coordination between SCP and MCP across dies -- Test secure boot in a distributed firmware environment +- Simulating a chiplet-style boot flow with two APs +- Observing coordination between SCP and MCP across dies +- Testing secure boot in a distributed firmware environment -***Differences from RD‑V3*** -- Dual AP boot flow instead of single AP -- Adds MCP (Cortex‑M7) to support cross-die management +### Key differences from RD-V3 + +- Dual AP boot flow instead of a single AP +- MCP (Cortex-M7) to support cross-die management - More complex power/reset coordination -## Step 1: Clone the RD‑V3‑R1 Firmware Stack +## Step 1: Clone the RD-V3-R1 firmware stack -Initialize and sync the codebase for RD‑V3‑R1: +Initialize and sync the codebase for RD-V3-R1: ```bash cd ~ mkdir rdv3r1 cd rdv3r1 repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3r1.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1 -repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle +repo sync -c -j "$(nproc)" --fetch-submodules --force-sync --no-clone-bundle ``` -## Step 2: Install RD-V3-R1 FVP +## Step 2: Install the RD-V3-R1 FVP + +Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to pick the FVP version that matches your tag, then download and install it: -Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to determine which FVP model version matches your selected release tag. -Then download and install the corresponding FVP binary. 
-```bash mkdir -p ~/fvp cd ~/fvp wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3-r1/FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz @@ -48,11 +48,9 @@ tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz ## Step 3: Build the Firmware -Since you have already created the Docker image for firmware building in a previous section, there is no need to rebuild it for RD‑V3‑R1. +If you built the Docker image earlier, you can reuse it for RD-V3-R1. Run the full build and package flow: -Run the full firmware build and packaging process: -```bash cd ~/rdv3r1 docker run --rm \ -v "$PWD:$PWD" \ @@ -66,31 +64,27 @@ docker run --rm \ ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package" ``` -## Step 4: Launch the Simulation - -Once connected via Remote Desktop, open a terminal and launch the RD‑V3‑R1 FVP simulation: +## Step 4: Launch the simulation +From a desktop session on the build host, start the RD-V3-R1 FVP: ```bash cd ~/rdv3r1/model-scripts/rdinfra -export MODEL=/home/ubuntu/FVP_RD_V3_R1/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3_R1_R1 +export MODEL="$HOME/FVP_RD_V3_R1/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3_R1" # adjust if your path/toolchain differs ./boot-buildroot.sh -p rdv3r1 & ``` -This command starts the dual-chip simulation. -You’ll observe additional UART consoles for components like the MCP, and you can verify that both application processors (AP0 and AP1) are brought up in a coordinated manner. - -![img5 alt-text#center](rdv3r1_sim_login.jpg "RDV3 R1 buildroot login") +This starts the dual-chip simulation. You’ll see additional UART consoles (for example, MCP) and can verify both application processors (AP0 and AP1) boot in a coordinated manner. -Similar to the previous session, the terminal logs are stored in `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`. 
+![img5 alt-text#center](rdv3r1_sim_login.jpg "RD-V3-R1 Buildroot login")
 
+As before, the terminal logs are stored under `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`.
 
-## Step 5: Customize Firmware and Confirm MCP Execution
 
-To wrap up this learning path, let’s verify that your firmware changes can be compiled and simulated successfully within the RD‑V3‑R1 environment.
+## Step 5: Customize firmware and confirm MCP execution
 
-Edit the MCP source file `~/rdv3r1/host/scp/framework/src/fwk_module.c`
+To validate a firmware change in the RD-V3-R1 environment, edit the MCP source file `~/rdv3r1/host/scp/framework/src/fwk_module.c`.
 
-Locate the function `fwk_module_start()`. Add the following logging line just before `return FWK_SUCCESS;`:
+Locate the function `fwk_module_start()` and add the following logging line just before `return FWK_SUCCESS;`:
 
 ```c
 int fwk_module_start(void)
@@ -120,13 +114,11 @@ docker run --rm \
    ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package"
 ```
 
-Launch the FVP simulation again and observe the UART output for MCP.
+Launch the FVP simulation again and check the MCP UART output.
 
 ![img6 alt-text#center](rdv3r1_sim_codechange.jpg "RDV3 R1 modify firmware")
 
-If the change was successful, your custom log line will appear in the MCP console—confirming that your code was integrated and executed as part of the firmware boot process.
-
-You’ve now successfully simulated a dual-chip Arm server platform using RD‑V3‑R1 on FVP—from cloning firmware sources to modifying secure control logic.
+If the change was successful, your custom log line will appear in the MCP console - confirming that your code was integrated and executed as part of the firmware boot process.
+
+You’ve now successfully simulated a dual-chip Arm server platform using RD‑V3‑R1 on FVP and validated a firmware change end-to-end, setting you up for deeper customization (for example, BMC integration) in future development cycles.
-This foundation sets the stage for deeper exploration, such as customizing platform firmware or integrating BMC workflows in future development cycles. From 7d608bc91735280ead9d221ec66e974013124c02 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 10 Sep 2025 08:35:47 +0000 Subject: [PATCH 5/7] Further improvements --- .../1_introduction_rdv3.md | 2 +- .../neoverse-rdv3-swstack/2_rdv3_bootseq.md | 33 ++++++++++--------- .../neoverse-rdv3-swstack/3_rdv3_sw_build.md | 20 ++++++++--- .../neoverse-rdv3-swstack/4_rdv3_on_fvp.md | 24 +++++++------- .../neoverse-rdv3-swstack/5_rdv3_modify.md | 15 +++++---- .../neoverse-rdv3-swstack/_index.md | 14 ++++---- 6 files changed, 65 insertions(+), 43 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md index f4d47bd8d1..d97765cf77 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md @@ -64,7 +64,7 @@ Key capabilities of FVPs: FVPs enable developers to verify boot sequences, debug firmware handoffs, and even simulate RSE (Runtime Security Engine) behaviors, all pre-silicon. -## Comparing RD-V3 FVP variants +## Compare RD-V3 FVP variants To support different use cases and levels of platform complexity, Arm offers several virtual models based on the CSS-V3 architecture: RD-V3, RD-V3-R1, RD-V3-Cfg1 (CFG1), and RD-V3-Cfg2 (CFG2). While they share a common foundation, they differ in chip count, system topology, and simulation flexibility. 
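Since the RD-V3 family variants above each ship as a separate FVP binary, a quick way to see which ones are available locally is to scan the install directories. A minimal shell sketch, assuming the default install locations used in this Learning Path (`$HOME/FVP_RD_V3` and `$HOME/FVP_RD_V3_R1`); adjust the base path if you installed elsewhere:

```shell
# Sketch: report which RD-V3 family FVP models are present on this machine.
# The directory layout is an assumption based on the default installer paths.
list_fvp_models() {
    local base="$1"; shift
    local model bin
    for model in "$@"; do
        # Look for an installed binary whose name starts with the model name.
        bin=$(find "$base/$model" -type f -name "${model}*" 2>/dev/null | head -n 1)
        if [ -n "$bin" ]; then
            echo "$model: $bin"
        else
            echo "$model: not installed"
        fi
    done
}

list_fvp_models "$HOME" FVP_RD_V3 FVP_RD_V3_R1
```

This is only a convenience check; the authoritative version match between a release tag and an FVP model is the release-tags page referenced in the text.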
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md index ccde5e9e5c..859af777ee 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md @@ -10,7 +10,7 @@ layout: learningpathall To ensure the platform transitions securely and reliably from power-on to operating system launch, this section introduces the roles and interactions of each firmware component within the RD-V3 boot process. You’ll learn how each component contributes to system initialization and how control is systematically handed off across the boot chain. -## How the system boots up +## Booting the system up In the RD-V3 platform, each firmware component such as TF-A, RSE, SCP, MCP, LCP, and UEFI operates independently but participates in a well-defined sequence. Each is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling. @@ -24,14 +24,14 @@ After BL2, the Runtime Security Engine (RSE, Cortex-M55) authenticates critical ***RSE acts as the platform’s security gatekeeper.*** -## Stage 2: Early hardware initialization (SCP / MCP) +## Stage 2: Early hardware initialization (SCP/MCP) Once RSE completes verification, the System Control Processor (SCP, Cortex-M7) and the Management Control Processor (MCP, where present) are released from reset. 
They perform essential bring-up: -* Initialize clocks, reset lines, and power domains -* Prepare DRAM and interconnect -* Enable the application processor (AP) cores and signal readiness to TF-A +* Initializing clocks, reset lines, and power domains +* Preparing DRAM and interconnect +* Enabling the application processor (AP) cores and signaling readiness to TF-A ***SCP/MCP are the ground crew bringing hardware systems online.*** @@ -45,10 +45,10 @@ When the AP is released, it begins executing Trusted Firmware-A (TF-A) at EL3 fr TF-A hands off control to UEFI firmware (EDK II), which performs device discovery and launches GRUB. -Responsibilities: -* Detect and initialize memory, PCIe, and boot devices -* Generate ACPI and platform configuration tables -* Locate and launch GRUB from storage or flash +Responsibilities here include: +* Detecting and initializing memory, PCIe, and boot devices +* Generating ACPI and platform configuration tables +* Locating and launching GRUB from storage or flash ***EDK II and GRUB are like the first- and second-stage rockets launching the payload.*** @@ -56,10 +56,10 @@ Responsibilities: GRUB loads the Linux kernel and passes full control to the OS. -Responsibilities: -* Initialize device drivers and kernel subsystems -* Mount the root filesystem -* Start user-space processes (for example, BusyBox) +Responsibilities include: +* Initializing device drivers and kernel subsystems +* Mounting the root filesystem +* Starting user-space processes (for example, BusyBox) ***The Linux kernel is the spacecraft - it takes over and begins its mission.*** @@ -127,9 +127,12 @@ For example, `RSE ← BL2` means BL2 loads/triggers RSE; `AP ← SCP + RSE` mean The right-facing arrows in `UEFI → GRUB → Linux` indicate a **direct execution path**—each stage passes control directly to the next. {{% /notice %}} -This layered approach supports modular testing, independent debugging, and early simulation—essential for secure and robust platform bring-up. 
+This layered approach supports modular testing, independent debugging, and early simulation, which is essential for secure and robust platform bring-up. + +## Summary + +In this section, you have: -**In this section, you have:** * Explored the full boot sequence of the RD-V3 platform, from power-on to Linux login * Understood the responsibilities of TF-A, RSE, SCP, MCP, LCP, and UEFI * Learned how secure boot is enforced and how each module hands off control diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md index 04f6b9056d..e87ed78c35 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md @@ -5,19 +5,22 @@ weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- + ## Building the RD-V3 Reference Platform Software Stack In this module, you’ll set up your development environment on any Arm-based server and build the firmware stack required to simulate the RD-V3 platform. This Learning Path was tested on an AWS `m7g.4xlarge` Arm-based instance running Ubuntu 22.04. -## Step 1: Prepare the Development Environment +## Step 1: Set up your development environment -First, ensure your system is up to date and install the required tools and libraries: +First, check that your system is current and install the required dependencies: ```bash sudo apt update sudo apt install -y curl git ``` -Configure git as follows: + +Configure git: + ``` git config --global user.name "" git config --global user.email "" @@ -25,7 +28,16 @@ git config --global user.email "" ## Step 2: Fetch the source code -The RD‑V3 platform firmware stack consists of many independent components, such as TF‑A, SCP, RSE, UEFI, Linux kernel, and Buildroot. 
Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams. +The RD‑V3 platform firmware stack consists of many independent components, such as: + +- TF‑A +- SCP +- RSE +- UEFI +- Linux kernel +- Buildroot. + +Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams. If `repo` is not installed, you can download it and add it to your `PATH`: diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md index 2ef28d520c..e5fe961b28 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md @@ -8,11 +8,9 @@ layout: learningpathall ## Simulating RD-V3 with an Arm FVP -In the previous section, you built the complete CSS-V3 firmware stack. -Now you’ll use an Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon. -This simulation brings up the full stack from BL1 to a Linux shell using Buildroot. +In the previous section, you built the complete CSS-V3 firmware stack. Now you’ll use an Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon. This simulation brings up the full stack from BL1 to a Linux shell using Buildroot. -## Step 1: Download and Install the FVP Model +## Step 1: Download and install the FVP model Each reference design release tag corresponds to a specific FVP model version. 
 For example, the **RD-INFRA-2025.07.03** tag is designed to work with **FVP version 11.29.35**.
 
@@ -29,15 +27,15 @@ tar -xvf FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
 ./FVP_RD_V3.sh
 ```
 
-The FVP installation might prompt you with a few questions,choosing the defaults is sufficient for this Learning Path. By default, the FVP installs under `/home/ubuntu/FVP_RD_V3`.
+The FVP installation might prompt you with a few questions; choose the default settings. By default, the FVP installs under `/home/ubuntu/FVP_RD_V3`.
 
-## Step 2: remote desktop setup
+## Step 2: Set up remote desktop
 
 The RD‑V3 FVP model launches multiple UART consoles.
 Each console is mapped to a separate terminal window for different subsystems (for example, Neoverse V3, Cortex‑M55, Cortex‑M7, panel).
 
 If you’re accessing the platform over SSH, these UART consoles can still be displayed, but network latency and graphical forwarding can severely degrade performance.
 
-To interact with different UARTs more efficiently, it is recommend to install a remote desktop environment using `XRDP`. This provides a smoother user experience when dealing with multiple terminal windows and system interactions.
+To interact with different UARTs more efficiently, install a remote desktop environment using `XRDP`. This provides a smoother user experience when dealing with multiple terminal windows and system interactions.
 Install required packages and enable XRDP:
 
@@ -49,6 +47,7 @@ sudo systemctl enable --now xrdp
 ```
 
 To allow remote desktop connections, you need to open port 3389 (RDP) in your AWS EC2 security group:
+
 - Go to the EC2 Dashboard → Security Groups
 - Select your instance’s group → **Inbound rules** → **Edit inbound rules**
 - Add a rule: Type: RDP, Port: 3389, Source: your public IP (recommended)
@@ -63,9 +62,11 @@ For better security, limit the source to your current public IP instead of 0.0.0
 
 ## Switch to Xorg (required on Ubuntu 22.04)
 
 Wayland is the default display server on Ubuntu 22.04, but it is not compatible with XRDP.
-To enable XRDP remote sessions, you must switch to Xorg by modifying the GDM configuration.
+To enable XRDP remote sessions, you must switch to Xorg by modifying the GDM configuration:
+
+Open `/etc/gdm3/custom.conf` in a text editor.
 
-Open the `/etc/gdm3/custom.conf` in a text editor. Find the line:
+Find the line:
 
 ```output
 #WaylandEnable=false
@@ -78,15 +79,16 @@ WaylandEnable=false
 ```
 
 Restart the GDM display manager:
+
 ```bash
 sudo systemctl restart gdm3
 ```
 
-After restart, XRDP sessions will use Xorg and you can connect to it in the Arm server using Remote Desktop.
+After restart, XRDP sessions will use Xorg, and you can connect to the Arm server using a remote desktop client.
## Step 3: launch the simulation -Once connected using Remote Desktop, open a terminal and launch the RD‑V3 FVP simulation: +Once connected using a remote desktop, open a terminal and launch the RD‑V3 FVP simulation: ```bash cd ~/rdv3/model-scripts/rdinfra diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md index e5abba844f..399e601032 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md @@ -6,17 +6,17 @@ weight: 6 layout: learningpathall --- -## Build and run the RD-V3-R1 dual-chip platform +## The RD-V3-R1 dual-chip platform The RD-V3-R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD-V3 design by introducing a second application processor and a Management Control Processor (MCP). -### Key use cases +Key use cases of RD-V3-R! 
are: - Simulating a chiplet-style boot flow with two APs - Observing coordination between SCP and MCP across dies - Testing secure boot in a distributed firmware environment -### Key differences from RD-V3 +Key differences from RD-V3 are: - Dual AP boot flow instead of a single AP - MCP (Cortex-M7) to support cross-die management @@ -38,7 +38,7 @@ repo sync -c -j "$(nproc)" --fetch-submodules --force-sync --no-clone-bundle Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to pick the FVP version that matches your tag, then download and install it: - +```bash mkdir -p ~/fvp cd ~/fvp wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3-r1/FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz @@ -46,11 +46,13 @@ tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz ./FVP_RD_V3_R1.sh ``` -## Step 3: Build the Firmware +## Step 3: Build the firmware -If you built the Docker image earlier, you can reuse it for RD-V3-R1. Run the full build and package flow: +If you built the Docker image earlier, you can reuse it for RD-V3-R1. 
+Run the full build and package flow: +```bash cd ~/rdv3r1 docker run --rm \ -v "$PWD:$PWD" \ @@ -67,6 +69,7 @@ docker run --rm \ ## Step 4: Launch the simulation From a desktop session on the build host, start the RD-V3-R1 FVP: + ```bash cd ~/rdv3r1/model-scripts/rdinfra export MODEL="$HOME/FVP_RD_V3_R1/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3_R1" # adjust if your path/toolchain differs diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md index 13dd130065..e97cc2d487 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md @@ -3,14 +3,16 @@ title: CSS-V3 Pre-Silicon Software Development Using Neoverse Servers minutes_to_complete: 90 -who_is_this_for: This Learning Path is for firmware developers, system architects, and silicon validation engineers building Arm Neoverse CSS platforms. It focuses on pre-silicon development for the CSS-V3 reference design using Fixed Virtual Platforms (FVPs). You’ll build, customize, and validate firmware on the RD-V3 platform before hardware is available. +who_is_this_for: This advanced topic is for firmware developers, system architects, and silicon validation engineers working on Arm Neoverse CSS platforms who require a pre-silicon workflow for the CSS-V3 reference design using Fixed Virtual Platforms (FVPs). 
learning_objectives: - - Understand the architecture of Arm Neoverse CSS-V3 as the foundation for scalable server-class platforms - - Build and boot the RD-V3 firmware stack using TF-A, SCP, RSE, and UEFI - - Simulate multi-core, multi-chip systems with Arm FVP models and interpret boot logs - - Modify platform control firmware to test custom logic and validate via pre-silicon simulation - + - Explain the CSS-V3 architecture and the RD-V3 firmware boot sequence (TF-A, RSE, SCP/MCP/LCP, UEFI/GRUB, Linux) + - Set up a containerized build environment and sync sources with a pinned manifest using repo + - Build and boot the RD-V3 firmware stack on FVP and map UART consoles to components + - Interpret boot logs to verify bring-up and diagnose boot-stage issues + - Modify platform control firmware (for example, SCP/MCP) and validate changes via pre-silicon simulation + - Launch a dual-chip RD-V3-R1 simulation and verify AP/MCP coordination + prerequisites: - Access to an Arm Neoverse-based Linux machine (cloud or local) with at least 80 GB of free storage - Familiarity with Linux command-line tools and basic scripting From 02a8d1f4aaacef515e91b3b4fc4cc5717432b56e Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 10 Sep 2025 09:15:44 +0000 Subject: [PATCH 6/7] Tweaks --- .../neoverse-rdv3-swstack/2_rdv3_bootseq.md | 8 ++++---- .../neoverse-rdv3-swstack/3_rdv3_sw_build.md | 16 +++++++++------- .../neoverse-rdv3-swstack/5_rdv3_modify.md | 2 +- .../neoverse-rdv3-swstack/_index.md | 2 +- 4 files changed, 15 insertions(+), 13 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md index 859af777ee..63f307520f 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md +++ 
b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md @@ -84,7 +84,7 @@ RSE acts as the second layer of the chain of trust, maintaining a monitored and * Manages DRAM setup and enables power for the AP * Coordinates boot readiness with RSE via the Message Handling Unit (MHU) -## TF-A: Trusted Firmware-A (BL1/BL2) (Stage 3) +### TF-A: Trusted Firmware-A (BL1/BL2) (Stage 3) * **BL1** executes from ROM, initializes minimal hardware (clocks, UART), and loads BL2 * **BL2** validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages @@ -112,12 +112,12 @@ LCP support depends on the FVP model and can be omitted in simplified setups. The RD-V3 boot sequence follows a multi-stage, dependency-driven handshake model, where each firmware module validates, powers, or authorizes the next. -| Stage | Dependency chain | Description | +| Stage(s) | Dependency chain | Description | |------:|----------------------|-------------------------------------------------------------------------------| | 1 | RSE ← BL2 | RSE is loaded and triggered by BL2 to begin security validation | | 2 | SCP ← BL2 + RSE | SCP initialization requires BL2 and authorization from RSE | | 3 | AP ← SCP + RSE | The AP starts only after SCP sets power and RSE permits | -| 4 | UEFI → GRUB → Linux | UEFI launches GRUB, which loads the kernel and enters the OS | +| 4-5 | UEFI → GRUB → Linux | UEFI launches GRUB, which loads the kernel and enters the OS | This handshake ensures no stage proceeds unless its dependencies have securely initialized and authorized the next step. 
@@ -134,7 +134,7 @@ This layered approach supports modular testing, independent debugging, and early
 In this section, you have:
 
 * Explored the full boot sequence of the RD-V3 platform, from power-on to Linux login
-* Understood the responsibilities of TF-A, RSE, SCP, MCP, LCP, and UEFI
+* Learned about the responsibilities of TF-A, RSE, SCP, MCP, LCP, and UEFI
 * Learned how secure boot is enforced and how each module hands off control
 * Interpreted boot dependencies using FVP simulation and UART logs
 
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
index e87ed78c35..542b6f1ae0 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
@@ -28,16 +28,18 @@ git config --global user.email ""
 
 ## Step 2: Fetch the source code
 
-The RD‑V3 platform firmware stack consists of many independent components, such as:
+The RD‑V3 platform firmware stack consists of multiple components, most maintained in separate Git repositories, such as:
 
 - TF‑A
-- SCP
-- RSE
-- UEFI
+- SCP/MCP
+- RSE (TF-M)
+- UEFI (EDK II)
 - Linux kernel
-- Buildroot.
+- Buildroot
+- kvmtool (lkvm)
+- RMM (optional)
 
-Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams.
+Use the `repo` tool with the RD-V3 manifest to sync these sources from multiple upstreams consistently, typically to a pinned release tag.
If `repo` is not installed, you can download it and add it to your `PATH`: @@ -182,7 +184,7 @@ your-username:hostname:/home/your-username/rdv3$ You can explore the container environment if you wish, then type `exit` to return to the host. -## Step 4: Build firmware +## Step 4: Build firmware Building the full firmware stack involves compiling several components and packaging them for simulation. The following command runs build and then package inside the Docker image: diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md index 399e601032..cce9ab4d05 100644 --- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md @@ -10,7 +10,7 @@ layout: learningpathall The RD-V3-R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD-V3 design by introducing a second application processor and a Management Control Processor (MCP). -Key use cases of RD-V3-R! 
are:
+Key use cases of RD-V3-R1 are:
 
 - Simulating a chiplet-style boot flow with two APs
 - Observing coordination between SCP and MCP across dies
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
index e97cc2d487..b2974b3c3b 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
@@ -1,5 +1,5 @@
 ---
-title: CSS-V3 Pre-Silicon Software Development Using Neoverse Servers
+title: Develop and Validate Firmware Pre-Silicon on Arm Neoverse CSS V3
 
 minutes_to_complete: 90
 

From 14b0bd9111fcbac31d5e13c820ea04e914a9ab78 Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Wed, 10 Sep 2025 09:05:21 -0400
Subject: [PATCH 7/7] Update 3_rdv3_sw_build.md

---
 .../neoverse-rdv3-swstack/3_rdv3_sw_build.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
index 542b6f1ae0..c68e91c332 100644
--- a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
@@ -19,9 +19,9 @@ sudo apt update
 sudo apt install -y curl git
 ```
 
-Configure git:
+Configure git (optional):
 
-```
+```bash
 git config --global user.name ""
 git config --global user.email ""
 ```
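If you do set a Git identity as shown in the final hunk above, you can confirm it took effect before running `repo`. A minimal sketch with placeholder values; substitute your own name and email:

```shell
# Sketch: set and then read back the identity git will use for commits.
# "Your Name" and "you@example.com" are placeholders - use your own values.
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Read the values back to confirm the global configuration was written.
echo "Commit identity: $(git config --global --get user.name) <$(git config --global --get user.email)>"
```

`repo` itself invokes `git`, so the same global identity applies to any commits made inside the synced component repositories.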