diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/1-install-plugin.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/1-install-plugin.md new file mode 100644 index 0000000000..e2b10102fb --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/1-install-plugin.md @@ -0,0 +1,45 @@ +--- +title: Introduction to neural graphics and Neural Super Sampling (NSS) +weight: 2 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## What is the Neural Graphics Development Kit? + +The Neural Graphics Development Kit empowers game developers to build immersive mobile gaming experiences using a neural accelerator for post-processing effects like upscaling. By combining Unreal Engine and the ML extensions for Vulkan, these tools allow you to integrate and evaluate AI-based upscaling technologies like Neural Super Sampling (NSS). This Learning Path walks you through the setup and execution of NSS for Unreal Engine. + +## What is Neural Super Sampling? + +NSS is an upscaling technology from Arm, purpose-built for real-time performance and power efficiency on mobile and embedded platforms. + +It uses a compact neural network to: +- Upscale low-resolution frames into high-resolution visuals +- Incorporate temporal data such as motion vectors, depth, and feedback +- Reduce bandwidth usage and GPU load + +Powered by the ML extensions for Vulkan, this new technology delivers smooth, crisp image quality, optimized for **mobile-class hardware** with a **Neural Accelerator** (NX). You’ll be able to render frames at a lower resolution and then upscale them using the technology, which helps you achieve higher frame rates without compromising the visual experience. This is especially useful on mobile, handheld, or thermally limited platforms, where battery life and thermal headroom are critical. It can also deliver improved image quality compared to other upsampling techniques, like spatio-temporal implementations. 
+
+Under the hood, Neural Super Sampling for Unreal Engine (NSS for UE) runs its neural inference through Vulkan using **ML extensions for Vulkan**, which bring machine learning workloads into the graphics pipeline. The Development Kit includes **emulation layers** that simulate the behavior of the extensions on Vulkan compute-capable GPUs. These layers allow you to test and iterate without requiring access to NX hardware.
+
+## Neural Upscaling in Unreal Engine
+
+With these resources, you can seamlessly integrate NSS into any Unreal Engine project. The setup is designed to work with Vulkan as your rendering backend, and you don’t need to overhaul your workflow - just plug it in and start leveraging ML-powered upscaling right away. The technology is available as a source-code implementation that you will build with Visual Studio.
+
+## Download required artifacts
+
+Before you begin, download the required plugins and dependencies. These two repositories contain everything you need to set up NSS for Unreal Engine, including the VGF model file and the ML Emulation Layers for Vulkan.
+
+### 1. Download the NSS plugin
+
+[**Neural Super Sampling Unreal Engine Plugin** → GitHub Repository](https://github.com/arm/neural-graphics-for-unreal)
+
+Download the latest release package and extract it on your Windows machine.
+
+### 2. Download the runtime for ML Extensions for Vulkan
+[**Unreal NNE Runtime RDG for ML Extensions for Vulkan** → GitHub Repository](https://github.com/arm/ml-extensions-for-vulkan-unreal-plugin)
+
+Download and extract the release package on your Windows machine.
+
+Once you’ve extracted both repositories, proceed to the next section to set up your development environment and enable the NSS plugin.
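As a quick sanity check before continuing, the sketch below confirms that both extracted folders are present. `PLUGIN_ROOT` is an assumption - point it at wherever you extracted the release packages.

```shell
# Sanity-check sketch: verify both extracted plugin folders exist.
# PLUGIN_ROOT is an assumed location - adjust it to your own setup.
PLUGIN_ROOT="$HOME/Downloads"
for dir in NSS NNERuntimeRDGMLExtensionsForVulkan; do
  if [ -d "$PLUGIN_ROOT/$dir" ]; then
    echo "OK: $dir"
  else
    echo "MISSING: $dir - re-extract the release package"
  fi
done
```

If either folder is reported missing, re-download and extract the corresponding release before moving on.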
\ No newline at end of file
diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/2-emulation-layer.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/2-emulation-layer.md
new file mode 100644
index 0000000000..ff6ec801ab
--- /dev/null
+++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/2-emulation-layer.md
@@ -0,0 +1,81 @@
+---
+title: Setting up the emulation layers
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Install dependencies
+
+To run NSS in your Unreal Engine project, install and configure the following:
+
+- **Vulkan SDK**: Required for development of applications that use Vulkan, and to enable the Vulkan Configurator. The latter sets up the emulation layers used for running ML extensions for Vulkan workloads.
+- **ML Emulation Layer for Vulkan**: These layers implement the ML extensions for Vulkan and allow neural inference to run in emulation through Vulkan’s compute backend. They are included in the `NNERuntimeRDGMLExtensionsForVulkan` zip you downloaded in a previous step, and are activated through a Vulkan Configurator layer configuration so the Unreal Engine plugin can use them.
+- **NSS for Unreal Engine plugins**: These include `NSS` (the inference and model interface) and `NNERuntimeRDGMLExtensionsForVulkan` (which connects Unreal’s Render Dependency Graph to the ML extensions for Vulkan).
+
+These components allow you to run NSS in Unreal Engine, using ML emulation layers for Vulkan for development and testing.
+
+## Install Vulkan Software Development Kit
+
+Go to the [Vulkan SDK landing page](https://vulkan.lunarg.com/sdk/home) and download the SDK Installer for Windows. After you have run the installer, you can move on to the next step.
+
+## Configure Vulkan Layers
+
+Vulkan Configurator enables the emulation layers for Vulkan applications; keep it running in the background whenever you want to use them with Unreal Engine.
+ +To emulate the ML extensions for Vulkan: +1. Launch the **Vulkan Configurator** (bundled with the Vulkan SDK) from the Windows **Start** menu. +2. In the **Apply a Vulkan Loader Configuration** list, right-click and choose **Create a new Configuration**. You can give the new configuration any name, for example `NSS`. +3. Navigate to the **Vulkan Layers Location** tab. +4. Append a user-defined path pointing to the emulation layers you downloaded in the previous section: + ``` + /NNERuntimeRDGMLExtensionsForVulkan/MLEmulationLayerForVulkan + ``` +![Add user-defined Vulkan layers path in Vulkan Configurator#center](./images/load_layers.png "Figure 1: Add Vulkan layer path.") + +5. Ensure the Graph layer is listed *above* the Tensor layer, and that you've set up the configuration scope as shown in the image. + +![Layer configuration showing Graph above Tensor#center](./images/verify_layers.png "Figure 2: Verify layer ordering and scope.") + + +{{% notice %}} +Keep the Vulkan Configurator running to enable the emulation layers during engine execution. +{{% /notice %}} + +## Enable NSS for Unreal Engine + +1. Open Unreal Engine and create a new **Third Person** template project using the **C++** option. + +![Unreal Engine project selection screen showing C++ Third Person template#center](./images/unreal_startup.png "Figure 3: Create a new C++ project in Unreal Engine.") + +2. Open the project in **Visual Studio**. Build it from source through **Build** > **Build Solution** or with `Ctrl+Shift+B`. + +After the build is finished, open your project in Unreal Engine. + +## Change Unreal’s Rendering Interface to Vulkan + +By default, Unreal uses DirectX. Instead, you need to choose Vulkan as the default RHI: +1. Go to: + ``` + Project Settings > Platform > Windows > Targeted RHIs > Default RHI + ``` +2. Select **Vulkan**. +3. Restart Unreal Engine to apply the change. 
+
+![Project Settings with Vulkan selected as Default RHI under Targeted RHIs#center](./images/targeted_rhis.png "Figure 4: Set Vulkan as the default RHI.")
+
+
+## Add and enable the plugins
+
+1. Open your project directory in Windows Explorer, and create a new folder called `Plugins`.
+2. Copy the two extracted plugin folders into the new directory:
+   - `NNERuntimeRDGMLExtensionsForVulkan`
+   - `NSS`
+3. Re-open Unreal Engine. When prompted, confirm plugin integration.
+4. Rebuild your project in Visual Studio from source.
+5. Verify the installation by opening the Plugins view in Unreal Engine, and making sure the checkbox is selected for both `NSS` and `NNERuntimeRDGMLExtensionsForVulkan` as shown. Restart Unreal Engine if prompted.
+
+![Unreal Engine plugins window showing NSS and NNERuntimeRDGMLExtensionsForVulkan enabled#center](./images/verify_plugin_enabled.png "Figure 5: Verify plugin installation in Unreal Engine.")
+
+With the emulation layers and plugins configured, you're ready to run Neural Super Sampling in Unreal Engine. Continue to the next section to test the integration.
\ No newline at end of file
diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/3-run-example.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/3-run-example.md
new file mode 100644
index 0000000000..8aa9c90273
--- /dev/null
+++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/3-run-example.md
@@ -0,0 +1,42 @@
+---
+title: Run the example
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Start the level and verify NSS
+
+Press the green **Play** button to start the level. To verify NSS is running, you can run this console command in Unreal:
+  ```
+  ShowFlag.VisualizeTemporalUpscaler 1
+  ```
+You’ll see **NSS** listed in the rendering summary.
+
+{{% notice %}}
+In **Project Settings > Plugins > Neural Super Sampling**, you can view and configure the active neural network model being used.
{{% /notice %}}
+
+Run `ShowFlag.VisualizeTemporalUpscaler 0` to disable the overlay. To visualize the NSS model output in real time, run the following command:
+  ```
+  r.NSS.Debug 2
+  ```
+
+This will add real-time views showing the model’s processed outputs, such as predicted filter coefficients and feedback, as shown below. In the [Wrapping up section](./6-wrapping-up.md), you will find links to learn more about what the debug outputs mean.
+
+![Debug view of Neural Super Sampling model output in Unreal Engine#center](./images/nss_debug.png "Figure 6: Visualize NSS model debug output in real time.")
+
+## NSS model on Hugging Face
+
+The model that powers NSS is published on Hugging Face in the [VGF format](https://github.com/arm/ai-ml-sdk-vgf-library). This format is optimized for inference via ML extensions for Vulkan.
+
+Visit the [NSS model page on Hugging Face](https://huggingface.co/Arm/neural-super-sampling/).
+
+On this landing page, you can read more about the model and learn how to run a test case - a _scenario_ - using the ML SDK for Vulkan.
+
+## Result
+
+You now have Neural Super Sampling integrated and running inside Unreal Engine. This setup provides a real-time testbed for neural upscaling.
+
+Proceed to the next section to debug your frames using RenderDoc, or move on to the final section to explore more resources on the technology behind NSS.
diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/5-renderdoc.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/5-renderdoc.md
new file mode 100644
index 0000000000..455c98763c
--- /dev/null
+++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/5-renderdoc.md
@@ -0,0 +1,70 @@
+---
+title: Using RenderDoc for debugging and analysis
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Why use RenderDoc with Neural Super Sampling?
+
+As you integrate neural upscaling techniques into your game, visual debugging and performance profiling become essential. RenderDoc is a powerful frame capture and analysis tool that allows you to step through a frame, inspect Vulkan API calls, view shader inputs and outputs, and understand the state of resources. Arm has released additional features for the ML extensions for Vulkan, available in [RenderDoc for Arm GPUs](https://developer.arm.com/Tools%20and%20Software/RenderDoc%20for%20Arm%20GPUs).
+
+You might want to use RenderDoc when:
+
+- You see unexpected visual output or want to step through the frame rendering process.
+- You need to analyze the sequence of Vulkan API calls made by the engine.
+- You’re inspecting memory usage or the state of specific GPU resources.
+- You want to validate your data graph pipeline execution or identify synchronization issues.
+
+## Install Arm Performance Studio
+
+To get RenderDoc for Arm GPUs, which includes the added support for the ML extensions for Vulkan, install Arm Performance Studio. Download it from the [Arm Performance Studio Downloads](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio#Downloads). The minimum version to use is `2025.4`.
+
+Refer to the [Arm Performance Studio install guide](/install-guides/ams) to set it up.
+
+Once the installation finishes, you can launch RenderDoc for Arm GPUs from the Windows **Start** menu.
+
+## Use in Unreal Engine
+
+### 1. Configure the executable path
+
+To enable integration with Unreal Engine:
+
+1. Open your Unreal Engine project.
+2. Go to **Edit > Project Settings > Plugins > RenderDoc**.
+3. Under **Path to RenderDoc executable**, enter the full path to the directory where the `qrenderdoc.exe` binary is located.
+4. Restart Unreal Engine for the setting to take effect.
+
+![RenderDoc plugin path setup in Unreal Engine#center](./images/renderdoc_plugin_ue.png "Figure 7: Set the RenderDoc executable path in Unreal Engine plugin settings.")
+
+
+### 2. Ways to capture
+
+#### Option 1: Attach to the running Editor
+
+1. Launch RenderDoc for Arm GPUs separately.
+2. Go to **File > Attach to Running Instance**.
+3. A list of running Vulkan-enabled applications appears. Select the entry that corresponds to the Unreal Editor session (with UI), or to the standalone running app (see the image below).
+4. Click **Connect to App**.
+5. Click **Capture Frame Immediately**, or adjust the capture settings first if needed.
+
+#### Option 2: Use the plugin inside Unreal Engine
+1. Open your project and the scene where you want to perform a capture.
+2. Click the **RenderDoc Capture** button in the Level Viewport (see the image below).
+
+![RenderDoc capture button in Unreal Engine Level Viewport, or Attach to Running Instance #center](./images/renderdoc.png "Figure 8: Two options to capture frames using RenderDoc with Unreal Engine.")
+
+### 3. Capture a frame
+
+1. Return to Unreal Engine and **Play in Editor** to launch your game level.
+2. In RenderDoc for Arm GPUs, click **Capture Frame Now** (camera icon) or press `F12` while the UE window is focused.
+3. Once captured, double-click the frame in RenderDoc to open a detailed breakdown of the GPU workload.
+
+You can now:
+
+- Step through draw calls and dispatches.
+- Inspect bound resources, descriptor sets, and shaders.
+- Explore the execution of your data graph pipeline frame-by-frame.
+
+If you want to learn more about RenderDoc for Arm GPUs, check out the [Debug With RenderDoc User Guide](https://developer.arm.com/documentation/109669/latest).
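For scripted or repeatable captures, RenderDoc also ships a command-line tool, `renderdoccmd`. The sketch below is illustrative only: `MyGame.exe` is a placeholder, and the exact flags may differ between versions - check `renderdoccmd capture --help` in your installation.

```shell
# Sketch: launch an executable under RenderDoc from the command line.
# MyGame.exe and the flags are assumptions - verify them against
# "renderdoccmd capture --help" for your installed version.
if command -v renderdoccmd >/dev/null 2>&1; then
  renderdoccmd capture --wait-for-exit --capture-file nss_frame.rdc MyGame.exe
else
  echo "renderdoccmd not found on PATH - use the GUI workflow described above"
fi
```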
\ No newline at end of file diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/6-wrapping-up.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/6-wrapping-up.md new file mode 100644 index 0000000000..4aec40d945 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/6-wrapping-up.md @@ -0,0 +1,25 @@ +--- +title: Wrapping up +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +With the NSS for UE plugin, you’re set up to explore real-time neural graphics with Neural Super Sampling. This toolchain gives you direct access to state-of-the-art upscaling powered by machine learning. + +You’ve covered: +- Understanding the role of **ML Extensions for Vulkan** and how emulation layers let you run everything without needing dedicated ML hardware +- Installing the **Vulkan SDK** and enabling ML Emulation Layer for Vulkan using Vulkan Configurator +- Setting up the **NSS for Unreal Engine** plugins, and visualizing the model output +- Inspecting the **NSS model** in VGF on Hugging Face + +This ecosystem is built for developers who want to push boundaries - whether on flagship mobile SoCs or desktop dev kits. NSS is designed to give you better image quality without the complexity of building custom ML infrastructure. + +To learn more about the different aspects in this Learning Path, check out the following resources: +- [Neural Graphics Development Kit landing page](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics-for-mobile) +- [NSS Use Case Guide](https://developer.arm.com/documentation/111009/latest/) +- [Debugging NSS content with RenderDoc](https://developer.arm.com/documentation/109669/latest) +- [Learning Path: Get started with neural graphics using ML Extensions for Vulkan](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample) + +Happy building - and welcome to the future of neural upscaling in Unreal! 
diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_index.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_index.md new file mode 100644 index 0000000000..9669ac579e --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_index.md @@ -0,0 +1,63 @@ +--- +title: Neural Super Sampling in Unreal Engine + +minutes_to_complete: 30 + +who_is_this_for: This is an introductory topic for developers experimenting with neural graphics using Unreal Engine® and ML Extensions for Vulkan®. + + +learning_objectives: + - Understand how Arm enables neural graphics for game development + - Configure ML extensions for Vulkan emulation + - Enable Neural Super Sampling (NSS) in Unreal Engine + - Run and visualize real-time upscaling with NSS + + +prerequisites: + - Windows 11 + - Unreal Engine 5.5 (Templates and Feature Pack enabled) + - Visual Studio 2022 (with Desktop Development with C++ and .NET desktop build tools) + + +author: Annie Tallund + +### Tags +skilllevels: Introductory +subjects: ML +armips: + - Mali +tools_software_languages: + - Unreal Engine + - Vulkan SDK + - Visual Studio +operatingsystems: + - Windows + + + +further_reading: + - resource: + title: Neural Graphics Development Kit + link: https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics-for-mobile + type: website + - resource: + title: NSS Use Case Guide + link: https://developer.arm.com/documentation/111009/latest/ + type: documentation + - resource: + title: RenderDoc for Arm GPUs + link: https://developer.arm.com/Tools%20and%20Software/RenderDoc%20for%20Arm%20GPUs + type: documentation + - resource: + title: How Arm Neural Super Sampling works + link: https://community.arm.com/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works + type: blog + + + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always 
has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_next-steps.md b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. 
+--- diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/add_plugin_folder.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/add_plugin_folder.png new file mode 100644 index 0000000000..3bacc9afc5 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/add_plugin_folder.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/confirm_layers.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/confirm_layers.png new file mode 100644 index 0000000000..86ed7de624 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/confirm_layers.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/load_layers.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/load_layers.png new file mode 100644 index 0000000000..84c51856a9 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/load_layers.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/nss_debug.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/nss_debug.png new file mode 100644 index 0000000000..135a3ed33e Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/nss_debug.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc.png new file mode 100644 index 0000000000..8ddf9a322d Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc_plugin_ue.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc_plugin_ue.png new file mode 
100644 index 0000000000..ec407c7646 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/renderdoc_plugin_ue.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/targeted_rhis.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/targeted_rhis.png new file mode 100644 index 0000000000..16d662ed96 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/targeted_rhis.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/unreal_startup.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/unreal_startup.png new file mode 100644 index 0000000000..6d2acfdce9 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/unreal_startup.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_layers.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_layers.png new file mode 100644 index 0000000000..51e7a45b62 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_layers.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_plugin_enabled.png b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_plugin_enabled.png new file mode 100644 index 0000000000..96d8e1e267 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/nss-unreal/images/verify_plugin_enabled.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/1-introduction.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/1-introduction.md new file mode 100644 index 0000000000..133a0df124 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/1-introduction.md @@ -0,0 +1,41 @@ +--- +title: 
Run neural graphics workloads with ML Extensions for Vulkan +weight: 2 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## What is neural graphics, and why does it matter for real-time rendering? + +Neural graphics combines real-time rendering with the power of machine learning to enhance visual quality and performance. By integrating ML techniques like neural upscaling directly into the GPU pipeline, developers can achieve next-gen fidelity and efficiency. This is especially valuable on mobile and embedded devices, where power efficiency is critical. + +## How do ML Extensions for Vulkan support neural graphics workloads? + +Vulkan's data graph pipelines, introduced through the `VK_ARM_data_graph` and `VK_ARM_tensors` extensions, bring structured compute graph execution to the Vulkan API by introducing support for processing tensors. These pipelines are designed to execute ML inference workloads efficiently using SPIR-V-defined graphs. + +To help developers adopt these features, the Tensor and Data Graph Vulkan Samples offer hands-on demonstrations. + +These samples address key challenges in ML integration, such as: + +- **Understanding graph-based compute with Vulkan**: See how compute workloads can be structured using explicit graph topologies. +- **Demystifying ML inference in real-time rendering**: Learn how ML fits into the graphics pipeline. + +Samples range from basic setups to more advanced features like tensor aliasing and compute shader integration. + +This Learning Path walks you through setting up and running the first sample, `simple_tensor_and_data_graph`. + + +### Why use ML Extensions for Vulkan for game and graphics development? + +As a game developer, you've probably noticed the rising demand for smarter, more immersive graphics — but also the increasing strain on GPU resources, especially on mobile. 
Vulkan's traditional pipelines give you fine-grained control, but finding the right tooling to integrate machine learning has been a challenge. That’s where the new ML extensions for Vulkan come in. + +Arm’s `VK_ARM_tensors` and `VK_ARM_data_graph` extensions give you native Vulkan support for executing neural networks on the GPU — using structured tensors and data graph pipelines. Instead of chaining compute shaders to simulate ML models, you can now express them as dataflow graphs in SPIR-V and run them more efficiently. This opens the door to using AI techniques right alongside the graphics pipeline. + +And while ML has found success in image classification and LLMs, these extensions are designed from the ground up for gaming and graphics workloads — prioritizing predictable execution, GPU compatibility, and memory efficiency. With built-in support for tensor formats and pipeline sessions, the extensions are optimized for developers looking to blend traditional rendering with machine learning on Vulkan. + +Arm provides emulation layers for development on any modern Vulkan-capable hardware, and PyTorch support is available for model conversion workflows. + +For an example of real-time upscaling, see the Learning Path [**Neural Super Sampling with Unreal Engine**](/learning-paths/mobile-graphics-and-gaming/nss-unreal/). + +With the Vulkan Samples, you can experiment directly with these ideas. Move on to the next section to set up your machine for running the samples. 
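Once your environment is set up (covered in the next sections), a quick way to check whether these extensions are visible is `vulkaninfo`, which ships with the Vulkan SDK. The exact output format varies by SDK version, so treat this as a sketch.

```shell
# Sketch: check whether the Vulkan instance reports the ML extensions.
# Requires the Vulkan SDK (for vulkaninfo) and, on most GPUs, the
# active emulation layers described in the next section.
if command -v vulkaninfo >/dev/null 2>&1; then
  vulkaninfo 2>/dev/null | grep -E "VK_ARM_(tensors|data_graph)" \
    || echo "ML extensions not reported - check that the emulation layers are active"
else
  echo "vulkaninfo not found - install the Vulkan SDK first"
fi
```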
diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/2-ml-ext-for-vulkan.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/2-ml-ext-for-vulkan.md
new file mode 100644
index 0000000000..ac2f4cd193
--- /dev/null
+++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/2-ml-ext-for-vulkan.md
@@ -0,0 +1,71 @@
+---
+title: Setting up the ML Emulation Layers for Vulkan
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Overview
+
+To run the Vulkan Samples, you first need to set up your development environment.
+
+This setup involves two main steps:
+
+* Install the required tools on your development machine
+* Download the ML emulation layers for Vulkan, which simulate the `VK_ARM_data_graph` and `VK_ARM_tensors` extensions
+
+## Install required tools for development
+
+Before building and running the samples, ensure the following tools are installed on your development machine:
+
+- CMake (version 3.12 or later)
+- Python 3
+- Git
+
+To verify your installation, run the following commands:
+
+```bash
+cmake --version
+python3 --version
+git --version
+```
+
+Each command should print the installed version of the tool.
+
+### Install Vulkan Software Development Kit
+
+Go to [Getting Started with the Windows Vulkan SDK](https://vulkan.lunarg.com/sdk/home) and download the SDK Installer for Windows. This installs **Vulkan Configurator**, which is used to enable the emulation layers.
+
+{{% notice Note %}}
+You must use Vulkan SDK version 1.4.321 or later.
+{{% /notice %}}
+
+## Download the emulation layers
+
+For this Learning Path, a pre-built package of the emulation layers is available. Download it using the link below.
+
+[ML Emulation Layer for Vulkan](https://www.arm.com/-/media/Files/developer/MLEmulationLayerForVulkan)
+
+Extract the downloaded file to a location of your choice. You’re now ready to enable the emulation layers in Vulkan Configurator.
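Before opening Vulkan Configurator, you can confirm the extraction by listing the layer manifest files it will discover; Vulkan layers are typically described by `.json` manifests that the Vulkan loader reads. The path below is an assumption - use the location you extracted to.

```shell
# Sketch: list the Vulkan layer manifests in the extracted package.
# LAYER_DIR is an assumed path - point it at your extraction location.
LAYER_DIR="$HOME/MLEmulationLayerForVulkan"
if ls "$LAYER_DIR"/*.json >/dev/null 2>&1; then
  ls "$LAYER_DIR"/*.json
else
  echo "No layer manifests found in $LAYER_DIR - check the extraction path"
fi
```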
+ +## Enable the emulation layers in Vulkan Configurator + +Next, enable the emulation layers using the Vulkan Configurator to simulate the `VK_ARM_data_graph` and `VK_ARM_tensors` extensions. Open **Vulkan Configurator**. + +Under the **Vulkan Layers Location** tab, add the path to your `MLEmulationLayerForVulkan` folder. + +In the **Apply a Vulkan Loader Configuration** list, right-click and choose **Create a new Configuration**. You can give the new configuration any name, for example `tensor_and_data_graph`. + +![Screenshot of the Vulkan Configurator showing the Vulkan Layers Location tab, where the emulation layer path (MLEmulationLayerForVulkan) is added to enable VK_ARM_data_graph and VK_ARM_tensors alt-text#center](./images/load_layers.png "Add emulation layers in Vulkan Configurator") + +Ensure that the **Graph** layer is listed above the **Tensor** layer. + +![Screenshot showing the Graph layer listed above the Tensor layer in the Vulkan Configurator. alt-text#center](./images/verify_layers.png "Reorder layers in Vulkan Configurator") + +{{% notice Important %}} +Keep Vulkan Configurator running while you run the Vulkan samples. +{{% /notice %}} + +With the emulation layers configured, you're ready to build the Vulkan Samples. Continue to the next section to get started. + diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/3-first-sample.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/3-first-sample.md new file mode 100644 index 0000000000..697430837c --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/3-first-sample.md @@ -0,0 +1,68 @@ +--- +title: Simple Tensor and Data Graph +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Understand how the Simple Tensor and Data Graph sample works + +The **Simple Tensor and Data Graph** sample is your starting point for working with the ML extensions for Vulkan. 
It demonstrates how to execute a simple neural network using a data graph pipeline — specifically, a 2D average pooling operation. + +## Clone the Vulkan Samples + +With the environment set up, you can now grab the sample code. These examples are maintained in a fork of the Khronos Group's repository. + +```bash +git clone --recurse-submodules https://github.com/ARM-software/Vulkan-Samples --branch tensor_and_data_graph +cd Vulkan-Samples +``` + +This repository includes the framework and samples showcasing the ML extensions for Vulkan. + +## Build the Vulkan Samples + +You're now ready to compile the project. From the root of the repository: + +{{% notice Note %}} +Be sure to run the commands in [Developer Mode](https://learn.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development#activate-developer-mode) to avoid permission issues. +{{% /notice %}} + +Generate Visual Studio project files using CMake: +```bash +cmake -G "Visual Studio 17 2022" -A x64 -S . -Bbuild/windows +``` +Finally, compile the `vulkan_samples` target in Release mode: + +```bash +cmake --build build/windows --config Release --target vulkan_samples +``` +## Run the Simple Tensor and Data Graph sample + +Run the built executable using the following command: + +```bash +build\windows\app\bin\Release\AMD64\vulkan_samples.exe sample simple_tensor_and_data_graph +``` + +This should open a new window visualizing the operation. In this sample, a minimal Vulkan application sets up a data graph pipeline configured to process a small neural network. + +The sample creates input and output tensors, binds them using descriptor sets and pipeline layouts, and supplies a SPIR-V module that defines the network operation. Finally, it records and dispatches commands to execute the pipeline — and visualizes the results in real time. 
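For intuition about the operation this sample runs: 2D average pooling replaces each window of the input tensor with the mean of its values. The worked example below assumes a 2x2 window with stride 2 (the sample's exact parameters may differ - see its documentation):

```
Input (4x4):        2x2 window means:         Output (2x2):

  1 3 | 2 2         mean(1,3,5,7) = 4
  5 7 | 6 6         mean(2,2,6,6) = 4           4 4
  ----+----    ->   mean(0,2,4,6) = 3    ->     3 4
  0 2 | 1 3         mean(1,3,5,7) = 4
  4 6 | 5 7
```

The output has half the spatial resolution of the input in each dimension, which is why pooling is a cheap way to downsample feature maps inside a network.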
You can find more detail about what's happening under the hood in the [documentation](https://arm-software.github.io/Vulkan-Samples/samples/extensions/tensor_and_data_graph/simple_tensor_and_data_graph/README.html). + +## Summary and next steps + +By running this sample, you’ve stepped through a complete Vulkan data graph pipeline powered by the ML extensions for Vulkan. You’ve created tensors, set up descriptors, built a SPIR-V-encoded ML graph, and dispatched inference — all without needing custom shaders. This sets the foundation for neural graphics. + +As a next step, explore the remaining samples for the data graph pipeline and see how this core pattern extends into real-world graphics scenarios. Each sample's documentation is located in its own directory under `samples/extensions/tensor_and_data_graph/` in the repository. + +## Overview of additional samples + +| Sample Name | Description | Focus Area | +|-------------------------------------|----------------------------------------------------------------------------------------------|------------------------------------------| +| **Graph Constants** | Shows how to include constants like weights and biases in the data graph pipeline using tensors| Constant tensor injection | +| **Compute Shaders with Tensors** | Demonstrates how to feed tensor data into or out of data graph pipelines using compute shaders | Shader interoperability | +| **Tensor Image Aliasing** | Demonstrates tensor aliasing with Vulkan images to enable zero-copy workflows | Memory-efficient data sharing | +| **Postprocessing with VGF** | Explores using the VGF format, which contains the SPIR-V, input, output, and constant data used to run a data graph pipeline. | Neural network model | + +Next, you'll review additional tools to help you work with the ML extensions for Vulkan in your own development environment. 
\ No newline at end of file diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/4-scenario-runner.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/4-scenario-runner.md new file mode 100644 index 0000000000..f147f290cf --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/4-scenario-runner.md @@ -0,0 +1,35 @@ +--- +title: Running a test with the Scenario Runner +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Overview + +In this section, you’ll explore how to run a complete inference test using the **Scenario Runner** from Arm’s ML SDK for Vulkan. You’ll also learn what’s provided on Arm’s Hugging Face page, including downloadable binaries and assets that demonstrate the ML extensions for Vulkan in action. + +## About the ML SDK for Vulkan + +The SDK provides a collection of tools and runtime components that help you integrate neural networks into Vulkan-based applications. While the ML extensions for Vulkan (`VK_ARM_data_graph` and `VK_ARM_tensors`) define the runtime interface, the SDK provides a practical workflow for converting, packaging, and deploying ML models in real-time applications such as games. + +### SDK Component Summary + +| Component | Description | Usage Context | GitHub link +|------------------|------------------------------------------------------------------------------------------------------|-------------------------------------|--------------| +| **Model Converter** | Converts TOSA IR into SPIR-V graphs and packages them into `.vgf` files for runtime execution. | Used in asset pipelines for model deployment | https://github.com/arm/ai-ml-sdk-model-converter | +| **VGF Library** | Lightweight runtime decoder for `.vgf` files containing graphs, constants, and shaders. 
| Integrate into a game engine to load and use graphs | https://github.com/arm/ai-ml-sdk-vgf-library | +| **Scenario Runner** | Executes ML workloads declaratively using JSON-based scenario descriptions. | Ideal for rapid prototyping and validation | https://github.com/arm/ai-ml-sdk-scenario-runner | +| **Emulation Layer** | Vulkan layer that emulates the data graph and tensor extensions using compute shaders. | For testing on devices without native support for the ML extensions for Vulkan | https://github.com/arm/ai-ml-emulation-layer-for-vulkan | + + +## About the Hugging Face release + +Visit the [NSS model page on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). + +The landing page contains a minimal example - a _scenario_ - to run NSS with an actual frame. The example includes a Windows-compatible Scenario Runner binary, the VGF model, and a single frame of input data with the expected output. This allows you to run an end-to-end flow, and the landing page provides resources to explore the VGF model in more detail. + +## Next steps + +In the following section, you’ll explore how to debug and inspect the workloads in this Learning Path using RenderDoc. diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/5-renderdoc.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/5-renderdoc.md new file mode 100644 index 0000000000..569e22e741 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/5-renderdoc.md @@ -0,0 +1,66 @@ +--- +title: Use RenderDoc to debug and analyze workloads +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Debug and profile workloads with RenderDoc + + +Integrating machine learning into real-time rendering makes frame-level inspection and performance analysis critical. RenderDoc helps you visualize and debug these workloads by letting you step through frames, examine tensors, and inspect Vulkan API calls. 
+ +RenderDoc is a powerful GPU frame capture tool that lets you: + +- Step through a frame’s rendering process +- Inspect Vulkan API calls +- View shader inputs and outputs +- Examine GPU resource states and memory usage + +## When to use RenderDoc with the samples + +RenderDoc can help in scenarios such as: + +- Diagnosing unexpected visual output by stepping through draw calls +- Analyzing the order and behavior of Vulkan API calls +- Investigating memory consumption or GPU resource state +- Validating execution of data graph pipelines or identifying sync issues + +## Install Arm Performance Studio (includes RenderDoc) + +To use RenderDoc with the ML extensions for Vulkan, install the Arm-customized version via Performance Studio: + +1. **Download Arm Performance Studio** from the [Arm Developer website](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio#Downloads). The minimum version to use is `2025.4`. +2. Run the installer: + `Arm_Performance_Studio__windows_x86-64.exe` +3. Follow the installation instructions. + +Once installed, launch RenderDoc for Arm GPUs via the Windows Start menu. + +## Capture Vulkan frames with RenderDoc + +You can capture and inspect Vulkan Samples that use the ML extensions for Vulkan, including API calls such as `vkCreateTensorARM` and structures like `VK_STRUCTURE_TYPE_TENSOR_DESCRIPTION_ARM`. + +RenderDoc is especially useful for visualizing tensor operations, inspecting resource bindings, and verifying correct data graph pipeline execution. + +To capture a frame: + +1. **Open RenderDoc**, and in the main window, go to the **Launch Application** section. +2. Configure the following fields: + - **Executable Path**: Path to the built executable `vulkan_samples.exe`. + - **Working Directory**: Path to the root of the Vulkan Samples project. + - **Command-line Arguments**: + ``` + sample simple_tensor_and_data_graph + ``` + You can substitute `simple_tensor_and_data_graph` with any of the other sample names as needed. 
+3. Click **Launch**. The selected sample will start running. +4. Once the application window is active, press **F12** to capture a frame. +5. After the frame is captured, it will appear in RenderDoc’s capture list. Double-click it to explore the captured frame and inspect the ML extensions for Vulkan calls in detail. + +## Learn more + +This workflow enables close inspection of how ML graphs are built and executed within Vulkan, making RenderDoc an essential tool when optimizing pipelines or debugging integration issues. To learn more about RenderDoc for Arm GPUs, see the [Debug With RenderDoc User Guide](https://developer.arm.com/documentation/109669/latest). + +Move on to the next section to review further resources on what's new and what's coming. \ No newline at end of file diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/6-wrapping-up.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/6-wrapping-up.md new file mode 100644 index 0000000000..15b64371cb --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/6-wrapping-up.md @@ -0,0 +1,29 @@ +--- +title: Wrapping up +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## What you’ve learned and what’s next + +With these tools and samples, you're ready to explore and experiment with neural graphics on Vulkan. Whether you're integrating neural super sampling into a full-scale game or learning to build ML pipelines from scratch, the Neural Graphics Development Kit provides practical, extensible building blocks for real-time workloads. 
+ + +In this Learning Path, you’ve: + +- Explored the **Vulkan Samples** to understand data graph pipeline structure +- Reviewed ML integration workflows using the **ML SDK for Vulkan** and the **VGF** format +- Debugged and analyzed the workloads using **RenderDoc** + +These components are designed to accelerate your development, provide insight into neural upscaling pipelines, and support experimentation with cutting-edge GPU features. + +Explore more: + +- [Neural Graphics Development Kit landing page](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics-for-mobile) - overview of Arm's tools for neural graphics +- [Vulkan Samples](https://github.com/ARM-software/Vulkan-Samples) - demos for the ML Extensions for Vulkan +- [Building for Tomorrow: Try Arm Neural Super-Sampling Today with ML Extensions for Vulkan and Unreal](https://community.arm.com/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-to-access-arm-neural-super-sampling) - getting started with the NSS use-case +- [How Arm Neural Super Sampling Works](https://community.arm.com/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works) - deep dive into the NSS use-case + +Happy coding, and welcome to the future of real-time neural graphics! diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_index.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_index.md new file mode 100644 index 0000000000..35ada0f44f --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_index.md @@ -0,0 +1,65 @@ +--- +title: Get started with neural graphics using ML Extensions for Vulkan® + +minutes_to_complete: 30 + +who_is_this_for: This is an advanced topic for engine developers interested in learning about neural graphics using ML Extensions for Vulkan. 
+ +learning_objectives: + - Explain the purpose of neural graphics and the role of ML Extensions for Vulkan + - Set up the ML Emulation Layers for Vulkan to enable the extensions + - Run a sample Vulkan application that uses the extensions + - Debug the flow using RenderDoc + +prerequisites: + - Windows 11 development machine + - Visual Studio 2022 + - Visual Studio workload - Desktop development with C++ + - Visual Studio workload - .NET desktop build tools + + + +author: Annie Tallund + +### Tags +skilllevels: Advanced +subjects: ML +armips: + - Mali +tools_software_languages: + - Vulkan + - RenderDoc +operatingsystems: + - Windows + + +further_reading: + - resource: + title: Neural Graphics Development Kit + link: https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics-for-mobile + type: website + - resource: + title: ML SDK for Vulkan + link: https://github.com/arm/ai-ml-sdk-for-vulkan + type: website + - resource: + title: Vulkan Samples + link: https://github.com/ARM-software/Vulkan-Samples + type: website + - resource: + title: RenderDoc for Arm GPUs + link: https://developer.arm.com/Tools%20and%20Software/RenderDoc%20for%20Arm%20GPUs + type: documentation + - resource: + title: How Arm Neural Super Sampling works + link: https://community.arm.com/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works + type: blog + + + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. 
+--- diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_next-steps.md b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/load_layers.png b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/load_layers.png new file mode 100644 index 0000000000..84c51856a9 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/load_layers.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/verify_layers.png b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/verify_layers.png new file mode 100644 index 0000000000..51e7a45b62 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/images/verify_layers.png differ