diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md
index 555322717a..24838dc4b0 100644
--- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md
+++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md
@@ -1,34 +1,44 @@
 ---
-title: Install Model Gym and Explore Neural Graphics Examples
+title: Install Model Gym and explore neural graphics examples
 weight: 2
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## What is Neural Graphics?
+## What is neural graphics?
 
-Neural graphics is an intersection of graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. The techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. The goal is to deliver high visual fidelity without increasing GPU cost. This is achieved by training and deploying compact neural networks optimized for the device's hardware.
+Neural graphics sits at the intersection of graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. These techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. Your goal is to deliver high visual fidelity without increasing GPU cost. You achieve this by training and deploying compact neural networks optimized for your device's hardware.
 
 ## How does Arm support neural graphics?
-Arm enables neural graphics through the [**Neural Graphics Development Kit**](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics): a set of open-source tools that let developers train, evaluate, and deploy ML models for graphics workloads.
+
+Arm enables neural graphics through the [**Neural Graphics Development Kit**](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics): a set of open-source tools that let you train, evaluate, and deploy ML models for graphics workloads.
+
 At its core are the ML Extensions for Vulkan, which bring native ML inference into the GPU pipeline using structured compute graphs. These extensions (`VK_ARM_tensors` and `VK_ARM_data_graph`) allow real-time upscaling and similar effects to run efficiently alongside rendering tasks.
 
-The neural graphics models can be developed using well-known ML frameworks like PyTorch, and exported to deployment using Arm's hardware-aware pipeline. The workflow converts the model to `.vgf` via the TOSA intermediate representation, making it possible to do tailored model development for you game use-case. This Learning Path focuses on **Neural Super Sampling (NSS)** as the use case for training, evaluating, and deploying neural models using a toolkit called the [**Neural Graphics Model Gym**](https://github.com/arm/neural-graphics-model-gym). To learn more about NSS, you can check out the [resources on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). Additonally, Arm has developed a set of Vulkan Samples to get started. Specifically, `.vgf` format is introduced in the `postprocessing_with_vgf` one. 
The Vulkan Samples and over-all developer resources for neural graphics is covered in the [introductory Learning Path](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample).
-Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. To help developers get started early, Arm provides the ML Emulation Layers for Vulkan that simulate future hardware behavior, so you can build and test models now.
+
+You can develop neural graphics models using well-known ML frameworks like PyTorch, then export them for deployment with Arm's hardware-aware pipeline. The workflow converts your model to `.vgf` using the TOSA intermediate representation, making it possible to tailor model development for your game use case. In this Learning Path, you will focus on **Neural Super Sampling (NSS)** as the primary example for training, evaluating, and deploying neural models using the [**Neural Graphics Model Gym**](https://github.com/arm/neural-graphics-model-gym). To learn more about NSS, see the [resources on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). Arm has also developed a set of Vulkan Samples to help you get started. The `.vgf` format is introduced in the `postprocessing_with_vgf` sample. For a broader overview of neural graphics developer resources, including the Vulkan Samples, see the introductory Learning Path [Get started with neural graphics using ML Extensions for Vulkan](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/).
+
+Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. To help you get started early, Arm provides the ML Emulation Layers for Vulkan, which simulate future hardware behavior so you can build and test models now.
 
 ## What is the Neural Graphics Model Gym?
+
 The Neural Graphics Model Gym is an open-source toolkit for fine-tuning and exporting neural graphics models. It is designed to streamline the entire model lifecycle for graphics-focused use cases, like NSS.
 
-Model Gym gives you:
+With Model Gym, you can:
+
+- Train and evaluate models using a PyTorch-based API
+- Export models to `.vgf` using ExecuTorch for real-time use in game development
+- Take advantage of quantization-aware training (QAT) and post-training quantization (PTQ) with ExecuTorch
+- Use an optional Docker setup for reproducibility
+
+You can choose to work with Python notebooks for rapid experimentation or use the command-line interface for automation. This Learning Path walks you through the example notebooks and prepares you to start using the CLI for your own model development.
-- A training and evaluation API built on PyTorch
-- Model export to .vgf using ExecuTorch for real-time use in game development
-- Support for quantization-aware training (QAT) and post-training quantization (PTQ) using ExecuTorch
-- Optional Docker setup for reproducibility
-The toolkit supports workflows via both Python notebooks (for rapid experimentation) and command-line interface. This Learning Path will walk you through the demonstrative notebooks, and prepare you to start using the CLI for your own model development.
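+
+To make this concrete, the sketch below shows the general shape of a compact upscaling network in PyTorch. This toy model is an illustration only - it is not the NSS architecture, and the real model, training loop, and export pipeline come from `ng-model-gym` later in this Learning Path:
+
+```python
+import torch
+import torch.nn as nn
+
+# A toy stand-in for a neural upscaler: two convolutions followed by
+# PixelShuffle, which rearranges channels into a frame twice the size.
+# This is NOT the NSS model - only an illustration of the idea.
+class TinyUpscaler(nn.Module):
+    def __init__(self, scale: int = 2):
+        super().__init__()
+        self.body = nn.Sequential(
+            nn.Conv2d(3, 32, kernel_size=3, padding=1),
+            nn.ReLU(),
+            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
+            nn.PixelShuffle(scale),
+        )
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        return self.body(x)
+
+low_res = torch.rand(1, 3, 540, 960)   # one 960x540 RGB frame
+high_res = TinyUpscaler()(low_res)
+print(high_res.shape)                  # torch.Size([1, 3, 1080, 1920])
+```
+
+Production models are larger and carefully optimized, but the principle is the same: a compact network whose per-frame inference cost is lower than the cost of shading the extra pixels.
+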
+You’re now ready to set up your environment and start working with neural graphics models. Keep going!
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md
index 013148bcaa..3e679dbc31 100644
--- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md
+++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md
@@ -6,7 +6,11 @@ weight: 3
 layout: learningpathall
 ---
 
-In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies. Make sure Python is installed by verifying that the version is >3.10:
+## Overview
+
+In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies.
+
+Start by making sure Python is installed and that the version is 3.10 or later:
 
 ```bash
 python3 --version
 ```
@@ -34,10 +38,10 @@ From inside the `neural-graphics-model-gym-examples/` folder, run the setup scri
 ./setup.sh
 ```
 
-This will:
-- create a Python virtual environment called `nb-env`
-- install the `ng-model-gym` package and required dependencies
-- download the datasets and weights needed to run the notebooks
+This will do the following:
+- Create a Python virtual environment called `nb-env`
+- Install the `ng-model-gym` package and required dependencies
+- Download the datasets and weights needed to run the notebooks
 
 Activate the virtual environment:
 
@@ -55,4 +59,5 @@ print("Torch version:", torch.__version__)
 print("Model Gym version:", ng_model_gym.__version__)
 ```
 
-You’re now ready to start walking through the training and evaluation steps.
+You’ve completed your environment setup - great work! You’re now ready to start walking through the training and evaluation steps.
+
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md
index 37deaf987c..7714c0202b 100644
--- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md
+++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md
@@ -5,19 +5,18 @@ weight: 4
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+## About NSS
 
-In this section, you'll get hands-on with how you can use the model gym to fine-tune the NSS use-case.
+In this section, you'll get hands-on experience using the Model Gym to fine-tune the NSS use case.
 
-## About NSS
-
 Arm Neural Super Sampling (NSS) is an upscaling technique designed to solve a growing challenge in real-time graphics: delivering high visual quality without compromising performance or battery life. Instead of rendering every pixel at full resolution, NSS uses a neural network to intelligently upscale frames, freeing up GPU resources and enabling smoother, more immersive experiences on mobile devices.
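+
+To see why rendering fewer pixels and upscaling saves GPU work, here is a back-of-the-envelope calculation. The 2x-per-axis scale factor is an assumption for illustration; actual NSS configurations may differ:
+
+```python
+# Pixels shaded when rendering natively at 1080p versus rendering at
+# quarter resolution and letting an upscaler reconstruct the rest.
+native = 1920 * 1080    # pixels shaded at full resolution
+rendered = 960 * 540    # pixels actually shaded before upscaling
+
+print(f"Shaded pixels: {rendered:,} instead of {native:,}")
+print(f"Fraction of native shading work: {rendered / native:.0%}")  # 25%
+```
+
+The remaining pixels are predicted by the network, so the saving only materializes if the model's inference cost stays well below the shading cost it replaces.
+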
-The NSS model is available in two formats:
+The NSS model is available in two formats, as shown in the table below:
 
 | Model format | File extension | Used for                                                                  |
 |--------------|----------------|--------------------------------------------------------------------------|
-| PyTorch      | .pt            | training, fine-tuning, or evaluation in or scripts using the Model Gym   |
-| VGF          | .vgf           | for deployment using ML Extensions for Vulkan on Arm-based hardware or emulation layers |
+| PyTorch      | `.pt`          | training, fine-tuning, or evaluation in notebooks or scripts using the Model Gym |
+| VGF          | `.vgf`         | deployment using ML Extensions for Vulkan on Arm-based hardware or emulation layers |
 
-Both formats are available in the [NSS repository on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). You'll also be able to explore config files, model metadata, usage details and detailed documentation on the use-case.
+Both formats are available in the [NSS repository on Hugging Face](https://huggingface.co/Arm/neural-super-sampling), where you can also explore config files, model metadata, usage details, and documentation for the use case.
 
@@ -62,6 +61,8 @@ neural-graphics-model-gym-examples/tutorials/nss/model_evaluation_example.ipynb
 
 At the end you should see a visual comparison of the NSS upscaling and the ground truth image.
 
-Proceed to the final section to view the model structure and explore further resources.
+
+You’ve completed the training and evaluation steps. Proceed to the final section to view the model structure and explore further resources.
+
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md
index 7d3e720066..998c8e71a6 100644
--- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md
+++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md
@@ -12,19 +12,21 @@ Model Explorer is a visualization tool for inspecting neural network structures
 
 This lets you inspect model architecture, tensor shapes, and graph connectivity before deployment. This can be a powerful way to debug and understand your exported neural graphics models.
 
-## Setting up the VGF adapter
+## Set up the VGF adapter
 
 The VGF adapter extends Model Explorer to support `.vgf` files exported from the Model Gym toolchain.
 
-### Install the VGF adapter with pip
+## Install the VGF adapter with pip
+
+Run the following command:
 
 ```bash
 pip install vgf-adapter-model-explorer
 ```
 
-The source code is available on [GitHub](https://github.com/arm/vgf-adapter-model-explorer).
+The source code for the VGF adapter is available on [GitHub](https://github.com/arm/vgf-adapter-model-explorer).
 
-### Install Model Explorer
+## Install Model Explorer
 
 The next step is to make sure the Model Explorer itself is installed. Use pip to set it up:
 
@@ -32,7 +34,7 @@ The next step is to make sure the Model Explorer itself is installed. Use pip to
 pip install torch ai-edge-model-explorer
 ```
 
-### Launch the viewer
+## Launch the viewer
 
 Once installed, launch the explorer with the VGF adapter:
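+
+For reference, a typical invocation looks like the following. The `--extensions` flag is Model Explorer's mechanism for loading third-party adapters; the module name below is inferred from the pip package name, so treat it as an assumption and check the [VGF adapter README](https://github.com/arm/vgf-adapter-model-explorer) for the exact command:
+
+```bash
+# Launch Model Explorer with the VGF adapter loaded.
+# Assumption: the extension module is named after the pip package.
+model-explorer --extensions=vgf_adapter_model_explorer
+```
+
+Model Explorer runs as a local web app and opens in your browser.
+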
@@ -44,6 +46,4 @@ Use the file browser to open the `.vgf` model exported earlier in your training
 
 ## Wrapping up
 
-Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (.vgf) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer.
-
-As a next step, you can head over to the [Model Training Gym repository](https://github.com/arm/neural-graphics-model-gym/tree/main) documentation to explore integration into your own game development workflow. You’ll find resources on fine-tuning, deeper details about the training and export process, and everything you need to adapt to your own content and workflows.
\ No newline at end of file
+Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (.vgf) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer. As a next step, explore the [Model Training Gym repository](https://github.com/arm/neural-graphics-model-gym/tree/main) documentation to integrate these workflows into your own game development projects and keep building your skills.
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md
index 0c809ef4ef..e106b163bc 100644
--- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md
+++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md
@@ -1,10 +1,6 @@
 ---
-title: Fine-Tuning Neural Graphics Models with Model Gym
-
-draft: true
-cascade:
-  draft: true
-
+title: Fine-tuning neural graphics models with Model Gym
+
 minutes_to_complete: 45
 
 who_is_this_for: This is an advanced topic for developers exploring neural graphics and interested in training and deploying upscaling models like Neural Super Sampling (NSS) using PyTorch and Arm’s hardware-aware backend.
@@ -50,10 +46,15 @@ further_reading:
         title: NSS on HuggingFace
         link: https://huggingface.co/Arm/neural-super-sampling
         type: website
+    - resource:
+        title: Vulkan ML Sample Learning Path
+        link: /learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/
+        type: learningpath
 
 ### FIXED, DO NOT MODIFY
-weight: 1
-layout: "learningpathall"
-learning_path_main_page: "yes"
+# ================================================================================
+weight: 1                       # _index.md always has weight of 1 to order correctly
+layout: "learningpathall"       # All files under learning paths have this same wrapper
+learning_path_main_page: "yes"  # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
 ---