---
title: Install Model Gym and explore neural graphics examples
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## What is neural graphics?

Neural graphics is an intersection of graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. These techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. Your goal is to deliver high visual fidelity without increasing GPU cost. You achieve this by training and deploying compact neural networks optimized for your device's hardware.

## How does Arm support neural graphics?

Arm enables neural graphics through the [**Neural Graphics Development Kit**](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics): a set of open-source tools that let you train, evaluate, and deploy ML models for graphics workloads.
At its core are the ML Extensions for Vulkan, which bring native ML inference into the GPU pipeline using structured compute graphs. These extensions (`VK_ARM_tensors` and `VK_ARM_data_graph`) allow real-time upscaling and similar effects to run efficiently alongside rendering tasks.

You can develop neural graphics models using well-known ML frameworks like PyTorch, then export them for deployment with Arm's hardware-aware pipeline. The workflow converts your model to `.vgf` using the TOSA intermediate representation, making it possible to tailor model development for your game use case. In this Learning Path, you will focus on **Neural Super Sampling (NSS)** as the primary example for training, evaluating, and deploying neural models using the [**Neural Graphics Model Gym**](https://github.com/arm/neural-graphics-model-gym). To learn more about NSS, see the [resources on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). Arm has also developed a set of Vulkan Samples to help you get started. The `.vgf` format is introduced in the `postprocessing_with_vgf` sample. For a broader overview of neural graphics developer resources, including the Vulkan Samples, see the introductory Learning Path [Get started with neural graphics using ML Extensions for Vulkan](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/).

Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. To help you get started early, Arm provides the ML Emulation Layers for Vulkan that simulate future hardware behavior, so you can build and test models now.

## What is the Neural Graphics Model Gym?


The Neural Graphics Model Gym is an open-source toolkit for fine-tuning and exporting neural graphics models. It is designed to streamline the entire model lifecycle for graphics-focused use cases, like NSS.

With Model Gym, you can:

- Train and evaluate models using a PyTorch-based API
- Export models to `.vgf` using ExecuTorch for real-time use in game development
- Take advantage of quantization-aware training (QAT) and post-training quantization (PTQ) with ExecuTorch
- Use an optional Docker setup for reproducibility

You can choose to work with Python notebooks for rapid experimentation or use the command-line interface for automation. This Learning Path walks you through the demonstration notebooks and prepares you to start using the CLI for your own model development.

You're now ready to set up your environment and start working with neural graphics models. Keep going!
layout: learningpathall
---

## Overview

In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies.

Start by making sure Python is installed and that the version is 3.10 or later:

```bash
python3 --version
```

From inside the `neural-graphics-model-gym-examples/` folder, run the setup script:

```bash
./setup.sh
```

This will do the following:
- Create a Python virtual environment called `nb-env`
- Install the `ng-model-gym` package and required dependencies
- Download the datasets and weights needed to run the notebooks
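If you prefer to see the moving parts, the script's effect is roughly equivalent to the following manual steps. This is a sketch based on the list above; the real `setup.sh` may differ in details such as where the datasets and weights come from, so run the script itself for the complete setup:

```shell
# Create the virtual environment the notebooks expect
python3 -m venv nb-env

# Activate it for the current shell session
source nb-env/bin/activate

# Install the toolkit and its dependencies
pip install ng-model-gym

# setup.sh additionally downloads the datasets and pretrained
# weights used by the notebooks; this sketch does not cover that.
```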

Activate the virtual environment:

```bash
source nb-env/bin/activate
```

Verify the installation from Python:

```python
import torch
import ng_model_gym

print("Torch version:", torch.__version__)
print("Model Gym version:", ng_model_gym.__version__)
```

You’ve completed your environment setup. Great work! You’re now ready to start walking through the training and evaluation steps.

weight: 4
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## About NSS

In this section, you'll get hands-on experience using the Model Gym to fine-tune the NSS use case.


Arm Neural Super Sampling (NSS) is an upscaling technique designed to solve a growing challenge in real-time graphics: delivering high visual quality without compromising performance or battery life. Instead of rendering every pixel at full resolution, NSS uses a neural network to intelligently upscale frames, freeing up GPU resources and enabling smoother, more immersive experiences on mobile devices.

The NSS model is available in two formats, as shown in the table below:

| Model format | File extension | Used for |
|--------------|----------------|--------------------------------------------------------------------------|
| PyTorch | `.pt` | Training, fine-tuning, or evaluation in notebooks or scripts using the Model Gym |
| VGF | `.vgf` | Deployment using ML Extensions for Vulkan on Arm-based hardware or emulation layers |

Both formats are available in the [NSS repository on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). You can also explore config files, model metadata, usage details, and detailed documentation for the use case.
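To pull those files down for local inspection, one option is the Hugging Face CLI (this assumes the `huggingface_hub` package is available in your environment; cloning the repository with `git` works as well):

```shell
# Install the Hugging Face hub client, which provides huggingface-cli
pip install huggingface_hub

# Download the NSS repository contents (weights, configs, metadata)
huggingface-cli download Arm/neural-super-sampling
```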

`neural-graphics-model-gym-examples/tutorials/nss/model_evaluation_example.ipynb`

At the end, you should see a visual comparison of the NSS upscaling and the ground truth image.


You’ve completed the training and evaluation steps. Proceed to the final section to view the model structure and explore further resources.



Model Explorer is a visualization tool for inspecting neural network structures.

This lets you inspect model architecture, tensor shapes, and graph connectivity before deployment, which is a powerful way to debug and understand your exported neural graphics models.

## Set up the VGF adapter

The VGF adapter extends Model Explorer to support `.vgf` files exported from the Model Gym toolchain.

## Install the VGF adapter with pip

Run:

```bash
pip install vgf-adapter-model-explorer
```

The source code for the VGF adapter is available on [GitHub](https://github.com/arm/vgf-adapter-model-explorer).

## Install Model Explorer

Next, make sure Model Explorer itself is installed. Use pip to set it up:

```bash
pip install torch ai-edge-model-explorer
```

## Launch the viewer

Once installed, launch the explorer with the VGF adapter:
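One common way to launch Model Explorer with a custom adapter is its `--extensions` flag. Assuming the adapter's module name matches its package name, the command likely looks like this (check the adapter's README to confirm):

```shell
# Launch Model Explorer with the VGF adapter registered as an extension
model-explorer --extensions=vgf_adapter_model_explorer
```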

Use the file browser to open the `.vgf` model exported earlier during training.

## Wrapping up

Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (`.vgf`) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer. As a next step, head over to the [Model Gym repository](https://github.com/arm/neural-graphics-model-gym/tree/main) documentation to explore integration into your own game development workflow and keep building your skills.
---
title: Fine-tuning neural graphics models with Model Gym

minutes_to_complete: 45

who_is_this_for: This is an advanced topic for developers exploring neural graphics and interested in training and deploying upscaling models like Neural Super Sampling (NSS) using PyTorch and Arm’s hardware-aware backend.
further_reading:
    - resource:
        title: NSS on HuggingFace
        link: https://huggingface.co/Arm/neural-super-sampling
        type: website
    - resource:
        title: Vulkan ML Sample Learning Path
        link: /learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/
        type: learningpath


### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1 # _index.md always has weight of 1 to order correctly
layout: "learningpathall" # All files under learning paths have this same wrapper
learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---