116 changes: 116 additions & 0 deletions com.unity.render-pipelines.high-definition/Documentation~/AOVs.md
@@ -0,0 +1,116 @@
# Arbitrary Output Variables

Arbitrary Output Variables (AOVs) are additional images that an [HDRP Camera](HDRP-Camera.md) can generate. They can output additional information per pixel, which you can use later for compositing or additional image processing (such as denoising).

Here is an example of three AOVs, showing from left to right the Albedo, Normal, and Object ID of each pixel:

![](Images/aov_example.png)

In HDRP, you can access and configure AOVs in the following ways:
- Using the [HDRP Compositor tool](Compositor-Main).
- Using the [Unity Recorder](https://docs.unity3d.com/Packages/com.unity.recorder@latest/index.html) and the [AOV Recorder](https://docs.unity3d.com/Packages/com.unity.aovrecorder@latest/index.html) packages.
- Using the scripting API to set up a custom AOV request in any HDRP Camera in your Scene.

The first two options offer a limited selection of AOVs in their user interface, while the third option gives you much more flexibility over the data an HDRP Camera can output.

## Material property AOVs
Here is the list of Material properties that you can access with the AOV API.

| Material property | Description |
|-------------------|---------------------------|
| **Normal** | Outputs the surface normal. |
| **Albedo** | Outputs the surface albedo. |
| **Smoothness** | Outputs the surface smoothness. |
| **Ambient Occlusion** | Outputs the ambient occlusion (N/A for AxF). |
| **Specular** | Outputs the surface specularity. |
| **Alpha** | Outputs the surface alpha (pixel coverage). |
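
To request one of these properties, pass the corresponding `MaterialSharedProperty` value to `AOVRequest.SetFullscreenOutput`. The following fragment is a minimal sketch that assumes the same setup (and using directives) as the full example at the end of this page:

```
// Request a fullscreen output of the surface normal instead of the albedo.
// Each row of the table above maps to a MaterialSharedProperty value.
var aovRequest = AOVRequest.NewDefault();
aovRequest.SetFullscreenOutput(MaterialSharedProperty.Normal);
```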

## Lighting selection with AOVs
You can use AOVs to output the contribution from a selected list of [Lights](Light-Component.md), or you can use them to output only specific components of the lighting.

| Lighting property | Description |
|-------------------|---------------------------|
| **DiffuseOnly** | Renders only diffuse lighting (direct and indirect). |
| **SpecularOnly** | Renders only specular lighting (direct and indirect). |
| **DirectDiffuseOnly** | Renders only direct diffuse lighting. |
| **DirectSpecularOnly** | Renders only direct specular lighting. |
| **IndirectDiffuseOnly** | Renders only indirect diffuse lighting. |
| **ReflectionOnly** | Renders only reflections. |
| **RefractionOnly** | Renders only refractions. |
| **EmissiveOnly** | Renders only emissive lighting. |
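
To select a lighting component from a script, pass the corresponding `LightingProperty` value to `AOVRequest.SetFullscreenOutput`. The following fragment is a minimal sketch that assumes the same setup as the full example at the end of this page:

```
// Request a fullscreen output that contains only direct diffuse lighting.
var aovRequest = AOVRequest.NewDefault();
aovRequest.SetFullscreenOutput(LightingProperty.DirectDiffuseOnly);

// To output the contribution of specific Lights only, pass a list of the
// Light GameObjects as the third argument of AOVRequestBuilder.Add; the
// full example below passes null, which includes all Lights.
```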

## Custom Pass AOVs
Finally, you can use AOVs to output the results of [custom passes](Custom-Pass.md). In particular, you can output the cumulative results of all custom passes that are active on every custom pass injection point. This can be useful to output arbitrary information that custom passes compute, such as the Object ID of the Scene GameObjects.
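
To capture a custom pass output from a script, fill the `CustomPassAOVBuffers` array that you pass to `AOVRequestBuilder.Add`. The following fragment is a minimal sketch that assumes the same setup as the full example below, which leaves this array `null`:

```
// Capture the cumulative result of all custom passes that run at the
// AfterOpaqueDepthAndNormal injection point.
var customPassAovBuffers = new[]
{
    new CustomPassAOVBuffers(CustomPassInjectionPoint.AfterOpaqueDepthAndNormal,
        CustomPassAOVBuffers.OutputType.CustomPassBuffer)
};
```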

## Scripting API example
The following example script outputs albedo AOVs from an HDRP Camera and saves the resulting frames to disk as a sequence of .png images. To use the example script, attach it to an HDRP Camera and enter Play Mode.
```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;
using UnityEngine.Rendering.HighDefinition.Attributes;

public class AovRecorder : MonoBehaviour
{
    RTHandle m_TmpRT;       // The RTHandle used to render the AOV
    Texture2D m_ReadBackTexture;

    int m_Frames = 0;

    // Start is called before the first frame update
    void Start()
    {
        var camera = gameObject.GetComponent<Camera>();
        if (camera != null)
        {
            var hdAdditionalCameraData = gameObject.GetComponent<HDAdditionalCameraData>();
            if (hdAdditionalCameraData != null)
            {
                // Initialize a new AOV request
                var aovRequest = AOVRequest.NewDefault();

                AOVBuffers[] aovBuffers = null;
                CustomPassAOVBuffers[] customPassAovBuffers = null;

                // Request an AOV with the surface albedo
                aovRequest.SetFullscreenOutput(MaterialSharedProperty.Albedo);
                aovBuffers = new[] { AOVBuffers.Color };

                // Allocate the RTHandle that will store the intermediate results
                m_TmpRT = RTHandles.Alloc(camera.pixelWidth, camera.pixelHeight);

                // Add the request to a new AOVRequestBuilder
                var aovRequestBuilder = new AOVRequestBuilder();
                aovRequestBuilder.Add(aovRequest,
                    bufferId => m_TmpRT,
                    null,
                    aovBuffers,
                    customPassAovBuffers,
                    bufferId => m_TmpRT,
                    (cmd, textures, customPassTextures, properties) =>
                    {
                        // Callback to read back the AOV data and write them to disk
                        if (textures.Count > 0)
                        {
                            m_ReadBackTexture = m_ReadBackTexture ?? new Texture2D(camera.pixelWidth, camera.pixelHeight, TextureFormat.RGBAFloat, false);
                            RenderTexture.active = textures[0].rt;
                            m_ReadBackTexture.ReadPixels(new Rect(0, 0, camera.pixelWidth, camera.pixelHeight), 0, 0, false);
                            m_ReadBackTexture.Apply();
                            RenderTexture.active = null;

                            byte[] bytes = m_ReadBackTexture.EncodeToPNG();
                            System.IO.File.WriteAllBytes($"output_{m_Frames++}.png", bytes);
                        }
                    });

                // Now build the AOV request
                var aovRequestDataCollection = aovRequestBuilder.Build();

                // And finally set the request to the camera
                hdAdditionalCameraData.SetAOVRequests(aovRequestDataCollection);
            }
        }
    }
}
```
@@ -0,0 +1,121 @@
# Multiframe rendering and accumulation

Some rendering techniques, such as [path tracing](Ray-Tracing-Path-Tracing.md) and accumulation motion blur, combine information from multiple intermediate sub-frames to create a final "converged" frame. Each intermediate sub-frame can correspond to a slightly different point in time, which effectively computes physically-based accumulation motion blur that properly accounts for object rotation, deformation, and material or lighting changes.

The High Definition Render Pipeline (HDRP) provides a scripting API that allows you to control the creation of sub-frames and the convergence of multi-frame rendering effects. In particular, the API allows you to control the number of intermediate sub-frames (samples) and the point in time that corresponds to each one of them. Furthermore, you can use a shutter profile to control the weight of each sub-frame. A shutter profile describes how fast the physical camera opens and closes its shutter.

This API is particularly useful when recording path-traced movies. Normally, when editing a Scene, the convergence of path tracing restarts every time the Scene changes, to give artists an interactive editing workflow that lets them quickly visualize their changes. However, this behavior is not desirable during recording.

The following image shows a rotating GameObject with path tracing and accumulation motion blur, recorded using the multi-frame recording API.

![](Images/path_tracing_recording.png)

## API overview
The recording API is available in HDRP and has three calls:
- **BeginRecording**: Call this when you want to start a multi-frame render.
- **PrepareNewSubFrame**: Call this before rendering a new sub-frame.
- **EndRecording**: Call this when you want to stop the multi-frame render.

The only call that takes any parameters is **BeginRecording**. Here is an explanation of the parameters:

| Parameter | Description |
|-------------------|---------------------------|
| **Samples** | The number of sub-frames to accumulate. This parameter overrides the number of path tracing samples in the [Volume](Volumes.md). |
| **ShutterInterval** | The amount of time the shutter is open between two subsequent frames. A value of **0** results in an instant shutter (no motion blur). A value of **1** means there is no (time) gap between two subsequent frames. |
| **ShutterProfile** | An animation curve that specifies the shutter position during the shutter interval. Alternatively, you can provide the time when the shutter is fully open and the time when it begins closing. |

The example script below demonstrates how to use these API calls.

## Scripting API example
The following example demonstrates how to use the multi-frame rendering API in your scripts to properly record converged animation sequences with path tracing and/or accumulation motion blur. To use it, attach the script to a Camera in your Scene and, in the component's context menu, click the “Start Recording” and “Stop Recording” actions.

```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class FrameManager : MonoBehaviour
{
    // The number of samples (sub-frames) used for accumulation.
    public int samples = 128;

    [Range(0.0f, 1.0f)]
    public float shutterInterval = 1.0f;

    // The time during the shutter interval when the shutter is fully open.
    [Range(0.0f, 1.0f)]
    public float shutterFullyOpen = 0.25f;

    // The time during the shutter interval when the shutter begins closing.
    [Range(0.0f, 1.0f)]
    public float shutterBeginsClosing = 0.75f;

    bool m_Recording = false;
    int m_Iteration = 0;
    int m_RecordedFrames = 0;

    [ContextMenu("Start Recording")]
    void BeginMultiframeRendering()
    {
        RenderPipelineManager.beginFrameRendering += PrepareSubFrameCallBack;
        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline.BeginRecording(samples, shutterInterval, shutterFullyOpen, shutterBeginsClosing);
        m_Recording = true;
        m_Iteration = 0;
        m_RecordedFrames = 0;
    }

    [ContextMenu("Stop Recording")]
    void StopMultiframeRendering()
    {
        RenderPipelineManager.beginFrameRendering -= PrepareSubFrameCallBack;
        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline.EndRecording();
        m_Recording = false;
    }

    void PrepareSubFrameCallBack(ScriptableRenderContext cntx, Camera[] cams)
    {
        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        if (renderPipeline != null && m_Recording)
        {
            renderPipeline.PrepareNewSubFrame();
            m_Iteration++;
        }

        // Save a screenshot once per converged frame (every 'samples' sub-frames).
        if (m_Recording && m_Iteration % samples == 0)
        {
            ScreenCapture.CaptureScreenshot($"frame_{m_RecordedFrames++}.png");
        }
    }

    void OnDestroy()
    {
        if (m_Recording)
        {
            StopMultiframeRendering();
        }
    }

    void OnValidate()
    {
        // Make sure the shutter begins closing sometime after it is fully open (and not before).
        shutterBeginsClosing = Mathf.Max(shutterFullyOpen, shutterBeginsClosing);
    }
}
```

## Shutter profiles
The **BeginRecording** call allows you to specify how fast the camera shutter opens and closes. The speed of the camera shutter defines the so-called “shutter profile”. The following image demonstrates how different shutter profiles affect the appearance of motion blur on a blue sphere moving from left to right.

![](Images/shutter_profiles.png)

In all cases, the speed of the sphere is the same; the only change is the shutter profile. The horizontal axis of the profile diagram corresponds to time, and the vertical axis corresponds to the opening of the shutter.

You can define the first three profiles without an animation curve by setting the (open, close) parameters to (0, 1), (1, 1), and (0.25, 0.75), respectively. The last profile requires an animation curve.

In this example, the slow-open profile creates a motion-trail appearance, which artists might prefer. On the other hand, the smooth open-and-close profile creates smoother animations than the slow-open or uniform profiles.
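
For instance, in the example script above, you could construct such a profile and pass it to **BeginRecording** in place of the open and close times. This sketch assumes the overload of **BeginRecording** that accepts an animation curve:

```
// A smooth open-and-close profile: the shutter opens during the first half
// of the interval and closes during the second half.
var shutterProfile = new AnimationCurve(
    new Keyframe(0.0f, 0.0f),
    new Keyframe(0.5f, 1.0f),
    new Keyframe(1.0f, 0.0f));

var renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
renderPipeline.BeginRecording(samples, shutterInterval, shutterProfile);
```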

## Limitations
The multi-frame rendering API internally changes the `Time.timeScale` of the Scene. This means that:
- You cannot have different accumulation motion blur parameters per camera.
- Projects that already modify this parameter per frame are not compatible with this feature.
@@ -63,4 +63,14 @@ This example shows how the **Blade Count** and **Curvature** properties affect t
* On the left side, there is a five-blade iris that is slightly open, producing a pentagonal bokeh.
* On the right side, there is a five-blade iris that is wide open, producing a circular bokeh.

![](Images/Post-ProcessingDepthofField2.png)

## Path-traced depth of field

If you enable [path tracing](Ray-Tracing-Path-Tracing) and set **Focus Mode** to **Use Physical Camera**, HDRP computes depth of field directly during path tracing instead of as a post-processing effect.

Path-traced depth of field produces images without any artifacts, apart from noise when using an insufficient number of path-tracing samples. To reduce the noise level, increase the number of samples in the [Path Tracing](Ray-Tracing-Path-Tracing) settings and/or denoise the final frame.

HDRP computes path-traced depth of field at full resolution and ignores any quality settings from the Volume.

![](Images/Path-traced-DoF.png)
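
You can also select this mode from a script. The following sketch assumes a `Volume` component whose profile already contains a **Depth Of Field** override:

```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class UsePhysicalCameraFocus : MonoBehaviour
{
    void Start()
    {
        // Switch depth of field to physical-camera focusing, which path
        // tracing then computes directly instead of as a post process.
        var volume = GetComponent<Volume>();
        if (volume != null && volume.profile.TryGet<DepthOfField>(out var depthOfField))
        {
            depthOfField.focusMode.overrideState = true;
            depthOfField.focusMode.value = DepthOfFieldMode.UsePhysicalCamera;
        }
    }
}
```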
@@ -1,6 +1,6 @@
# Light Cluster

To compute light bounces for ray-traced effects, such as [Reflections](Ray-Traced-Reflections.html), [Global Illumination](Ray-Traced-Global-Illumination.html), [Recursive Rendering](Ray-Tracing-Recursive-Rendering.html), or Path Tracing. HDRP uses a structure to store the set of [Lights](Light-Component.html) that affect each region. In rasterization, HDRP uses the tile structure for opaque objects and the cluster structure for transparent objects. The main difference between these two structures and this one used for ray tracing is that this one is not based on the Camera frustum.
To compute light bounces for ray-traced effects such as [Reflections](Ray-Traced-Reflections.html), [Global Illumination](Ray-Traced-Global-Illumination.html), [Recursive Rendering](Ray-Tracing-Recursive-Rendering.html), or path tracing, HDRP uses a structure to store the set of [Lights](Light-Component.html) that affect each region. In rasterization, HDRP uses the tile structure for opaque objects and the cluster structure for transparent objects. The main difference between those two structures and the one used for ray tracing is that the ray tracing structure is not based on the Camera frustum.
For ray tracing, HDRP builds an axis-aligned grid which, in each cell, stores the list of Lights to fetch if an intersection occurs in that cell. Use this [Volume Override](Volume-Components.html) to change how HDRP builds this structure.

![](Images/RayTracingLightCluster1.png)
@@ -22,11 +22,11 @@ Path tracing shares the general requirements and setup as other ray tracing effe

## Add path tracing to your Scene

Path Tracing uses the [Volume](Volumes.html) framework, so to enable this feature, and modify its properties, you must add a Path Tracing override to a [Volume](Volumes.html) in your Scene. To do this:
Path tracing uses the [Volume](Volumes.html) framework, so to enable this feature and modify its properties, you must add a Path Tracing override to a [Volume](Volumes.html) in your Scene. To do this:

1. In the Scene or Hierarchy view, select a GameObject that contains a Volume component to view it in the Inspector.
2. In the Inspector, select Add Override > Path Tracing.
3. In the Inspector for the Path Tracing Volume Override, check the Enable option. If you do not see the Enable option, make sure your HDRP Project supports ray tracing. For information on setting up ray tracing in HDRP, see [getting started with ray tracing](Ray-Tracing-Getting-Started.html). This switches HDRP to path traced rendering and you should initially see a noisy image that converges towards a clean result.
3. In the Inspector for the Path Tracing Volume Override, check the Enable option. If you do not see the Enable option, make sure your HDRP Project supports ray tracing. For information on setting up ray tracing in HDRP, see [getting started with ray tracing](Ray-Tracing-Getting-Started.html). This switches HDRP to path-traced rendering and you should initially see a noisy image that converges towards a clean result.
4. If the image does not converge over time, select the drop-down next to the effect toggle and enable Animated Materials.

![](Images/RayTracingPathTracing3.png)
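
You can also enable the override from a script. The following sketch assumes a `Volume` component on the same GameObject whose profile already contains a **Path Tracing** override:

```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class EnablePathTracing : MonoBehaviour
{
    void Start()
    {
        // Enable the Path Tracing override, equivalent to checking Enable
        // in the Inspector.
        var volume = GetComponent<Volume>();
        if (volume != null && volume.profile.TryGet<PathTracing>(out var pathTracing))
        {
            pathTracing.enable.overrideState = true;
            pathTracing.enable.value = true;
        }
    }
}
```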
@@ -102,6 +102,8 @@
* [Motion Vectors](Motion-Vectors)
* [Anti-Aliasing](Anti-Aliasing)
* [Alpha Output](Alpha-Output)
* [Arbitrary Output Variables](AOVs)
* [Multiframe Rendering and Accumulation](Accumulation)
* Post-processing
* [Post-processing in HDRP](Post-Processing-Main)
* [Effect Execution Order](Post-Processing-Execution-Order)