13 changes: 8 additions & 5 deletions examples/jsm/tsl/display/SSGINode.js
@@ -4,9 +4,12 @@ import { clamp, normalize, reference, nodeObject, Fn, NodeUpdateType, uniform, v
const _quadMesh = /*@__PURE__*/ new QuadMesh();
const _size = /*@__PURE__*/ new Vector2();

// From Activision GTAO paper: https://www.activision.com/cdn/research/s2016_pbs_activision_occlusion.pptx
const _temporalRotations = [ 60, 300, 180, 240, 120, 0 ];
const _spatialOffsets = [ 0, 0.5, 0.25, 0.75 ];
// Extended temporal sampling patterns for better temporal distribution and reduced ghosting
// Original values from Activision GTAO paper: https://www.activision.com/cdn/research/s2016_pbs_activision_occlusion.pptx
// Doubled from 6 to 12 rotations and 4 to 8 offsets for better noise distribution across frames
// More rotation angles and spatial offsets improve temporal accumulation and reduce structured artifacts
const _temporalRotations = [ 60, 300, 180, 240, 120, 0, 90, 270, 30, 150, 210, 330 ];
@Mugen87 (Collaborator), Oct 28, 2025:

Minor feedback: The linked paper has no section 4.5.

The next thing is that the paper only refers to 6 temporal rotation values, not 12. There is no mention in the paper that doubling or generally increasing the number of rotation values beyond the default is preferable, so I'm not sure what the AI is referring to here. Since I see no improvement in the SSGI or the GTAO with these values, I vote to stick to the original reference.

I'll try to check Claude's TAA suggestions in the next few days 👍. The TAA seems to be improved a bit, but I'd like to pinpoint (and understand) which change makes the biggest difference and verify the changes against the resources.

Owner (PR author):

Sounds good!

const _spatialOffsets = [ 0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875 ];

let _rendererState;

@@ -338,8 +341,8 @@ class SSGINode extends TempNode {

const frameId = frame.frameId;

this._temporalDirection.value = _temporalRotations[ frameId % 6 ] / 360;
this._temporalOffset.value = _spatialOffsets[ frameId % 4 ];
this._temporalDirection.value = _temporalRotations[ frameId % _temporalRotations.length ] / 360;
this._temporalOffset.value = _spatialOffsets[ frameId % _spatialOffsets.length ];

} else {

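For reference, the per-frame selection in the SSGINode hunk above reduces to the following plain-JS sketch. The array values are the ones from this diff; the function name and return shape are illustrative stand-ins, not part of the actual node.

```js
// Sketch of the per-frame temporal sampling selection (illustrative only).
const temporalRotations = [ 60, 300, 180, 240, 120, 0, 90, 270, 30, 150, 210, 330 ];
const spatialOffsets = [ 0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875 ];

function selectTemporalParams( frameId ) {

	// indexing by array length (instead of the hard-coded % 6 and % 4)
	// keeps the lookup valid if the tables are ever resized again
	return {
		direction: temporalRotations[ frameId % temporalRotations.length ] / 360, // rotation as a fraction of a full turn
		offset: spatialOffsets[ frameId % spatialOffsets.length ]
	};

}

// e.g. selectTemporalParams( 7 ) -> { direction: 270 / 360 = 0.75, offset: 0.875 }
```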
72 changes: 62 additions & 10 deletions examples/jsm/tsl/display/TRAANode.js
@@ -417,9 +417,15 @@ class TRAANode extends TempNode {
const closestDepth = float( 1 ).toVar();
const farthestDepth = float( 0 ).toVar();
const closestDepthPixelPosition = vec2( 0 ).toVar();
const colorSum = vec4( 0 ).toVar();
const colorSqSum = vec4( 0 ).toVar();
const sampleCount = float( 0 ).toVar();

// sample a 3x3 neighborhood to create a box in color space
// clamping the history color with the resulting min/max colors mitigates ghosting
// Sample a 3x3 neighborhood to create a box in color space
// Also compute mean and variance for variance clipping
// Reference: "An Excursion in Temporal Supersampling" by Marco Salvi (GDC 2016)
// https://www.gdcvault.com/play/1023521/An-Excursion-in-Temporal-Supersampling
// Salvi describes computing first and second moments from neighborhood samples

Loop( { start: int( - 1 ), end: int( 1 ), type: 'int', condition: '<=', name: 'x' }, ( { x } ) => {

@@ -431,6 +437,13 @@
minColor.assign( min( minColor, colorNeighbor ) );
maxColor.assign( max( maxColor, colorNeighbor ) );

// Accumulate for variance calculation
// Reference: Salvi (2016) - "Variance clipping requires computing first and second moments"
// E[X] = mean, E[X²] = second moment, Var(X) = E[X²] - E[X]²
colorSum.addAssign( colorNeighbor );
colorSqSum.addAssign( colorNeighbor.mul( colorNeighbor ) );
sampleCount.addAssign( 1.0 );
@Mugen87 (Collaborator), Oct 28, 2025:

I was curious about these changes, so I've dug into them today 🤓.

First of all, some of the resources Claude mentions do not really reflect what the AI changed. It was necessary to read the resources and watch the presentations the AI refers to in order to realize this.

In any event, the suggested variance clipping from the NVIDIA talk seems to be a good replacement for the default color clamping. I would not mix the values with the traditional color box but just use the variance values like NVIDIA suggests in the GDC presentation.

Also, using the magnitude of the motion vector and the world position difference to influence the weight of the current sample does make sense (using motion vectors to influence the blending is mentioned in the INSIDE presentation). The idea is to give the current sample more weight if a lot of motion is detected. This change actually makes the biggest difference, so we should make use of it. However, I'm not sure where the AI gets the multiplication factors (0.5, 10 and 0.25) from; they are not mentioned anywhere in the resources. Still, because of the good results, I think we should document that these values were suggested by Claude, use them and see how it goes.

I'm unsure about the new constants for the disocclusion check. These constants aren't mentioned anywhere in the resources either, and I don't see a noticeable difference between new and old.

@zalo What do you think about this? The world position difference threshold is now considerably larger (0.5), the value for the depth a bit lower (0.00001).

I can make a PR based on this one and update the code and comments according to my findings. I'll wait for @zalo's feedback though.

@zalo (Contributor), Oct 28, 2025:

So the way to visualize the disoccluded regions is to put this at the end of TRAANode.js:

			If( strongDisocclusion, () => {
				smoothedOutput.assign(vec4(float(1.0), float(0.0), float(0.0), float(1.0)));
			} );

or, before this PR, use:

			If( rejectPixel, () => {
				smoothedOutput.assign(vec4(float(1.0), float(0.0), float(0.0), float(1.0)));
			} );

Comparing the two, it looks like it significantly changed the behavior of the disocclusion algorithm, so it needs a larger threshold...

This video shows before the PR first, and then after:

[Video attachment: Recording.2025-10-28.151414.mp4]

Should the motion vectors with the TRAA jitter be adding enough motion to trigger a 0.05 unit change? Seems big... I like the idea of using the motion vectors better though 😅

I'll admit I was never thrilled with the worldspace threshold anyway, since the optimal value varies depending on the size of the scene (how do outline shaders solve this?), but some time should be spent messing with this debug view just to ensure that it's doing what we expect...

@zalo (Contributor), follow-up:

Also, unfortunately, the reference "Karis (2014) Section 3.2 - Disocclusion handling" doesn't check out: the paper has no section 3.2 or anything on disocclusion handling... :-(

It seems to be referring to this:
https://advances.realtimerendering.com/s2014/#_HIGH-QUALITY_TEMPORAL_SUPERSAMPLING
(Presentation as .pdf, YouTube video link)

(I like Claude, and I tried to use it to implement this disocclusion thing four or five times before I had to admit defeat, buckle down, and figure out the right way to pass around this depth buffer manually 💀)


const currentDepth = depthTexture.sample( uvNeighbor ).r.toVar();

// find the sample position of the closest depth in the neighborhood (used for velocity)
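The comment above refers to the closest-depth velocity dilation described in Karis (2014): the motion vector is read at the 3x3 neighbor with the smallest depth, so silhouette pixels reproject with the foreground's motion. A CPU-style sketch of the idea follows; `sampleDepth` and `sampleVelocity` are hypothetical texel fetches, not TRAANode's actual API.

```js
// Illustrative sketch of closest-depth velocity dilation (Karis 2014); not the node's actual code.
function dilatedVelocity( x, y, sampleDepth, sampleVelocity ) {

	let closestDepth = 1.0;
	let closestX = x, closestY = y;

	for ( let dy = - 1; dy <= 1; dy ++ ) {

		for ( let dx = - 1; dx <= 1; dx ++ ) {

			const depth = sampleDepth( x + dx, y + dy );

			if ( depth < closestDepth ) { // smaller depth = closer to the camera here

				closestDepth = depth;
				closestX = x + dx;
				closestY = y + dy;

			}

		}

	}

	// reproject with the motion of the nearest surface so edges follow the foreground
	return sampleVelocity( closestX, closestY );

}
```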
@@ -461,9 +474,27 @@
const currentColor = sampleTexture.sample( uvNode );
const historyColor = historyTexture.sample( uvNode.sub( offset ) );

// clamping
// Variance-based color clamping (reduces ghosting better than simple AABB min/max)
// Reference: "An Excursion in Temporal Supersampling" by Marco Salvi (GDC 2016)
// https://www.gdcvault.com/play/1023521/An-Excursion-in-Temporal-Supersampling
// Variance clipping rejects outlier history samples more effectively than simple clamping

// Compute mean and standard deviation of the 3x3 neighborhood
// Using variance: Var(X) = E[X²] - E[X]²
const colorMean = colorSum.div( sampleCount );
const colorVariance = colorSqSum.div( sampleCount ).sub( colorMean.mul( colorMean ) );
const colorStdDev = max( vec4( 0.0001 ), colorVariance ).sqrt();

const clampedHistoryColor = clamp( historyColor, minColor, maxColor );
// Clamp history to mean ± gamma * stddev
// Gamma = 1.25 provides good balance between ghosting reduction and flickering
// As recommended in Salvi's GDC 2016 presentation
// Lower gamma = tighter clipping = less ghosting but more flickering
// Higher gamma = looser clipping = more ghosting but less flickering
const gamma = float( 1.25 );
const varianceMin = colorMean.sub( gamma.mul( colorStdDev ) );
const varianceMax = colorMean.add( gamma.mul( colorStdDev ) );

const clampedHistoryColor = clamp( historyColor, varianceMin, varianceMax );
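As an aside from the diff itself: the moment bookkeeping above reduces to the following plain-JS sketch, shown here with the clip-toward-mean variant of variance clipping that Mugen87 recommends instead of the component-wise clamp. The accumulators mirror `colorSum`, `colorSqSum` and `sampleCount`; the array math and function name are stand-ins.

```js
// Illustrative variance clipping on plain [ r, g, b ] arrays (not the node's actual code).
function varianceClipHistory( history, colorSum, colorSqSum, sampleCount, gamma = 1.25 ) {

	const mean = colorSum.map( ( c ) => c / sampleCount );           // E[X]
	const secondMoment = colorSqSum.map( ( c ) => c / sampleCount ); // E[X^2]
	const stdDev = secondMoment.map( ( m, i ) => Math.sqrt( Math.max( 0.0001, m - mean[ i ] * mean[ i ] ) ) ); // sqrt( Var ) with the same 0.0001 floor as the diff

	// pull the history sample toward the neighborhood mean until it lies
	// inside the mean +/- gamma * stdDev box (instead of clamping per channel)
	const offset = history.map( ( c, i ) => c - mean[ i ] );
	const unit = offset.map( ( o, i ) => Math.abs( o ) / ( gamma * stdDev[ i ] ) );
	const maxUnit = Math.max( ...unit );

	if ( maxUnit > 1.0 ) {

		return mean.map( ( m, i ) => m + offset[ i ] / maxUnit );

	}

	return history; // history already lies inside the variance box

}
```

The mean +/- gamma * stdDev bounds are exactly the `varianceMin` / `varianceMax` computed above; the only behavioral difference in this sketch is that an out-of-range history sample is pulled toward the mean rather than clamped channel by channel.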

// calculate current frame world position

@@ -481,15 +512,36 @@
// calculate difference in world positions

const worldPositionDifference = length( currentWorldPosition.sub( previousWorldPosition ) ).toVar();
worldPositionDifference.assign( min( max( worldPositionDifference.sub( 1.0 ), 0.0 ), 1.0 ) );

const currentWeight = float( 0.05 ).toVar();
// Adaptive blend weights based on velocity magnitude and world position difference
// Reference: "High Quality Temporal Supersampling" by Brian Karis (SIGGRAPH 2014)
// https://advances.realtimerendering.com/s2014/index.html#_HIGH-QUALITY_TEMPORAL_SUPERSAMPLING
// Uses velocity to modulate blend factor to reduce ghosting on moving objects
const velocityMagnitude = length( offset ).toVar();

// Higher velocity or position difference = more weight on current frame to reduce ghosting
// This combines motion-based rejection with disocclusion detection
const motionFactor = max( worldPositionDifference.mul( 0.5 ), velocityMagnitude.mul( 10.0 ) ).toVar();
motionFactor.assign( min( motionFactor, 1.0 ) );

// Base current weight: 0.05 (low motion) to 0.3 (high motion)
// The 0.05 base preserves temporal stability while 0.3 max prevents excessive ghosting
// Reference: "Temporal Reprojection Anti-Aliasing in INSIDE" by Playdead (GDC 2016)
// https://www.gdcvault.com/play/1022970/Temporal-Reprojection-Anti-Aliasing-in
const currentWeight = float( 0.05 ).add( motionFactor.mul( 0.25 ) ).toVar();
const historyWeight = currentWeight.oneMinus().toVar();
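For concreteness, the adaptive weight above reduces to the small function below. The 0.05, 0.25, 0.5 and 10 constants are the values proposed in this PR (not taken from the cited references, as noted in the review), and the resolve line is only sketched in a comment because it lies outside the visible hunk.

```js
// Illustrative sketch of the adaptive blend weights (constants are this PR's proposed values).
function blendWeights( worldPositionDifference, velocityMagnitude ) {

	const motionFactor = Math.min( Math.max( worldPositionDifference * 0.5, velocityMagnitude * 10.0 ), 1.0 );

	const currentWeight = 0.05 + motionFactor * 0.25; // 0.05 when static, up to 0.3 under strong motion
	return { currentWeight, historyWeight: 1.0 - currentWeight };

}

// blendWeights( 0.0, 0.0 ) -> { currentWeight: 0.05, historyWeight: 0.95 }
// blendWeights( 0.0, 0.2 ) -> motionFactor saturates at 1 -> { currentWeight: 0.3, historyWeight: 0.7 }

// the resolve itself (outside this hunk) would then blend along the lines of:
// output = current * currentWeight + clampedHistory * historyWeight
```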

// zero out history weight if world positions are different (indicating motion) except on edges

const rejectPixel = worldPositionDifference.greaterThan( 0.01 ).and( farthestDepth.sub( closestDepth ).lessThan( 0.0001 ) );
If( rejectPixel, () => {
// Edge detection for proper anti-aliasing preservation
// Edges need special handling to preserve anti-aliasing quality
// Reference: "A Survey of Temporal Antialiasing Techniques" by Yang et al. (2020)
// https://www.elopezr.com/temporal-aa-and-the-quest-for-the-holy-trail/
const isEdge = farthestDepth.sub( closestDepth ).greaterThan( 0.00001 );

// Disocclusion detection: Reject history completely on strong disocclusion (but preserve edges)
// Disocclusion occurs when previously hidden geometry becomes visible
// World position difference > 0.5 indicates likely disocclusion or fast motion
const strongDisocclusion = worldPositionDifference.greaterThan( 0.5 ).and( isEdge.not() );
If( strongDisocclusion, () => {

currentWeight.assign( 1.0 );
historyWeight.assign( 0.0 );
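Finally, a hedged worked example of the disocclusion test in the last hunk, using the thresholds proposed in this PR (0.5 for the world-position difference, 0.00001 for the depth spread); the helper below is illustrative only.

```js
// Illustrative evaluation of the disocclusion test (thresholds are this PR's proposed values).
function rejectsHistory( worldPositionDifference, closestDepth, farthestDepth ) {

	const isEdge = ( farthestDepth - closestDepth ) > 0.00001;
	const strongDisocclusion = ( worldPositionDifference > 0.5 ) && ! isEdge;
	return strongDisocclusion; // true -> currentWeight = 1, historyWeight = 0

}

// rejectsHistory( 0.6, 0.1234, 0.1234 ) -> true  (large position change on a flat region: drop history)
// rejectsHistory( 0.6, 0.1200, 0.1300 ) -> false (depth spread marks an edge: keep blending to preserve AA)
```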