TristanS-A/DistortionSystem


Depth-Based Distortion System

Tristan Schonfeldt-Aultman


Overview:

This system adds depth-respecting distortion to a game or application as a post-processing effect. The basic idea is that distortion is simulated in world space so that it respects depth: if an object generating distortion is behind an object in the foreground, the distortion does not affect that foreground object when the post-processing effect is applied. Conversely, the distortion still affects and distorts anything behind the object generating it. Because a post-processing effect is applied only after the scene has been rendered, some extra steps are needed to make the effect apply to some objects and not others based on the relative depth of the effect's origin.

Implementation:

This system allows different render layers to draw different distortion patterns, which can easily be added and configured in the main distortion manager script. The strength of the distortion on each individual object, as well as other uniforms, can be set at runtime to create dynamic distortion effects. All of these distortion layers automatically respect depth when the final distortion texture is processed, due to how the system functions. There is also a layer that objects can be assigned to so that they are never affected by distortion.


Goals and Intentions:

The original inspiration for this system came from the depth-based distortion effects used in Metroid Prime 4, where projectiles shot by Samus's arm cannon have a distortion field surrounding them that distorts objects behind the projectile. However, when the projectile itself travels behind an object, the distortion is no longer visible, along with the projectile. This apparent use of a post-processing effect, executed only on a texture of the scene after the full scene has been rendered, to simulate an effect that looks like it respects the spatial positions of objects in the world was the driving curiosity behind this project. An additional goal was to turn this system into an easy-to-use tool for adding the effect to applications, with implementations for different render pipelines and game engines, but this was put on the back burner in order to complete and polish the system itself.

Technical Highlights:

The core implementation of this system is not terribly complex: the distortion texture used in the post-processing effect is constructed by rendering additional objects in world space to a render texture using the same depth buffer that was used to render the scene. In other words, the scene is rendered first, and then additional objects such as planes or spheres are rendered with a given shader that draws distortion data to a render texture, depth-testing against the scene's depth buffer. This reuse of the depth buffer naturally occludes any distortion data rendered behind another object, creating a distortion texture that appears to respect scene depth based on where the distortion originates. However, while this initial implementation is the core of the system, on its own it produces artifacts and unrealistic behavior that need to be addressed.
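
As a sketch of what that pass setup might look like (assuming Unity-style ShaderLab syntax; exact state names vary by engine), the key is depth-testing against the scene's buffer while never writing to it:

```
// Hypothetical pass state for a distortion-data object. ZTest against the
// scene's depth buffer occludes distortion data behind foreground geometry;
// ZWrite Off keeps the scene's depth intact for later distortion objects.
Pass
{
    ZTest LEqual    // test against the depth laid down by the scene render
    ZWrite Off      // do not modify the scene's depth buffer

    // ... HLSL program that outputs a distortion offset pattern
    //     (e.g., packed into the texture's red/green channels)
}
```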

Challenges:

Three main issues arose with the initial implementation of this system. The first was that even though the distortion data itself was rendered based on the depth of the scene, due to how distortion is processed over a texture, foreground objects were still being sampled from when the distortion was applied to the final scene texture. This looks strange on its own: a foreground object is rendered normally, but a "copy" of that object appears to be distorted behind itself. This is the core issue that plagues most implementations of depth-respecting distortion, and most solutions revolve around specific use cases where certain objects are selectively chosen to be affected by distortion or not. For example, in a heat distortion effect, objects might be distorted if they are past a certain distance from the camera, but not if they are within that distance, as sketched below.
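
A minimal sketch of that kind of use-case-specific approach (HLSL, with hypothetical texture and uniform names) might gate the distortion on a fixed camera distance:

```hlsl
// Heat-haze style cutoff: fragments nearer than _HeatStartDistance are left
// undistorted, everything farther gets the full offset. Simple, but tied to
// this one use case.
float rawDepth = tex2D(_CameraDepthTexture, uv).r;
float eyeDepth = LinearEyeDepth(rawDepth, _ZBufferParams); // URP-style helper
float2 offset  = distortionVector * step(_HeatStartDistance, eyeDepth);
float4 color   = tex2D(_MainTex, uv + offset);
```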


To avoid use-case-specific solutions and create a more general one, this system takes a different approach. Rather than deciding which objects should be affected by distortion, it works out which fragments should be sampled from when applying distortion. To accomplish this, the depth information from drawing the distortion data is compared against the depth of the scene at the fragment being sampled through the distortion. In other words, when a fragment is about to be sampled during distortion processing, the scene depth at the distorted UV coordinates is compared against the depth of the distortion objects at the original, undistorted UV coordinates. If the scene at those offset UV coordinates is closer to the camera than the distortion object, that fragment is not sampled from.
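
In HLSL, that test might look something like the following (texture and uniform names are hypothetical, and depth is assumed to increase with distance from the camera; a reversed-Z setup would flip the comparison):

```hlsl
float2 offset      = tex2D(_DistortionTex, uv).rg * _DistortionPower;
float2 distortedUV = uv + offset;

// Scene depth at the fragment we want to pull color from (distorted UV),
// versus the distortion object's depth at the current fragment (original UV).
float sceneDepth      = tex2D(_SceneDepthTex, distortedUV).r;
float distortionDepth = tex2D(_DistortionDepthTex, uv).r;

// Only sample through the distortion if that fragment is behind the
// distortion object; otherwise a foreground object would smear behind itself.
float4 color = (sceneDepth >= distortionDepth)
    ? tex2D(_MainTex, distortedUV)
    : tex2D(_MainTex, uv);
```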

However, this causes an obvious hard cut in the resulting distortion wherever a foreground object would be. To fix this, a solution was conceptualized to incrementally sample from decreasingly offset UV coordinates along the same distortion vector, effectively backtracking along that vector (I imagine this is sort of like reverse ray marching in 2D). This fills those hard-cut areas of missing distortion with new distortion, dynamically sampled from fragments that are behind the distortion object, creating a convincing alternative distortion that blends with its surroundings.
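
A sketch of that backtracking loop, using the same hypothetical names as above (the step count is arbitrary):

```hlsl
float2 sampleUV = uv + offset;
const int STEPS = 8;

[unroll]
for (int i = 0; i < STEPS; i++)
{
    // Stop as soon as the fragment we would sample is behind the
    // distortion object and therefore safe to show through it.
    if (tex2D(_SceneDepthTex, sampleUV).r >= distortionDepth)
        break;

    sampleUV -= offset / STEPS; // back up along the distortion vector
}

float4 color = tex2D(_MainTex, sampleUV);
```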


There was one problem with this method of sampling distortion, though. Because the distortion offset is a floating-point value, sampling the render textures with bilinear interpolation caused some of the depth and color samples to interpolate slightly with neighboring pixels. This caused the edge of a foreground object to have a single pixel's worth (though technically more like a fraction of a pixel's worth) of color or depth data influencing the resulting distortion, creating strange one-pixel-wide artifacts of the foreground object in the result. While switching to point filtering effectively removed this issue, it also made the distortion effect look much less smooth and more obviously pixelated. By some stroke of luck, though, I was able to come up with a method to isolate where this bilinear interpolation influence was occurring: using a step function with an absolute edge value of 0.01 to tell whether the depth value of the distortion objects sampled without the distortion offset was still actually greater than the depth value sampled with the distortion offset. If bilinear interpolation influence is detected, an additional offset of one pixel is simply added to the offset UV coordinates used to distort the main scene texture.
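
Loosely, that check might look like this (same hypothetical names; `_MainTex_TexelSize.xy` is Unity's convention for one pixel in UV space):

```hlsl
// Distortion-object depth at the original UV versus at the distorted UV.
float depthHere   = tex2D(_DistortionDepthTex, uv).r;
float depthOffset = tex2D(_DistortionDepthTex, distortedUV).r;

// step(0.01, x) = 1 when the difference is at least 0.01, i.e. when bilinear
// filtering has blended a neighboring foreground texel into the offset sample.
float bleed = step(0.01, depthHere - depthOffset);

// When bleed is detected, push the sample one extra texel along the offset
// direction to escape the interpolated edge.
float2 finalUV = distortedUV + bleed * normalize(offset) * _MainTex_TexelSize.xy;
float4 color   = tex2D(_MainTex, finalUV);
```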


Finally, some additional challenges needed to be addressed. Distortion data objects rendered on top of other distortion objects would overwrite the distortion data already in the final distortion texture, causing unnatural edges in the final image. To fix this, specific blend ops needed to be set in the shaders that draw distortion data, such as additive blending (One One) instead of simply overwriting with the incoming color.

Additionally, since the depth information of these distortion objects is needed when processing the final distortion texture, the depth of these objects has to be stored somewhere. These shaders cannot write to the attached depth buffer, because it is being used to depth-test against the scene, so the depth information has to be calculated and stored in a separate buffer. To accomplish this, multiple render targets (MRT) are used to simultaneously render the depth information and the distortion data to separate textures. Since the blend ops for the distortion data are additive, different blend ops have to be specified for the depth target, as depth should not accumulate. Because depth writing is turned off for these shaders, and the depth of the closest object should overwrite the depth of farther objects, the Max blend op is used to properly store depth information.

These blend ops are, unfortunately, the reason the distortion data and depth need to be stored in separate render textures. By using a max function on a specific channel of the distortion data render texture, the depth data could probably be stored in the z or w component of that texture, so only one render texture and no MRT would be needed; however, this would require both reading from and writing to the same texture in the shader, which might pose some issues.
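
As a sketch of the blend state (assuming Unity's ShaderLab syntax for per-target blending, with target 0 holding distortion data and target 1 holding distortion depth):

```
Pass
{
    ZTest LEqual
    ZWrite Off

    Blend 0 One One   // distortion data: additive, so overlapping
    BlendOp 0 Add     // distortion objects accumulate instead of overwriting

    Blend 1 One One
    BlendOp 1 Max     // depth: max(src, dst) keeps the closest distortion
                      // object, given depth encoded so nearer = larger

    // ... fragment shader writing distortion data to SV_Target0
    //     and calculated depth to SV_Target1
}
```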

Future Plans:

As mentioned previously, turning this system into an easy-to-use tool for multiple render pipelines and engines was put on the back burner, so that would most definitely be the next step for this project. Additionally, optimization could be explored to make the system run faster and use less memory; for example, a whopping six render textures are currently used for this effect. More distortion effects could also be added, and luckily the framework for this is already in place. This project was definitely more complex than it initially seemed, and I am quite proud of how it turned out!
