Description
Context:
Currently, three.js renders opaque objects first, followed by transparent ones. In some scenarios, it's necessary to insert custom post-processing after the opaque pass but before the transparent pass, especially when effects depend on a clean depth buffer.
Use case:
In my 3D scene, I’m implementing effects like atmospheric scattering and water surface masking. These effects rely heavily on the depth buffer from opaque geometry.
However, since the transparent pass may overwrite portions of the depth buffer (and may not write depth at all), trying to apply such effects after the full scene render produces incorrect results — e.g., atmospheric scattering bleeding through transparent objects, or water masks being misaligned.
Current workaround:
Right now, I insert a dummy opaque object with a high renderOrder, and use its onBeforeRender callback to:
Apply the post-processing logic while the depth buffer is still intact.
Copy the processed result back to the default framebuffer (to avoid breaking the transparent pass).
Handle MSAA, preserve color and depth buffers, and ensure compatibility with the renderer's internals.
This approach works, but it is fragile and overly complex: it depends on the renderer's internal pass ordering and may not be portable across renderers.
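For reference, here is a minimal sketch of the workaround described above. It assumes an existing scene, camera, and renderer, and the helper applyDepthEffects is hypothetical (it stands in for my scattering/masking passes); the dummy-object trick itself relies on undocumented ordering behavior, which is exactly the problem.

```javascript
import * as THREE from 'three';

// Render target that keeps a readable depth texture; `samples` enables MSAA.
const sceneTarget = new THREE.WebGLRenderTarget(width, height, {
  samples: 4,
  depthTexture: new THREE.DepthTexture(width, height),
});

// Dummy opaque object: with a very high renderOrder it is drawn last among
// opaque objects, so its onBeforeRender fires after all other opaque geometry
// but before any transparent object.
const hook = new THREE.Mesh(
  new THREE.BufferGeometry(),        // empty geometry: nothing is actually drawn
  new THREE.MeshBasicMaterial()
);
hook.frustumCulled = false;          // must never be culled, or the hook is skipped
hook.renderOrder = Number.MAX_SAFE_INTEGER;

hook.onBeforeRender = (renderer /*, scene, camera */) => {
  const previousTarget = renderer.getRenderTarget();

  // At this point the depth buffer contains only opaque geometry, so
  // depth-based effects composite correctly.
  applyDepthEffects(renderer, sceneTarget.depthTexture); // hypothetical helper

  // Restore renderer state so the transparent pass continues unaffected.
  renderer.setRenderTarget(previousTarget);
};

scene.add(hook);
```

The fragility is visible in the sketch: it only works because opaque objects happen to be sorted by renderOrder and because onBeforeRender happens to fire at a usable point; neither is a documented contract.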
WebGPU concern:
Currently I’m using WebGLRenderer, but I’m unsure if this workaround will work with WebGPURenderer, where render passes and command buffers are handled differently and more strictly. This increases the need for an official and stable insertion point in the rendering pipeline.
Solution
Introduce a proper way to hook into the rendering pipeline after opaque objects are rendered, but before any transparent ones. This could be:
A new callback such as onAfterOpaquePass(renderer, scene, camera), with consistent behavior guaranteed across WebGLRenderer and WebGPURenderer.
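To illustrate, usage of the proposed hook might look like the following. This is purely hypothetical: onAfterOpaquePass does not exist in three.js today, and the name, placement on the scene, and signature are all up for discussion.

```javascript
// Hypothetical API sketch — not part of three.js.
scene.onAfterOpaquePass = (renderer, scene, camera) => {
  // The depth buffer contains only opaque geometry here.
  // Run depth-dependent post-processing (atmospheric scattering,
  // water surface masking) and write the result back to the current
  // render target before any transparent object is drawn.
};
```

Because the renderer itself would invoke the callback between its opaque and transparent passes, users would no longer need dummy objects or assumptions about internal sort order, and each backend (WebGL, WebGPU) could schedule the hook correctly within its own pass/command-buffer model.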
Alternatives
Additional context
No response