
SsaoPass: artifacts with depth buffer #17865

Closed · throni3git opened this issue Nov 4, 2019 · 17 comments

@throni3git (Contributor) commented Nov 4, 2019

Description of the problem

The SSAO example shows several artifacts when I change the camera settings. In particular, the value of the camera's near clipping plane is critical here. For a project of mine I need a small camera near value, but as soon as I set a useful value, z-fighting artifacts occur, both in the beauty and in the SSAO output mode.

(screenshot showing the z-fighting artifacts)

When the depthTexture of the beautyRenderTarget in SSAOPass.js is switched off, the beauty output mode delivers a usable rendering again, but then the depth information is of course missing.

	this.beautyRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, {
		minFilter: THREE.LinearFilter,
		magFilter: THREE.LinearFilter,
		format: THREE.RGBAFormat,
		// depthTexture: depthTexture,
		// depthBuffer: true
	} );

Another attempt was to use the logarithmic depth buffer on THREE.WebGLRenderer, but the reimplementation in 2f9bd63 does not seem to contain a normalization that accounts for the logarithmic nature of the depth buffer.

I'd like to know how to achieve a good effect with the logarithmic depth buffer. What are the data type and value range of the depth texture, and why is only the x (or red) channel of interest? I'd also like to know why the rendering quality seems to increase when I switch off the depthTexture (as seen in the code above) for the beauty pass.

Three.js version
  • r110
  • r103
Browser
  • All of them
  • Chrome
  • Firefox
OS
  • Windows
  • macOS
  • Linux (on my Linux installation on the same Intel HD GPU powered laptop there are no issues)
Hardware Requirements (graphics card, VR Device, ...)

Depth / Stencil Bits: [24, 8]
The extensions WEBGL_depth_texture and EXT_frag_depth are available.

@Mugen87 (Collaborator) commented Nov 5, 2019

Can you please share your camera parameters in this topic?

Mugen87 added the Addons label Nov 5, 2019
@throni3git (Contributor, Author)

The camera is created with camera = new THREE.PerspectiveCamera( 65, window.innerWidth / window.innerHeight, 0.1, 700 );. The original example uses a near clipping value of 100.

@Mugen87 (Collaborator) commented Nov 6, 2019

When the depthTexture of the beautyRenderTarget in SSAOPass.js is switched off, the beauty output mode delivers a usable rendering again, but then the depth information is of course missing.

It seems the mentioned effect of using a depth texture is the first issue to solve and is unrelated to SSAO. I've created a live example that demonstrates the precision issue here: https://jsfiddle.net/u06qpbgz/

As you can see, as soon as you disable the depth texture, the most obvious glitches go away, although certain box intersections are still a bit shaky.

@throni3git (Contributor, Author)

Yes, this is unrelated to SSAO and only related to the usage of a render target. The little shark fins that remain when switching off the depthTexture are OK considering the camera near value.

@throni3git (Contributor, Author)

I solved the depth issue by rendering the depth data to a separate render target, using unpackRGBAToDepth and packDepthToRGBA from packing.glsl.
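For reference, a rough sketch of that workaround (the names are placeholders, not the exact code from my project): the scene is rendered once more with a depth material that packs depth into RGBA, and the SSAO shader reads it back with unpackRGBAToDepth.

	const depthRenderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
		minFilter: THREE.NearestFilter,
		magFilter: THREE.NearestFilter,
		format: THREE.RGBAFormat
	} );

	const depthMaterial = new THREE.MeshDepthMaterial();
	depthMaterial.depthPacking = THREE.RGBADepthPacking; // uses packDepthToRGBA() in the shader
	depthMaterial.blending = THREE.NoBlending;

	// extra depth pass
	scene.overrideMaterial = depthMaterial;
	renderer.setRenderTarget( depthRenderTarget );
	renderer.render( scene, camera );
	scene.overrideMaterial = null;

	// in the SSAO fragment shader, the depth is then recovered with
	// unpackRGBAToDepth( texture2D( tDepth, vUv ) ) from packing.glsl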

Anyway, should I create a new issue for the problem related to the beauty output?

@Mugen87 (Collaborator) commented Feb 19, 2020

After some investigation I think I know what causes this issue.

The problem is that in WebGL 1, you can only use DepthTexture with THREE.UnsignedShortType or THREE.UnsignedIntType. This enables only 16 bits of precision, whereas RGBA depth packing enables 32 bits. That explains the difference in quality and the mentioned artifacts.

So when using DepthTexture with WebGL 1, you have the advantage of better performance compared to RGBA depth packing, which requires an additional render pass. However, the quality of the depth buffer depends very sensitively on your frustum configuration/camera parameters.

As shown in the following fiddle, you can use a WebGL 2 rendering context instead and then use THREE.FloatType for DepthTexture.type. This brings back 32 bits of precision.

https://jsfiddle.net/gt1h5k8L/

Since the usage of DepthTexture is limited in WebGL 1, I think there is not much we can do here. Using WebGL 2 and THREE.FloatType seems like the best solution.
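
A minimal sketch of this setup (r110-era API; the fiddle may differ in details, and the sizes are placeholders):

	const canvas = document.createElement( 'canvas' );
	const context = canvas.getContext( 'webgl2' );
	const renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );

	const depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );
	depthTexture.format = THREE.DepthFormat;
	depthTexture.type = THREE.FloatType; // 32-bit depth, only valid with a WebGL 2 context

	const beautyRenderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
		minFilter: THREE.LinearFilter,
		magFilter: THREE.LinearFilter,
		format: THREE.RGBAFormat,
		depthTexture: depthTexture,
		depthBuffer: true
	} );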

@vanruesc (Contributor) commented Mar 4, 2020

@Mugen87 I found that setting the format of the depth texture to DepthStencilFormat and the type to UnsignedInt248Type also works. This doesn't require WebGL 2:

https://jsfiddle.net/xkog8r93/
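
A sketch of what I mean (assumed values; the fiddle has the full setup): the depth texture is configured with DepthStencilFormat/UnsignedInt248Type and attached to a render target that also has a stencil buffer.

	const depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );
	depthTexture.format = THREE.DepthStencilFormat;
	depthTexture.type = THREE.UnsignedInt248Type; // 24 bits depth + 8 bits stencil

	const renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
		minFilter: THREE.LinearFilter,
		magFilter: THREE.LinearFilter,
		format: THREE.RGBAFormat,
		depthTexture: depthTexture,
		depthBuffer: true,
		stencilBuffer: true
	} );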

@Mugen87 (Collaborator) commented Mar 4, 2020

Does this have no side effects? See #8596 and #8597.

@Mugen87 (Collaborator) commented Mar 4, 2020

BTW: UNSIGNED_INT_24_8_WEBGL enables only a 24-bit depth texture whereas float enables 32 bits, so it's not equivalent.

@vanruesc (Contributor) commented Mar 4, 2020

Does this have no side effects?

I haven't encountered any issues with this depth texture setup. As far as I can tell, WebGLTextures already contains code for handling DepthStencilFormat and it also seems to handle stencil buffers correctly.

Stencil + Depth Texture support was added in #9368 which came after #8596 but didn't close or reference the latter, although @Bryce-Summers reported that this PR fixed his CSG problem. Depth/Stencil management was also further enhanced by #9774.

I'll check again if there are any problems with using stencil + depth texture and will report back later.

UNSIGNED_INT_24_8_WEBGL enables only a 24-bit depth texture whereas float enables 32 bits, so it's not equivalent.

That's true. I can only speak for myself, but to me 24-bit depth appears to be sufficient.

@vanruesc (Contributor) commented Mar 4, 2020

The problem is that in WebGL 1, you can only use DepthTexture with THREE.UnsignedShortType or THREE.UnsignedIntType. This enables only 16 bits of precision, whereas RGBA depth packing enables 32 bits.

This flew over my head: why can't we use UnsignedIntType? https://jsfiddle.net/raqxznfj/

@Mugen87 (Collaborator) commented Mar 4, 2020

Stencil + Depth Texture support was added in #9368 which came after #8596 but didn't close or reference the latter, although @Bryce-Summers reported that this PR fixed his CSG problem.

Thanks for this bit. I suppose #8596 was just overlooked. Let's close it now.

@Mugen87 (Collaborator) commented Mar 4, 2020

why can't we use UnsignedIntType

From the spec:

As per the OpenGL ES spec, there is no guarantee that the OpenGL ES implementation will use the texture type to determine how to store the depth texture internally. It may choose to downsample the 32-bit depth values to 16-bit or even 24-bit.

Interesting! So it seems that, depending on the implementation, the precision you actually get with a given texture type can vary between platforms. However, using a floating point texture seems the most reliable way to achieve 32-bit depth precision.

@vanruesc (Contributor) commented Mar 4, 2020

I tested the Stencil Buffer + Depth Texture feature and can confirm that everything works as expected. I also tested UnsignedInt248Type vs UnsignedIntType with a depth of field effect and a near plane setting of 0.3 and couldn't discern a difference.

However, using a floating point texture seems the most reliable way to achieve 32-bit depth precision.

Good point; as long as WebGL 2 is supported on the device, 32-bit FloatType is guaranteed to be available. I feel like three shouldn't care about that, though. If a device gives you less precision than was legitimately asked for, it should just fail or perform poorly if it really wants to.

The spec also says that UNSIGNED_INT_24_8_WEBGL guarantees 24 bits for depth, which appears to be enough to combat the precision problem without having to rely on full WebGL 2 support.

I think UnsignedIntType is the way to go. And moving from WebGL 1 to WebGL 2 with UnsignedIntType would just work without having to switch to FloatType. Falling back to 24-bit also doesn't sound so bad.

@Mugen87 (Collaborator) commented Mar 4, 2020

I'm not sure about this statement and would prefer to do some tests with mobile devices first. According to other precision issues in this repo (e.g. mediump vs. highp), smartphones or tablets are the real troublemakers.

Right now, I tend to go for WebGL 2 and FloatType if it's available on the platform.

@vanruesc (Contributor) commented Mar 4, 2020

Sure, that sounds reasonable. My intention is just to provide another opinion on the matter.

@Mugen87 (Collaborator) commented Mar 24, 2020

The actual issue causing the mentioned artifacts is now well understood. The optimal parameterization with the best performance and quality depends on the specific use case. Closing for now, since the engine can't make this choice automatically for the application.

Mugen87 closed this as completed Mar 24, 2020