
Add support for reverse-z depth buffer #3539

Closed
roalyr opened this issue Nov 12, 2021 · 70 comments · Fixed by godotengine/godot#88328
Labels
breaks compat Proposal will inevitably break compatibility topic:rendering topic:3d
Milestone

Comments

@roalyr

roalyr commented Nov 12, 2021

Proposed reverse-z depth buffer solution:

Khasehemwy/godot@0a15540

Related issues

Issues related to extended z-far that I have encountered:
godotengine/godot#86275
roalyr/godot-for-3d-open-worlds#15

Describe the project you are working on

I am working on a space game, GDTLancer, which is designed around large-scale objects at large distances (of the same order of magnitude as the camera's maximum far-plane distance) that can be reached and interacted with: stellar bodies, megastructures and other constructions visible from far away. This means the camera far plane is set to a very great distance (up to 1e15-1e18 units, in my case).

Describe the problem or limitation you are having in your project

Large-scale spaces and their rendering are limited by floating-point precision issues. While artifacts like jitter can be tackled with a floating origin, z-fighting on faraway objects requires a solution that addresses how depth is rendered.
Since the above-mentioned objects are meant to be interacted with, a double-viewport approach is not viable.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

The initial solution was to implement a logarithmic depth buffer, since in the GLES backend we had to deal with the -1..+1 depth range.
After 4.x came out, we moved onto the Vulkan backend, where a better solution can be implemented: a reverse-z depth buffer:
https://developer.nvidia.com/content/depth-precision-visualized
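
For illustration only (a simplification of the article linked above, not engine code): reverse-z maps the near plane to depth 1.0 and the far plane to 0.0, so that the dense cluster of representable floating-point values near 0.0 lines up with the distant range where perspective depth otherwise loses almost all precision.

// Conceptual GLSL sketch of the remapping, assuming depth is already in the 0-1 range.
float to_reverse_z(float conventional_depth) {
	// Near plane: 0.0 -> 1.0, far plane: 1.0 -> 0.0.
	return 1.0 - conventional_depth;
}
// Paired with a floating-point depth buffer, the far range (now near 0.0) gets far
// more representable steps, compensating for the 1/z distribution of perspective depth.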

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

A workaround for a logarithmic depth buffer was used before (it breaks early-z optimization, depth-related effects and shadows).

A gdshader solution for a logarithmic depth buffer is as follows:

// Add this before your vertex shader.
// Edit "Fcoef" to adjust the desired view distance. A smaller value pushes the distance limit further out.
uniform float Fcoef = 0.001;
varying float gl_Position_z;

// Add this to your vertex shader.
void vertex()
{
	vec4 gl_Position = MODELVIEW_MATRIX * vec4(VERTEX, 1.0);
	gl_Position_z = gl_Position.z;
}

// Add this to your fragment shader.
void fragment()
{
	DEPTH = log2(max(1e-6, 1.0 - gl_Position_z)) * Fcoef;
}

If this enhancement will not be used often, can it be worked around with a few lines of script?

There is no simple way to work around this issue without compromising depth-related effects and shadow casting.

Is there a reason why this should be core and not an add-on in the asset library?

This feature has already been requested and will be requested again in the future, because it makes large open-world games feasible in Godot.
A reverse-z depth buffer is beneficial for all kinds of 3D projects and comes with no performance drawbacks.

@Calinou Calinou changed the title Logarithmic z-buffer Add support for logarithmic depth buffer Nov 12, 2021
@Calinou Calinou added this to the 4.x milestone Nov 12, 2021
@roalyr
Author

roalyr commented Nov 17, 2021

This solution works as long as you tweak every single spatial material in the game that depends on the depth buffer. It works for GLES3 but not GLES2 (gl_FragDepth is not exposed in GLES2).
A short video demonstration shows logarithmic depth and the absence of z-fighting (ignore the clipping at the end; the meshes shown in the video are flat, ~100 units thick and 10000 units across).

// For logarithmic depth buffer.
const float c = 0.001; // Tested at 500 M units.
varying vec4 gl_Position;

void vertex()
{
	// For logarithmic depth buffer.
	gl_Position = MODELVIEW_MATRIX * vec4(VERTEX, 1.0);
}

void fragment()
{
	// Logarithmic depth buffer.
	DEPTH = log2(max(1e-6, 1.0 - gl_Position.z)) * c;

	// Remaining fragment code goes here ...
}

Depth draw mode can be anything; depth testing must be enabled.

It works, but it breaks shaders that rely on reading and decoding depth into a linear value (see the .glsl files in the source code that reference depth). The following shaders should be inspected for misbehavior:
subsurf_scattering, ssao_minify, ssao_blur, ssr, effects_blur, cube_to_dp.
It also breaks distance fade in the spatial material, and that algorithm should be reviewed.

Decoding

A possible decoding formula for the above-mentioned shaders could be:
linear_depth = pow(2.0, (depth_log + 1.0) / c) - 1.0;
Instead of:
linear_depth = 2.0 * z_near * z_far / (z_far + z_near - depth * (z_far - z_near));
But I didn't test it, and it has to be checked (the signs and 1.0s in the formula might fit DirectX depth instead of OpenGL, or something like that).
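
For reference, a direct algebraic inverse of the encoding used above (DEPTH = log2(1.0 - view_z) * c, with view_z negative in front of the camera) would look like the untested sketch below; note it has no + 1.0 inside the exponent, unlike the formula quoted above.

// Untested sketch: inverse of DEPTH = log2(max(1e-6, 1.0 - view_z)) * c.
float decode_log_depth(float depth_log, float c) {
	// exp2(depth_log / c) recovers (1.0 - view_z); subtracting 1.0 gives the
	// linear distance in front of the camera.
	return exp2(depth_log / c) - 1.0;
}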

A possible Godot 3.x, 4.x implementation

As I see it, each shader and source file should be tweaked with a condition (an if statement on some settings flag) that switches between the normal and logarithmic depth buffer in every place that requires it.
It requires gl_FragDepth to be exposed and written, so it can be implemented for the GLES3 driver quite trivially (as long as one is careful to catch all instances and knows how to implement switches properly that can be linked to a project settings entry).
Not sure about Vulkan, but if it makes use of .glsl files then it should be doable there too?

@roalyr
Author

roalyr commented Nov 17, 2021

It may also be reasonable to allow a greater camera_far maximum value (say, 10M? 100M?) in that case, depending on the depth texture size. Some of the links above mention that 24 bits are enough for a planetary-scale logarithmic buffer, and 32 bits for cosmic scale (I think).

Alas, it is hard for me to judge how many things depend on camera_far and whether raising it would break anything else.

Why? Well, that way you can start making real space simulators like Orbiter, with everything truly to scale.

It is but a fanciful idea, but maybe it could be tested by adding a sub-option, when enabling logarithmic depth globally, to adjust the maximum camera_far cap.

@Zylann

Zylann commented Nov 18, 2021

a double-viewport approach is not viable.

I have a similar project with large scales and interactive planets, and this is something I tried. It actually worked without too much effort, and interaction was not a problem. One transparent viewport renders close objects without the background, while the other renders the rest plus the background. Both were then composed into one. This did not require duplicating the whole world; I only had to assign the same world to both viewports, and only one viewport actually had the nodes in it (similar to how split-screen is achieved).
The issues I encountered with this approach were mainly about depth-based post-processing (atmospheres), because some information (the depth buffers) gets lost in the final compositor, so each viewport needed to apply those effects, doubling their cost. There were also a few issues at the "frontier" between the two clip ranges, where a seam would sometimes become visible with MSAA, fog or transparent objects. I also did not measure performance, but I assume it was lower. I have turned off this technique for now.

Either way, I'm still interested in having an easy way to render far-away objects, because I still have Z-precision issues when rendering them.

@roalyr
Author

roalyr commented Nov 19, 2021

Unfortunately, double-viewport is not very viable on low-performance hardware.

@roalyr
Author

roalyr commented Nov 23, 2021

I have recompiled Godot with a 100M-unit far-plane limit, and all seems to work fine. Yes, one may want to implement some script-based culling, but it is working well.

Alas, the light processor must be adjusted for extreme far/near ranges due to godotengine/godot#55070 (comment) (see the workaround demo).

@roalyr
Author

roalyr commented Nov 23, 2021

In the shader code here, I have changed the constant to:

const float c = 0.001;

This way it works with any reasonable camera far value (tested with 500 M units).

@reduz
Member

reduz commented Feb 24, 2022

Some notes about this:

  • Writing to depth manually disables early fragment tests, which has a very significant impact on performance.
  • Even if you did this, if you have objects very far away they will run into floating point precision errors as-is. We would need to add better support for double precision in the vertex shader all around.

@Zylann

Zylann commented Feb 24, 2022

Even if you did this, if you have objects very far away they will run into floating point precision errors as-is. We would need to add better support for double precision in the vertex shader all around.

I thought objects that are very far away would actually not be bothered by imprecision in small values, because they are far? (Only Z-fighting between close surfaces on the far object is a problem for me, and that can be solved easily with a LOD merging those faces into one.)

@roalyr
Author

roalyr commented Mar 6, 2022

Vertex depth suffers from a lack of proper interpolation, which causes artifacts on large faces.
So far, the fragment shader implementation seems to be the most reliable. Tested at least within 0.01 - 1e15 units (roughly; I couldn't test further because things broke).

As far as I know there's no reliable way to do proper, fast vertex depth, and sources pointed out that a possible solution might involve adaptive tessellation, which may not be all that good in practice.

Single-precision fragment shader depth does its job quite well; I couldn't yet test where double precision might become necessary.

@roalyr
Author

roalyr commented Mar 6, 2022

Here is my latest implementation (3.x):
https://github.com/roalyr/godot-for-3d-open-worlds

@roalyr
Author

roalyr commented Mar 28, 2022

Will drop it here in case anyone can assist: roalyr/godot-for-3d-open-worlds#6

@KeyboardDanni

Why not simply reverse the Z? It should avoid the drawbacks associated with logarithmic Z while still improving Z-fighting considerably: https://developer.nvidia.com/content/depth-precision-visualized

I removed the log2 from the fragment code above and ended up with the following:

DEPTH = (max(1e-6, 1.0 - gl_Position.z)) * c;

This seems to work well for what I need. I'm not making a space game, but I still see Z-fighting from less than 300 m away without this method. As much as I want to use this, anything with the default shader will appear behind everything else, and I can't find a way to change said default for objects that don't have a material applied.

@Calinou
Member

Calinou commented Nov 24, 2022

I suppose reverse Z may be problematic for games that have models close to the viewer (such as first-person weapon models not rendered in a separate viewport).

@KeyboardDanni

KeyboardDanni commented Nov 24, 2022

I'd think that in most cases there's already more than enough Z-buffer precision for objects close to the camera with regular Z, so reversing Z wouldn't introduce significant issues.

It should probably be an option though, especially since it would likely break existing custom shaders.

@KeyboardDanni

So I decided to try this out, and it looks like there is indeed some precision loss for close objects. Here is a surface placed 0.001 m above another and rotated 0.1 degrees. Curious to see if increasing depth buffer precision can counteract this.

With normal Z:
[Screenshot: NormalZClosePrecision]

With reversed Z:
[Screenshot: ReverseZClosePrecision]

@KeyboardDanni

Another resource on reversed Z: https://thxforthefish.com/posts/reverse_z/

A couple precision-related questions I have:

  • How much of the near-camera precision woes here are caused by int/float conversions between the shader and the depth buffer format?
  • Is it possible to increase depth buffer precision in Godot, maybe to 32-bit float?

@roalyr
Author

roalyr commented Nov 30, 2022

I have worked around the FPS drop in the log-depth case by making some shaders use vertex lighting.
I've looked into reversed-z, but the implementation seems rather tricky, and I am not sure it will be enough for my needs.

Maybe a better way to implement and optimize all of this exists for Vulkan, since it has a 0.0 - 1.0 depth range; with reversed-z and more precision it could be a good compromise solution.

But log depth is still a very valid option, despite its drawbacks.
If possible, it would be very handy to have depth texture encoding variants as a project rendering option, but that would require proper decoders for all the FX shaders and shadow casting.

@Zylann

Zylann commented Nov 30, 2022

Is it alternatively possible to have a 32-bit depth buffer, instead of 24-bit depth plus stencil packed into the same buffer? My game doesn't need billions of kilometers, but it is still large enough that distant objects flicker a lot, so a bit more precision (not too much) would be welcome.
I've noticed 32-bit depth was an option in the renderer's enums, but I haven't figured out how it can be enabled. I made a bold attempt which resulted in a black screen (I likely wasn't doing it properly).

@Calinou
Member

Calinou commented Nov 30, 2022

Is it alternatively possible to have 32-bit depth buffer, instead of 24-bit+stencil packed into the same buffer?

Last time I checked, 32-bit depth buffer is poorly supported on some GPUs. I'd check integrated graphics and mobile especially, which tend to default to a 16-bit depth buffer unless forced to use a 24-bit depth buffer.

@Zylann

Zylann commented Nov 30, 2022

Well, I would not make such a game for mobile or low-end devices anyway (large scale isn't the only reason), so there is that.

@KeyboardDanni

I think serious consideration should be made toward having an option to prefer a 32-bit depth buffer, at the very least.

Regarding hardware support:

In fact, I can't think of a reason why we shouldn't make the depth buffer 32-bit by default on desktop. If support goes back as far as Direct3D 9, one has to wonder why anyone would stick with 24-bit, given that depth precision issues appear only about 200 m away, which is a smaller range than a lot of games handled even in the 6th-gen console era.

This problem is the one remaining wart (aside from shader compilation) that affects general 3D usage in Godot, and it is so easy to fix for most use cases. I don't see why we don't just use 32-bit.

@KeyboardDanni

KeyboardDanni commented Dec 4, 2022

Also, here is DXVK falling back to 32-bit depth buffer over 24-bit because 24-bit is less supported on some AMD configurations: https://github.com/doitsujin/dxvk/blob/master/src/dxgi/dxgi_format.cpp#L852

@Zireael07

90% hardware support

Is this true for Android too? Or is this just laptop/desktop PCs?

@KeyboardDanni

It's the overall total for all listed platforms (Windows, macOS, Linux, Android, iOS). For D32_SFLOAT (i.e. 32-bit with no stencil), Windows, macOS, Linux, and iOS have 99-100% support. Android is a lot less clear-cut at 82%. Those numbers in more detail:

Windows:

macOS:

  • D24_UNORM_S8_UINT: 52%
  • D32_SFLOAT: 100%
  • D32_SFLOAT_S8_UINT: 100%

Linux:

iOS:

  • D24_UNORM_S8_UINT: (no data)
  • D32_SFLOAT: 100%
  • D32_SFLOAT_S8_UINT: 100%

Android:

From this data, it looks like 32-bit depth buffer should be default on desktop, whereas mobile might be better off defaulting to 24-bit with 32-bit fallback.

@Ansraer

Ansraer commented Dec 7, 2022

Hey @cosmicchipsocket, did you ever get around to trying reversed z with a 32-bit depth buffer? I personally would really prefer rev. z over a log depth buffer. IMO it is an elegant solution that nicely distributes the available precision between the near and far plane while still being easily understandable for inexperienced shader authors.

@Khasehemwy

Great question! In addition, is reversed-z supported on forward+ now?

@roalyr
Author

roalyr commented Dec 30, 2023

AFAIK no one has implemented anything in the source code yet.

But you can implement log depth in a .gdshader right in the project, so testing a reversed z-buffer can probably be done like that too?

@clayjohn
Member

clayjohn commented Jan 2, 2024

We should implement reverse-z as @Ansraer describes above. This is pretty easy to do in the RD renderers, but the OpenGL renderer won't be able to benefit. For the RD renderer it shouldn't be too difficult to implement, but it will require changes to a few systems:

  1. Switch to 32-bit depth precision by default (if we haven't already)
  2. Reverse the projection matrix (using the "correction matrix" which already converts to the 0-1 range; a rough sketch follows at the end of this comment)
  3. Clear depth to 0 instead of 1
  4. Flip the behaviour of the depth tests (LESS becomes GREATER and vice versa)
  5. Adjust the shader compiler to handle reverse-z without breaking compatibility. To do so, we need to add compatibility functions when a user reads from the depth texture, or when the user writes to DEPTH manually.

For the compatibility backend, we can just leave it as non-reversed-z: the extension needed to benefit from reversed-z isn't well supported, and since what we expose to users is non-reversed-z anyway, there is no point in reversing it just for consistency.
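
A rough sketch of what points 2-4 amount to (illustrative GLSL-style code, not the actual engine implementation): the correction matrix, which already remaps clip space to the 0-1 depth range, gains an extra flip so that depth ends up as 1 - z, while the clear value and compare operator are inverted to match.

// Illustrative only. Applied on top of a projection whose output depth is
// already in the 0-1 range: z' = w - z, so z'/w = 1.0 - z/w after the divide.
mat4 reverse_z_correction() {
	return mat4(
		vec4(1.0, 0.0,  0.0, 0.0),
		vec4(0.0, 1.0,  0.0, 0.0),
		vec4(0.0, 0.0, -1.0, 0.0),
		vec4(0.0, 0.0,  1.0, 1.0));
}
// On the pipeline side (pseudocode):
//   depth clear value: 1.0 -> 0.0
//   depth compare op:  LESS (or LESS_OR_EQUAL) -> GREATER (or GREATER_OR_EQUAL)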

@Khasehemwy

Thanks @clayjohn for your help! I implemented reversed-z engine support (Khasehemwy/godot@0a15540), which can be turned on in the editor's Project Settings - Renderer - Reversed-z. Of course, there are probably many flaws; I hope this helps some people.

@roalyr
Author

roalyr commented Jan 5, 2024

Thanks @clayjohn for your help! I implemented reversed-z engine support (Khasehemwy/godot@0a15540), which can be turned on in the editor's Project Settings - Renderer - Reversed-z. Of course, there are probably many flaws; I hope this helps some people.

Does it function already? Can you share a binary to try it out? Does it include the Mobile renderer too?

@Khasehemwy

Thanks @clayjohn for your help! I implemented reversed-z engine support (Khasehemwy/godot@0a15540), which can be turned on in the editor's Project Settings - Renderer - Reversed-z. Of course, there are probably many flaws; I hope this helps some people.

Does it function already? Can you share a binary to try it out? Does it include the Mobile renderer too?

I have tested some samples locally and there are no problems. When objects are far away the effect is really good and there are no depth-accuracy problems. However, it is likely to be imperfect. For example, if a built-in shader compares NDC depths, the comparison needs to be flipped in the reversed-z case (such as > to <).

There are binaries here (https://github.com/Khasehemwy/godot/releases/tag/reversed-z), but I'm not sure if they can be run elsewhere. I think it would be better to compile the source code locally.

Theoretically, the reversed-z I implemented will be effective when using Vulkan (it is written against the Vulkan code). I am not sure what the difference is between Forward Mobile and Forward+; I haven't tried Mobile yet.

@roalyr
Author

roalyr commented Jan 6, 2024

@Khasehemwy this is from the dev chat, rendering section:

Ansraer 1:05 PM
I personally think that reversed depth is the way to go on modern hardware, especially since it has zero cost. Glancing at the linked commit on my phone, I would say that the C++ part looks fine: it properly switches the clear color and comparison ops, and ensures a float format is used. What I am a bit more concerned about are our post-processing effects. Anything that tries to reconstruct depth will need to be adjusted.

I am also not happy with the fact that this is another ProjectSetting people need to find first. If we implement this for every renderer (even compat) we could just have this always enabled. We wouldn't see any performance benefit in OpenGL, but at least it would be consistent and as performant as possible on every type of hardware.

However, if we want to get this merged it will need to happen soon. Right now it is very tricky to write your own post-processing shaders that access depth, so changing the depth buffer should barely break any user shaders. However, that will change once the compositor and hooks have been merged.
Ideally you (or the original author) could create a PR for the feature. I would be happy to review it properly and promote it at the rendering meeting next week.

@roalyr
Author

roalyr commented Jan 6, 2024

I would appreciate it if you made a PR.
Keep the option setting in, because it would be more useful not to have it on by default.

@roalyr
Author

roalyr commented Jan 6, 2024

Also adding the issues related to extended z-far that I have encountered so far, to keep things linked:
godotengine/godot#86275
roalyr/godot-for-3d-open-worlds#15

I have also edited the issue to reflect the recent findings.

@roalyr roalyr changed the title Add support for logarithmic depth buffer Add support for logarithmic or reverse-z depth buffer Jan 6, 2024
@roalyr roalyr changed the title Add support for logarithmic or reverse-z depth buffer Add support for reverse-z depth buffer Jan 6, 2024
@Khasehemwy

@Khasehemwy this is from the dev chat, rendering section:

Ansraer 1:05 PM
I personally think that reversed depth is the way to go on modern hardware, especially since it has zero cost. Glancing at the linked commit on my phone, I would say that the C++ part looks fine: it properly switches the clear color and comparison ops, and ensures a float format is used. What I am a bit more concerned about are our post-processing effects. Anything that tries to reconstruct depth will need to be adjusted.

I am also not happy with the fact that this is another ProjectSetting people need to find first. If we implement this for every renderer (even compat) we could just have this always enabled. We wouldn't see any performance benefit in OpenGL, but at least it would be consistent and as performant as possible on every type of hardware.

However, if we want to get this merged it will need to happen soon. Right now it is very tricky to write your own post-processing shaders that access depth, so changing the depth buffer should barely break any user shaders. However, that will change once the compositor and hooks have been merged.
Ideally you (or the original author) could create a PR for the feature. I would be happy to review it properly and promote it at the rendering meeting next week.

It would be great if Godot could support reversed-z.

Using reversed-z in all APIs is a good idea, as it simplifies handling comparisons between different z values.

I plan to add reversed-z support to OpenGL and then make a PR. But no changes will be made to the shader for the time being.

Simply adding support for reversed-z may not take too much time, but I probably won't be able to change all the shaders in a short time, since that requires a lot of changes.

@clayjohn
Member

clayjohn commented Jan 7, 2024

I plan to add reversed-z support to OpenGL and then make a PR. But no changes will be made to the shader for the time being.

There is really no point in adding it to the OpenGL backend. I would just leave it alone and save the effort.

@Khasehemwy

I plan to add reversed-z support to OpenGL and then make a PR. But no changes will be made to the shader for the time being.

There is really no point in adding it to the OpenGL backend. I would just leave it alone and save the effort.

OpenGL does not gain any improvement from reversed-z, and for general shaders, regardless of whether OpenGL supports reversed-z, the two cases (reversed-z on or off) need to be handled separately. But for a project that uses reversed-z from the beginning, the user's custom shaders never need to care about the non-reversed-z case.

Okay, let me first move the low-level Vulkan changes into the RD layer, so that I don't have to change DX12 separately in the future.

@clayjohn
Member

clayjohn commented Jan 7, 2024

Take a step back here. From the user's perspective nothing should change, even in the RD renderer. We aren't going to break shader compatibility.

This is why I said above that we need to insert a compatibility function into the shader compiler. From the user's perspective we will still be using normal depth. Therefore we can continue using normal depth in the compatibility renderer.

It is extremely important that we maintain compatibility when adding this.

On another note, I don't think any changes to RD, Vulkan, or D3D12 should be necessary here. You should be able to do everything within the renderer_rd classes and the shader compiler. If you are touching Vulkan or D3D12 code, then you have gone too far.

@Khasehemwy

It is extremely important that we maintain compatibility when adding this.

I have no idea how to maintain compatibility; I can only think of adding a switch in ProjectSettings to turn it on or off.

I think we have no way of knowing what operations the user has done with depth in their shader, so the user can only handle the reversed-z situation themselves.
For example, if the depth texture is automatically converted to a normal z value when sampling, then the fragment's own z value is still difficult to handle, because the user may perform the matrix transformation themselves. Moreover, users may also use their own textures as depth, which is difficult to convert automatically when compiling the code. And if the user has to add an identifier to mark something as depth anyway, it feels better to just handle reversed-z directly.

For compatibility, all previous projects can keep working with reversed-z turned off (those projects should not require reversed-z). New projects can be designed around reversed-z from the beginning.

@Ansraer

Ansraer commented Jan 7, 2024

Yeah, I have to agree with Khasehemwy on this, clay. There is no way we can accurately detect when and how the user accesses depth. And even if we could, we really don't want to. Besides, if the engine automatically un-reverses depth before user code, we would lose the additional precision, which means any user code will produce artifacts.

I think we really should just rip the band-aid off and change it. Trying to avoid compat-breaking changes is a noble principle, but in some cases it just generates more technical debt and results in too high a cost for new users to be worth it.
If we leave it optional, I can guarantee you that most projects will stick with the default option, which means that going forward new assets and shaders will continue to be written for non-reversed depth, which would make the entire situation even worse.

And we really need reverse depth, even for non-double-precision builds. It will probably also solve some z-fighting and precision issues that can already happen at medium distances.

@clayjohn
Member

clayjohn commented Jan 7, 2024

Yeah, I have to agree with Khasehemwy on this, clay. There is no way we can accurately detect when and how the user accesses depth.

It's really easy actually. We already detect it in our shader precompiler, as we need to convert the UV coords to 3 channels when using XR. All we have to do is insert a wrapper function around all calls to texture(DEPTH_TEXTURE).
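
For example (a hypothetical sketch; the helper name and exact rewrite are made up, not Godot's actual generated code), the compiler could convert every depth read back to conventional depth so user shaders see no difference:

// Hypothetical helper injected by the shader compiler.
float __convert_depth_compat(float stored_depth) {
	// The buffer stores reverse-z (near = 1.0, far = 0.0);
	// hand conventional depth (near = 0.0, far = 1.0) back to user code.
	return 1.0 - stored_depth;
}
// A user-side read such as texture(DEPTH_TEXTURE, uv).r would then be rewritten
// to __convert_depth_compat(texture(DEPTH_TEXTURE, uv).r) during compilation.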

To be clear, compatibility breakage is not an option at this stage in our release cycle. Any solution we go with needs to maintain backwards compatibility.

At most, I would add a render mode to reverse depth in user code for users that need to maintain best precision within their shader.

@roalyr
Author

roalyr commented Jan 7, 2024

So long as maximum precision option is there - I am absolutely happy!

@KeyboardDanni

The big concern I have with a compatibility layer is that this could result in unexpected behavior on the part of shader developers. I also don't know if having an additional level of indirection would hurt shader performance. Plus, I'm not sure what the implications would be for the future (would we remove the band-aid in the next major version of Godot? How would we handle that?)

Personally I'd like to see this as a project setting, maybe defaulting to reverse Z for new projects and normal Z for migrated ones. This could cause issues for existing resources like older material on Godot Shaders, though. Also, I feel like the project settings are already quite crowded with tweakables.

@Khasehemwy

To be clear, compatibility breakage is not an option at this stage in our release cycle. Any solution we go with needs to maintain backwards compatibility.

Maintaining compatibility essentially means that when reversed-z is in use, all previous shaders and plug-ins can still be used, right? I think this will be difficult.

If, when using reversed-z, every texture(DEPTH_TEXTURE) call is automatically converted to normal z in the precompiler, then the fragment's own z value also needs to be converted. For that, we could only precompile the built-in variables (such as VERTEX) to turn reversed-z into normal z.
But if a user-defined varying (such as clip_pos) is passed from the vertex shader to the fragment shader, it will be difficult to handle. In that case the user derives clip_pos from PROJECTION_MATRIX, so the z value obtained is reversed, while texture(DEPTH_TEXTURE) returns normal z.
If we change the PROJECTION_MATRIX exposed in the shader to non-reversed-z while the underlying code uses the reversed-z matrix, that does not feel like a good idea either.
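
To illustrate the mismatch (a made-up example in Godot 4 shader syntax, assuming such an automatic conversion of depth reads were in place):

shader_type spatial;

uniform sampler2D depth_tex : hint_depth_texture, filter_nearest;
varying vec4 clip_pos;

void vertex() {
	// The user builds the clip-space position themselves, so with a reversed
	// projection, clip_pos.z / clip_pos.w is reverse-z (near = 1.0, far = 0.0).
	clip_pos = PROJECTION_MATRIX * MODELVIEW_MATRIX * vec4(VERTEX, 1.0);
}

void fragment() {
	// If the compiler silently converted this read back to conventional depth
	// (near = 0.0, far = 1.0), the two values would no longer be comparable...
	float scene_depth = texture(depth_tex, SCREEN_UV).r;
	// ...and a comparison like this would silently break.
	bool occluded = (clip_pos.z / clip_pos.w) > scene_depth;
	ALBEDO = occluded ? vec3(1.0, 0.0, 0.0) : vec3(0.0, 1.0, 0.0);
}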


The big concern I have with a compatibility layer is that this could result in unexpected behavior on the part of shader developers.

I also think it would be confusing to find out that the depth texture is reversed-z while what I compute in the shader is normal z.

Maybe we can turn reversed-z off by default, since it is only needed in larger scenes, which is a relatively special requirement. Shader developers would then need to handle both situations.

@Ansraer

Ansraer commented Jan 8, 2024

I agree that I can't see any way to reliably implement some kind of auto-conversion in the shader parser. While it should be fairly simple to detect basic use cases and convert them, there are too many possible scenarios for us to catch all of them. And you are totally correct that this would also impact the fragment position and the projection matrix. I spent some time thinking about this while playing with the vertex shader and really can't come up with a solution I am happy with.

Turning off reverse depth in non-double builds also isn't an option. The more uniform depth precision means we also get less z-fighting and more stable precision even at mid distance, so it would be a definite improvement for small to mid-sized scenes as well.

I have added this to tomorrow's render meeting schedule; hopefully we can reach some kind of agreement then.

@clayjohn
Member

We discussed this earlier in the week in the regular rendering team meeting. The production team also discussed it today in our weekly meeting.

Here are the main results from our discussion:

  1. We can't ensure compatibility in all cases
  2. We can ensure partial compatibility, but that will introduce weird differences in behaviour (especially once we merge the Render hooks PR and allow users to insert custom compute passes into the middle of rendering)
  3. We shouldn't add a project setting as it will fragment resources online (i.e. shaders will become incompatible if the project setting is changed)
  4. As we expose more ability to write custom rendering code the above points are going to become more problematic

The rendering team ultimately concluded that we should either break compatibility now and switch to always using reverse-z, or wait until Godot 5 and break compatibility then. The production team agreed that we could go ahead and break compatibility.

Therefore, the next steps for this proposal are to create a PR implementing Reverse-Z:

  1. without compatibility code
  2. without a project setting
  3. in all rendering backends so the compatibility renderer is consistent with the others.

@roalyr
Author

roalyr commented Jan 13, 2024

Thank you in advance!

@AThousandShips AThousandShips added the breaks compat Proposal will inevitably break compatibility label Feb 14, 2024
@akien-mga akien-mga modified the milestones: 4.x, 4.3 Apr 4, 2024
@roalyr
Author

roalyr commented Apr 4, 2024

Thank you all!
