Add support for reverse-z depth buffer #3539
Comments
This is a solution that makes it work, as long as you tweak every single spatial material in the game that depends on the depth buffer. Works for GLES3; doesn't work for GLES2 (no gl_FragDepth is exposed in GLES2).
Depth draw can be anything; depth testing must be enabled. It works, but it breaks shaders that rely on reading and decoding depth into a linear value (see the .glsl files in the source code that contain the decoding formula). A possible Godot 3.x / 4.x implementation: as I see it, each shader and source-code file should be tweaked with a condition (an IF statement for some setting flag) that switches between the normal and logarithmic depth buffer in every instance that requires it.
It may also be reasonable to allow a greater camera far limit, though alas it is hard for me to judge how many things depend on it. Why? Well, that way you could start making real space simulators like Orbiter, with everything truly up to scale. It is but a fanciful idea, but maybe it could be the subject of a test, say by adding a sub-option, when enabling logarithmic depth globally, to adjust the maximum far value.
I have a similar project with large scales and interactive planets, and this is something I tried. It actually worked without too much effort; interaction was not a problem. One transparent viewport renders close stuff without the background, while the other renders the rest and the background. Both were then composed into one. This did not require duplicating the whole world: I only had to assign the same world to both viewports, and only one viewport actually had the nodes in it (similar to how split-screen is achieved). Either way, I'm still interested in having an easy way to render far-away stuff, because I still have Z-precision issues when rendering it.
Unfortunately, double-viewport is not very viable on low-performance hardware.
I have re-compiled Godot with a 100M-unit far plane limit, and all seems to work fine. Yes, one may want to implement some script-based culling, but it is working well. Alas, the light processor must be adjusted for extreme far/near ranges due to godotengine/godot#55070 (comment) (see the workaround demo).
In the shader code here, I have changed this:
This way it will work with any reasonable camera far value (tested with 500 M units).
Some notes about this:
I thought objects that are very far away would not actually be bothered by imprecision in small values, precisely because they are far? (Only Z-fighting between close surfaces on a far object is a problem for me, but that can be solved easily with a LOD merging those faces into one.)
Vertex depth suffers from a lack of proper interpolation, which causes artifacts on large faces. As far as I know there's no reliable way to do proper fast vertex depth, and sources pointed out that a possible solution might involve adaptive tessellation, which, in fact, may not be all that good. Single-precision fragment-shader depth does its job quite well; I couldn't test and see where double precision might become necessary as of yet.
Here is my latest implementation (3.x):
Will drop it here in case anyone can assist: roalyr/godot-for-3d-open-worlds#6
Why not simply reverse the Z? It should avoid the drawbacks associated with logarithmic Z while still improving Z-fighting considerably: https://developer.nvidia.com/content/depth-precision-visualized I removed the
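For anyone weighing the two approaches, the precision benefit of reversed-z is easy to check numerically. Below is an illustrative sketch (not Godot code; the near/far values are made up, and a float32 round-trip stands in for a floating-point depth buffer):

```python
import struct

def f32(x):
    """Round to the nearest 32-bit float, emulating a float depth buffer."""
    return struct.unpack("f", struct.pack("f", x))[0]

def standard_depth(z, near, far):
    # Conventional [0, 1] depth mapping: near plane -> 0, far plane -> 1.
    return far * (z - near) / (z * (far - near))

def reversed_depth(z, near, far):
    # Reversed-z mapping: near plane -> 1, far plane -> 0.
    return near * (far - z) / (z * (far - near))

near, far = 0.05, 1.0e7

# Two surfaces 0.1% apart at 1,000,000 units: standard depth collapses them
# into the same 32-bit value (z-fighting), reversed depth keeps them distinct.
a, b = 1.0e6, 1.001e6
print(f32(standard_depth(a, near, far)) == f32(standard_depth(b, near, far)))  # True
print(f32(reversed_depth(a, near, far)) == f32(reversed_depth(b, near, far)))  # False
```

The intuition is that floating-point formats already concentrate precision near zero, so mapping the far plane to 0.0 spends that precision where the projection would otherwise waste it.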
This seems to work well for what I need. I'm not making a space game, but I still see Z-fighting from less than 300 m away without this method. As much as I want to use this, anything with the default shader will appear in the back, and I can't find a way to change said default for when an object doesn't have a material applied.
I suppose reverse Z may be problematic for games that have models close to the viewer (such as first-person weapon models not rendered in a separate viewport).
I'd think that for most cases there's already more than enough Z-buffer precision for objects close to the camera with regular Z, so reversing Z wouldn't introduce significant issues. It should probably be an option, though, especially since it would likely break existing custom shaders.
Another resource on reversed Z: https://thxforthefish.com/posts/reverse_z/ A couple precision-related questions I have:
I have worked around the FPS drop in the case of log depth by making some shaders use vertex lighting. Maybe there is a better way to implement and optimize it all for Vulkan, since it has a depth range of 0.0 - 1.0, and maybe with reversed-z and more precision it could be a good compromise solution. But log depth is still a very valid option, despite its drawbacks.
Is it alternatively possible to have a 32-bit depth buffer, instead of 24-bit plus stencil packed into the same buffer? My game doesn't need billions of kilometers, but is still large enough that distant objects flicker a lot, so a bit more precision (not too much) would be welcome.
Last time I checked, a 32-bit depth buffer is poorly supported on some GPUs. I'd check integrated graphics and mobile especially, which tend to default to a 16-bit depth buffer unless forced to use a 24-bit one.
Well, I would not make such a game for mobile or low-end devices anyway (large scale isn't the only reason), so there is that.
I think serious consideration should be made toward having an option to prefer a 32-bit depth buffer, at the very least. Regarding hardware support:
In fact, I can't think of a reason why we shouldn't make the depth buffer 32-bit by default on desktop. If support goes back as far as Direct3D 9, one has to wonder why anyone would stick with 24-bit, given that depth precision issues appear only about 200 m away, which is a shorter distance than a lot of games handled even in the 6th console generation. This problem is the one remaining wart (aside from shader compilation) that affects general 3D usage in Godot, and it is so easy to fix for most use cases. I don't see why we don't just use 32-bit.
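The "~200 m" figure can be sanity-checked with a quick calculation. This sketch assumes typical camera values (near 0.05, far 4000) and a 24-bit normalized depth buffer; the numbers are illustrative, not taken from the engine:

```python
# With the conventional mapping d(z) = far*(z - near) / (z*(far - near)),
# the derivative is dd/dz = far*near / (z^2 * (far - near)), so one depth
# LSB (2^-24 for a 24-bit buffer) at view distance z spans roughly:
#   dz ~= 2^-24 * z^2 * (far - near) / (far * near)
near, far = 0.05, 4000.0  # assumed camera values, not engine defaults
z = 200.0
dz = 2.0 ** -24 * z * z * (far - near) / (far * near)
print(round(dz, 3))  # -> 0.048, i.e. about 5 cm
```

So at 200 m, surfaces closer together than roughly 5 cm land on the same 24-bit depth value and z-fight, which matches the complaint above.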
Also, here is DXVK falling back to a 32-bit depth buffer over 24-bit, because 24-bit is less supported on some AMD configurations: https://github.com/doitsujin/dxvk/blob/master/src/dxgi/dxgi_format.cpp#L852
Is this true for Android too? Or is this just laptop/desktop PCs?
It's the overall total for all listed platforms (Windows, macOS, Linux, Android, iOS); there are also per-platform breakdowns for Windows, macOS, Linux, iOS, and Android. From this data, it looks like a 32-bit depth buffer should be the default on desktop, whereas mobile might be better off defaulting to 24-bit with a 32-bit fallback.
Hey @cosmicchipsocket, did you ever get around to trying reversed z with a 32-bit depth buffer? I personally would really prefer reversed z over a log depth buffer. IMO it is an elegant solution that nicely distributes the available precision between the near and far planes while still being easily understandable for inexperienced shader authors.
Great question! In addition, is reversed-z supported on Forward+ now?
AFAIK no one has implemented anything in the source code yet. But you can implement log depth in a .gdshader right in the project, so testing a reversed depth buffer could probably be prototyped like that too?
We should implement reverse-z as @Ansraer describes above. This is pretty easy to do in the RD renderers, but the OpenGL renderer won't be able to benefit. For the RD renderer it shouldn't be too difficult to implement, but it will require changes to a few systems:
For the compatibility backend, we can just leave it as non-reversed-z: the extension needed to benefit from reversed-z isn't well supported, and since what we expose to users is non-reversed-z, there is no point in reversing it just for consistency.
Thanks @clayjohn for your help! I implemented reversed-z engine support (Khasehemwy/godot@0a15540), which can be turned on in the editor's
Does it function already? Can you share a binary to try it out? Does it include the Mobile renderer too?
I have tested some samples locally and there is no problem. When the object is far away the result is really good, and there is no depth-accuracy problem. However, it is likely to be imperfect. For example, if there is a depth comparison of NDC values in a built-in shader, it needs to be changed in the case of reversed-z. There are binaries here (https://github.com/Khasehemwy/godot/releases/tag/reversed-z), but I'm not sure if they can be run elsewhere; I think it would be better to compile the source code locally. Theoretically, the reversed-z I implemented will be effective when using Vulkan (it is written for the Vulkan code). I am not sure what the difference is between Forward Mobile and Forward+; I haven't tried Mobile yet.
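To illustrate the kind of change meant here: any built-in check against the far plane in NDC has to flip direction under reversed-z. A toy sketch (the thresholds and function name are made up for illustration):

```python
def is_far_plane(ndc_z: float, reversed_z: bool) -> bool:
    # Forward depth: the far plane/background sits at ndc z == 1.0, so checks
    # look like `ndc.z >= threshold`. Reversed-z puts the far plane at 0.0,
    # so the same check must become `ndc.z <= (1.0 - threshold)`.
    return ndc_z <= 0.0001 if reversed_z else ndc_z >= 0.9999

print(is_far_plane(1.0, reversed_z=False))  # True
print(is_far_plane(0.0, reversed_z=True))   # True
print(is_far_plane(1.0, reversed_z=True))   # False
```

Every such hard-coded comparison in the built-in shaders would need the same treatment, which is why a global switch touches more than just the projection matrix.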
@Khasehemwy this is from the dev chat, rendering section:
I would appreciate it if you made a PR.
Also adding the issues related to extended z-far that I have encountered so far, to keep things linked. I have also edited the issue to reflect the recent findings.
It would be great if Godot could support reversed-z. It is a good idea to use reversed-z in all APIs, which simplifies handling comparisons of different z values. I plan to add reversed-z support to OpenGL and then make a PR, but no changes will be made to the shaders for the time being. Simply adding support for reversed-z may not take too much time, but I probably won't be able to change all the shaders in a short time; that requires a lot of changes.
There is really no point in adding it to the OpenGL backend. I would just leave it alone and save the effort.
OpenGL does not gain improvements from reversed-z, but for general shaders, regardless of whether OpenGL supports reversed-z or not, the two cases of reversed-z being on or off need to be handled separately. And for a project that uses reversed-z from the beginning, the user's custom shaders are saved from having to care about the non-reversed-z situation. Okay, let me first move the underlying Vulkan part of the code into RD, so that I don't have to change DX12 in the future.
Take a step back here. From the user's perspective nothing should change, even in the RD renderer. We aren't going to break shader compatibility. This is why I said above that we need to insert a compatibility function into the shader compiler. From the user's perspective we will still be using normal depth; therefore we can continue using normal depth in the compatibility renderer. It is extremely important that we maintain compatibility when adding this. On another note, I don't think any changes to RD, Vulkan, or D3D12 should be necessary here. You should be able to do everything within the renderer_rd classes and the shader compiler. If you are touching Vulkan or D3D12 code, then you have gone too far.
I have no idea how to maintain compatibility. I can only think of adding a switch in ProjectSettings to turn it on or off. I think we have no way of knowing what operations the user has done with depth in a shader; the user can only handle the reversed-z situation themselves. For compatibility, all previous projects can be used after turning off reversed-z (these previous projects should not require reversed-z). For new projects, we can design around reversed-z from the beginning.
Yeah, I have to agree with Khasehemwy on this, clay. There is no way we can accurately detect when and how the user accesses depth. And even if we could, we really don't want to. Besides, if the engine automatically un-inverts depth before user code, we would lose the additional precision, which means that any user code will produce artifacts. I think we really should just rip that band-aid off and change it. Trying to avoid compat-breaking changes is a noble principle, but in some cases it just generates more technical debt and results in too high a cost for new users to be worth it. And we really need reversed depth, even for non-double-precision builds. It will probably also solve some z-fighting and precision issues that can already happen at medium distance.
It's really easy, actually. We already detect this in our shader precompiler, as we need to convert the UV coords to 3 channels when using XR. All we have to do is insert a wrapper function around all calls to
To be clear, compatibility breakage is not an option at this stage in our release cycle. Any solution we go with needs to maintain backwards compatibility. At most, I would add a render mode to reverse depth in user code for users that need to maintain the best precision within their shader.
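As a rough illustration of the precompiler idea (this is not the actual Godot shader compiler; the wrapper name and the regex are invented for the sketch), a source-to-source pass could wrap raw depth-texture reads so shaders written for forward depth keep working:

```python
import re

# Hypothetical sketch: wrap raw depth-texture reads so user shaders written
# for forward depth keep working when the engine stores reversed-z.
# `__remap_depth` is a made-up name for the inserted compatibility function.
def wrap_depth_reads(shader_src: str) -> str:
    pattern = r"texture\(\s*DEPTH_TEXTURE\s*,[^;]*?\)\.[xr]"
    return re.sub(pattern, lambda m: "__remap_depth(" + m.group(0) + ")", shader_src)

src = "float d = texture(DEPTH_TEXTURE, SCREEN_UV).x;"
print(wrap_depth_reads(src))
# float d = __remap_depth(texture(DEPTH_TEXTURE, SCREEN_UV).x);
```

A real implementation would work on the parsed shader AST rather than text, but the principle is the same: intercept the read, convert back to the depth convention the user's code expects.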
So long as the maximum-precision option is there, I am absolutely happy!
The big concern I have with a compatibility layer is that it could result in unexpected behavior on the part of shader developers. I also don't know whether having an additional level of redirection would hurt shader performance. Plus, I'm not sure what the implications would be for the future (would we remove the band-aid in the next major version of Godot? How would we handle that?). Personally I'd like to see this as a project setting, maybe defaulting to reverse Z for new projects and normal Z for migrated ones. This could cause issues for existing resources like older stuff on Godot Shaders, though. Also, I feel like project settings are already quite crowded with tweakables.
Maintaining compatibility actually means that when using reversed-z, all previous shaders and plug-ins can still be used, right? I think this will be difficult. If, when using reversed-z, all
I also think it would be confusing to find out that the depth texture is reversed-z while what I get in the shader is normal z. Maybe we can turn reversed-z off by default, because reversed-z is only needed in larger scenes, which is a relatively special requirement. Shader developers would then need to deal with both situations.
I agree that I can't see any way to reliably implement some kind of auto-conversion in the shader parser. While it should be fairly simple to detect basic use cases and convert them, there are too many possible scenarios for us to catch all of them. And you are totally correct that this would also impact the fragment position and the projection matrix. I spent some time thinking about this while playing with the vertex shader and really can't come up with a solution I am happy with. Turning off reversed depth in non-double builds also isn't an option: the more uniform depth precision means we also get less z-fighting and more stable precision even at mid distance, so it would be a definite improvement for small- to mid-sized scenes as well. I have added this to tomorrow's render meeting schedule; hopefully we can reach some kind of agreement then.
We discussed this earlier in the week in the regular rendering team meeting. The production team also discussed it today in our weekly meeting. Here are the main results from our discussion:
The rendering team ultimately concluded that we should either break compatibility now and switch to always using reverse-z, or wait until Godot 5 and break compatibility then. The production team agreed that we could go ahead and break compatibility. Therefore, the next steps for this proposal are to create a PR implementing reverse-z:
Thank you in advance! |
Thank you all! |
Reverse-z buffer solution proposed
Khasehemwy/godot@0a15540
Related issues
Issues related to extended z-far that I have encountered:
godotengine/godot#86275
roalyr/godot-for-3d-open-worlds#15
Describe the project you are working on
I am working on a space game, GDTLancer, which is designed to have large-scale objects at large distances (of the same order of magnitude as the maximum camera far-plane distance) which can be reached and interacted with, such as stellar bodies, megastructures, and other constructions that will be visible from far away. This means the camera far plane is set to a great distance (up to 1e15-1e18 units, in my case).
Describe the problem or limitation you are having in your project
Large-scale spaces and the rendering thereof are limited by floating-point precision issues. While things such as jitter can be tackled with a floating origin, things such as z-fighting of faraway objects require a solution that addresses the way things are rendered.
Since the above-mentioned objects are meant to be interacted with, a double-viewport approach is not viable.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
The initial solution was to implement a logarithmic depth buffer, since in the GLES backend we had to deal with the -1, +1 depth range.
After 4.x came out we moved to the Vulkan backend, where a better solution can be implemented: a reverse-z depth buffer:
https://developer.nvidia.com/content/depth-precision-visualized
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
A workaround solution using a logarithmic depth buffer was used before (it breaks optimizations, depth-related effects, and shadows).
A gdshader solution for a logarithmic depth buffer is as follows:
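The original code block did not survive here, so below is a sketch of the usual Outerra-style formula, expressed in Python for clarity (the exact .gdshader the proposal used may differ; in a shader, `w` would be the interpolated clip-space w and the result would be written to DEPTH in fragment()):

```python
import math

def log_depth(w, camera_far):
    # Logarithmic depth mapped to the [0, 1] buffer range; in a shader:
    #   DEPTH = log2(max(1e-9, 1.0 + w)) / log2(camera_far + 1.0)
    return math.log2(max(1e-9, 1.0 + w)) / math.log2(camera_far + 1.0)

far = 5.0e8  # 500 M units, matching the value tested earlier in the thread
print(log_depth(0.0, far))  # 0.0 at the camera
print(log_depth(far, far))  # 1.0 at the far plane
```

Because the encoding is logarithmic in distance, relative depth resolution stays roughly constant out to the far plane, at the cost of breaking anything that expects hardware-interpolated hyperbolic depth (early-z, depth-based effects, shadows).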
If this enhancement will not be used often, can it be worked around with a few lines of script?
There is no way to work around this issue in a simple way, without compromising depth-related effects and shadow casting.
Is there a reason why this should be core and not an add-on in the asset library?
This is a feature that was already requested and will be requested again in the future, because it enables implementing large open-world games in Godot.
A reverse-z depth buffer is beneficial for all kinds of 3D projects and comes with no performance drawbacks.