
[WIP] Postprocessing: depth of field #5932

Closed
wants to merge 8 commits into from
Conversation

numberZero
Contributor

Addition to #5913.
Unlike RBA’s original implementation, the blur is Gaussian and variable-strength. That’s still not perfectly correct, but it looks nice IMO.

Default settings
screenshot_20170606_151028
screenshot_20170606_151032

Strong (strength=0.5, limit=10)
screenshot_20170606_150948
screenshot_20170606_150955
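For context, "variable-strength Gaussian" here means the kernel width varies per pixel with distance from the focus plane. A minimal pure-Python sketch of that idea (this is illustrative, not the PR's actual GLSL shader; the sigma mapping and default values are assumptions echoing the strength/limit settings above):

```python
import math

def gaussian_weights(sigma, radius):
    """Normalized 1-D Gaussian taps for a blur of the given sigma (sigma > 0)."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(w)
    return [x / total for x in w]  # normalize so the blur preserves brightness

def blur_sigma(depth, focus_depth, strength=0.1, limit=4.0):
    """Per-pixel blur strength: grows with distance from the focus plane,
    clamped by `limit` (hypothetical mapping, in the spirit of strength/limit)."""
    return min(strength * abs(depth - focus_depth), limit)

# A pixel on the focus plane stays sharp (sigma 0); a distant one is clamped
# to the widest allowed kernel.
on_plane = blur_sigma(10.0, focus_depth=10.0)
far_away = blur_sigma(60.0, focus_depth=10.0)
```

In a shader the same two steps run per fragment: derive sigma from the depth buffer, then sample neighbours with the corresponding Gaussian weights.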

@numberZero numberZero changed the title Postprocessing: depth of field [WIP] Postprocessing: depth of field Jun 6, 2017
@numberZero
Contributor Author

Now works well with undersampling:

Default settings
screenshot_20170606_155550
screenshot_20170606_155553

Strong (strength=0.3, limit=10)
screenshot_20170606_155656
screenshot_20170606_155659

FPS is reasonable as there aren’t that many pixels to process.

@paramat
Contributor

paramat commented Jun 6, 2017

Does the DoF extend further behind the focus point than in front of it? This is essential to avoid a fake-miniature look:

mini18xk3

As the focus point moves away from the player the DoF needs to extend exponentially further behind it.
Think of looking at a fairly distant hill, if a distant mountain was behind it both would be in focus, as would the moon behind the mountain and stars. At a certain large distance DoF becomes effectively infinite behind the focus point.

At large distances the DoF in front of the focus point gets larger too but not as much, a close object in front of the hill would have some blur.

DoF only becomes symmetrically limited at very close distances, which is how fake-miniature photography works.

So some simple maths to independently alter DoF in front and behind is needed to create realistic behaviour.
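The asymmetry described above falls out of the standard thin-lens depth-of-field formulas: for focus distance s and hyperfocal distance H, the near and far limits of acceptable sharpness are H·s/(H+s) and H·s/(H−s), and the far limit goes to infinity as s approaches H. A small sketch (the value of H is illustrative, not a Minetest quantity):

```python
def dof_limits(s, H):
    """Near/far limits of acceptable sharpness for focus distance s, given
    hyperfocal distance H (thin-lens approximation; distances in metres)."""
    near = H * s / (H + s)
    far = H * s / (H - s) if s < H else float("inf")
    return near, far

H = 10.0  # illustrative hyperfocal distance
for s in (1.0, 5.0, 9.0):
    near, far = dof_limits(s, H)
    # The zone behind the focus point (far - s) grows much faster than the
    # zone in front of it (s - near), and becomes infinite once s >= H.
    print(f"s={s}: in front {s - near:.2f}, behind {far - s:.2f}")
```

Evaluating those three focus distances shows exactly the behaviour argued for: near-symmetric blur only at close range, then a rapidly growing (eventually infinite) sharp zone behind the focus point.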

@paramat
Contributor

paramat commented Jun 6, 2017

Generally though i think DoF is unusable in a game because you need to look at all parts of the screen and have them in focus. The player does not stare at the screen centre, so the result is eyestrain as the eye tries to focus on something that can't come into focus.
It's fine in anime because there it is intended to guide your attention and focus.

Now imagine you are looking through nearby plants at a distant view; as the crosshair crosses the plants, the focus will bounce strongly.

For these reasons unfortunately i have to 👎 but don't worry others will probably support it.

@Fixer-007
Contributor

Fixer-007 commented Jun 6, 2017

I prefer a realistic depth of field, not an artistic one. For a realistic one, the eye's hyperfocal distance is probably needed. It seems to depend on the eye itself, light conditions, etc., but a Google search gave numbers like 2-3 metres for the hyperfocal distance (from an old research paper from 1957, it seems). For distances under 2-3 blocks, the depth of field must be calculated (in a realistic way).

So blurring should progress when viewing something closer than 3 blocks away, like this pillar: https://i.imgur.com/rk6Ec9K.png. Viewing anything more than 3 blocks away gives no blur on anything. On your last screenshot you stand only a few nodes away, yet it literally looks macro-like with blur on both sides; it simply does not look like that IRL.

Back in the day I actually tried RBA's depth-of-field shader, and it was very painful to use, since everything was blurred.
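To make the hyperfocal argument above concrete: in the thin-lens approximation, the blur of an object at distance d while focusing at s is proportional to the focus error in dioptres, |1/d − 1/s|, and an object looks acceptably sharp while that error stays below roughly 1/H. A hedged sketch using the ~2.5 m figure quoted above (the threshold and the sharp/blurred cutoff are illustrative, not measured values):

```python
H = 2.5  # rough hyperfocal distance of the eye in metres, as quoted above

def defocus_error(d, s):
    """Focus error in dioptres for an object at d while the eye focuses at s."""
    return abs(1.0 / d - 1.0 / s)

def visibly_blurred(d, s, threshold=1.0 / H):
    """An object reads as blurred once the dioptre error exceeds ~1/H
    (illustrative threshold, not a measured one)."""
    return defocus_error(d, s) > threshold

# Focused on a distant hill (s = 100 m): a tree at 10 m is still sharp.
# Focused on a nearby block (s = 0.5 m): the distant background blurs heavily.
```

This matches the comment's point: with focus anywhere past a few metres, everything distant stays sharp, and strong blur only appears when focusing very close.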

@kilbith
Contributor

kilbith commented Jun 7, 2017

I'm siding with paramat; your DOF needs to extend far more distantly than that.

@numberZero
Contributor Author

On your last screenshot, you stand almost few nodes away, yet it literally looks MACRO like with blur on both sides, it simply does not look like that IRL.

Have you noticed the “strong” label? Such settings aren’t suitable for normal play, of course; they may only be useful for making cool videos.

So, that’s why I separated this from #5913: it is unfinished but already controversial.

@numberZero
Contributor Author

@juhdanad Can you test this? The selection box (wireframe, not halo) flickers on my system, and I don’t understand why. Changing the depth texture to RGBA (as in RBA’s implementation) reduces the flickering but doesn’t eliminate it.

@juhdanad
Contributor

Sorry, but I don't see any flickering.

@tobyplowy
Contributor

may only be suitable to make cool videos.

@numberZero And screenshots :)

This would make a big impression on the Minetest community.
I would like to play with this enabled :)

I need something to spend my remaining 70 FPS on XD
Seriously :/

@tobyplowy
Contributor

@numberZero Maybe you can add a depth-of-field intensity setting?

That will make @paramat and @kilbith happy, because they can set it low or turn it off if they want to, and the other people get DOF, so everyone is happy :)

@paramat please just be happy :/

@numberZero
Contributor Author

numberZero commented Nov 7, 2017 via email

@paramat
Contributor

paramat commented Nov 8, 2017

tobyplowy this will of course be optional, it always was going to be. My objection concerns the DOF distances in front and behind being unrealistic (RBA's method is bad). DOF also seems bad for the eyes in gaming, as i explained, which is my other concern, so i can't see it as justified since it's mostly for screenshots.
And no, i won't just be happy with bad ideas.

@kilbith
Contributor

kilbith commented Nov 8, 2017

Can't you just stfu and stop tagging me, toby...

@tobyplowy
Contributor

paramat Sorry to go off-topic, but this is important to me!
kilbith (I'm not tagging you, like you asked), please just be a bit nicer and more cooperative.
You have to know that you may only see this text, but that doesn't mean there's no human behind this comment,
a human with emotions (just like you), may I add.

@paramat
Contributor

paramat commented Nov 8, 2017

I think bloom is more worthwhile to work on.

@tobyplowy
Contributor

tobyplowy commented Nov 8, 2017

@paramat I think both of them would be great (DoF and bloom),
but bloom isn't too impressive without emission-map support.
(Without emission maps there is not a lot of stuff to put bloom on; also, it's a nice feature.)

@HybridDog
Contributor

HybridDog commented Nov 11, 2017

I think bloom should be applied before fog, because to me bloom looks like light scattered by fog, smoke or similar.
Example of lights in fog:
https://duncan.co/wp-content/uploads/2016/12/Duncan-Rawlinson-Photo-244954-Foggy-Night-Toronto-Ontario-Canada-20150616-CF011544-Lights-In-The-Fog.jpg

@rubenwardy rubenwardy added Feature ✨ PRs that add or enhance a feature Action / change needed Code still needs changes (PR) / more information requested (Issues) labels Dec 4, 2017
@paramat
Contributor

paramat commented Dec 6, 2017

I seem to remember talk of MT shaders not being able to use a depth map; if so, how is DoF implemented?

@rubenwardy
Member

I seem to remember talk of MT shaders not being able to use a depth map, if so how is DoF implemented?

Huh, that doesn't sound right, as you can just write the depth map to a buffer and then use it from there. The depth map is relative to the camera, so Minetest being an infinite world shouldn't affect it.

@paramat
Contributor

paramat commented Dec 6, 2017

I must have misunderstood.

@Megaf
Contributor

Megaf commented Dec 9, 2017

I think that's nice work, but more work is needed. I personally don't like the Gaussian blur much. Blur is something very tricky to implement, and very personal.

@numberZero
Contributor Author

@Megaf That’s all very personal. You don’t have to enable the blur. Or you might even write your own postprocessing shader, or use a third-party one.
Anyway, this PR has been obsolete since 2884196.

@numberZero numberZero closed this Dec 10, 2017
@HybridDog
Contributor

HybridDog commented Dec 15, 2017

In Minetest, lots of faces of distant nodes are just a few pixels in size.
Is it possible to implement oversampling (render the image at 2x size and then downscale it) using shaders?
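The downscale half of that scheme is just block averaging. A minimal numpy sketch of a CPU-side 2x downscale (a shader version would do the same averaging per fragment; this is an illustration, not code from the engine):

```python
import numpy as np

def downscale_2x(img):
    """Average each 2x2 pixel block of an (H, W, C) image array: the
    'render at 2x size, then downscale' idea described above."""
    h, w = img.shape[:2]
    img = img[:h - h % 2, :w - w % 2]  # drop odd edge rows/columns, if any
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
```

Averaging 2x2 blocks is exactly what "simple linear downscaling" means in the FSAA discussion further down.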

@numberZero
Contributor Author

@HybridDog Possible, but there are mipmaps and FSAA for that.

@HybridDog
Contributor

Unfortunately FSAA uses simple linear downscaling, doesn't it? And mipmapping smooths tilted faces, which makes the white line in the middle of a street (from some streets mod) look smoothed.
Disabled FSAA looks the same as FSAA with simple subsampling, I think, and disabled mipmaps often cause Moiré artifacts.
There's a downscaling algorithm which solves both problems (smoothing vs jagged artifacts): https://graphics.ethz.ch/~cengizo/Files/Sig15PerceptualDownscaling.pdf
Can you use a custom downscaling algorithm for mipmaps and FSAA?

@numberZero
Contributor Author

Can you use a custom downscaling algorithm for mipmaps

Possible; it is even possible to use custom images as mipmaps (using ITexture::lock).

and FSAA?

Basically, no.
And I was a bit wrong about oversampling: it is possible in plain OpenGL, but IrrLicht does not support render targets larger than the screen.

@HybridDog
Contributor

Thanks. I guess using that perceptual downscaling algorithm for mipmaps could improve image quality over disabled mipmapping, because it doesn't smooth and may fix Moiré artifacts and flickering.

@HybridDog
Contributor

HybridDog commented Dec 27, 2017

subsampling (corresponds to disabled oversampling):
subsam

SSIM perceptually downscaled image, downscaling factor 4 (I hope I implemented it correctly):
ssim_perc_ycbcr_4_4

The SSIM perceptually downscaled image shows a lot more detail, e.g. in the textures, while keeping the contrast.
Additionally, it acts as an antialiasing filter (similar to FSAA).

Of course, the text and formspecs shouldn't be downscaled in Minetest; they're in the images only for testing purposes.
I used the algorithm with the YCbCr instead of the RGB colourspace, because with RGB there are places where red and blue pixels appear, e.g. in the text:
ssim_perc_rgb_4_4

TBH, I don't know where to start implementing it in Minetest. The algorithm is quite simple and can be parallelized (many for loops, calculating at each pixel).
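As a very rough illustration of the contrast-restoring idea behind the linked paper: box downscaling averages away local detail, so the result's variance drops; the fix is to amplify deviations from the mean until the contrast of the original is recovered. The sketch below is a crude *global* version of that (the paper does per-patch SSIM-based matching; numpy, grayscale only, names are my own):

```python
import numpy as np

def box_down(img, f):
    """Plain box downscale of a 2-D array by an integer factor f."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def perceptual_down(img, f, eps=1e-8):
    """Box-downscale, then rescale deviations from the mean so the output's
    variance matches the input's. A crude global stand-in for the per-patch
    contrast matching of the perceptual-downscaling paper linked above."""
    d = box_down(img, f)
    gain = np.sqrt((img.var() + eps) / (d.var() + eps))
    return d.mean() + (d - d.mean()) * gain
```

The real algorithm computes these statistics over small patches rather than the whole image, which is why it is "many for loops calculating at each pixel" and trivially parallelizable.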

@ThomasMonroe314
Contributor

ThomasMonroe314 commented Dec 28, 2017

@numberZero Instead of closing this, would you be willing to change the PR to implement a post-processing stage, to allow others to play around with the feature? I'm just asking because the framework is here, and it would be essential for other graphical features such as bloom.

@HybridDog
Contributor

There are already lots of other (closed) issues and pull requests regarding postprocessing.

@numberZero
Contributor Author

@ThomasMonroe314 Maybe, but it's not that simple, as IrrLicht lacks many useful features.
Also note that there is no framework, only some more or less messy code.

@numberZero
Contributor Author

@ThomasMonroe314 @HybridDog #5446 is still open.

@ThomasMonroe314
Contributor

@numberZero Ok! I didn't see that. Thanks!
