
PBR Overhaul #220

Merged · 73 commits · Sep 21, 2024

Conversation

@shadielhajj (Collaborator) commented Aug 29, 2024

PBR Reference implementation

This PR includes a meticulous and exhaustive review of the PBR implementation. We went through the pipeline line by line and made sure each operation is physically grounded and mathematically accurate.

Lygia PBR is still in its infancy and lacks many features found in production engines. However, we wanted to make sure that the core algorithm is correct, so we gathered the principles found in industry-standard engines (Unreal, Unity, Frostbite, Filament), as described by their authors:

  • Brian Karis: Real Shading in Unreal Engine 4
  • Sébastien Lagarde: Moving Frostbite to Physically Based Rendering 3.0
  • Sébastien Lagarde: The road toward unified rendering with Unity’s high-definition rendering pipeline.
  • Dimitar Lazarov: Summary of Physically Based Shading in Call of Duty: Black Ops
  • Romain Guy: Physically Based Rendering in Filament

As always, Matt Pharr's "Physically Based Rendering: From Theory to Implementation" is an exhaustive reference for any PBR work.

We also added some new features, as detailed below. Note that this PR is centred on core PBR (lighting/pbr.glsl). Future PRs will look at clear coat, anisotropy, sheen, refraction, etc.

To validate our implementation we compare it with a reliable production-grade engine (Google Filament). The new iteration represents a significant increase in quality and accuracy across the board.
Back to Back

Raymarched PBR also looks much nicer now, including gorgeous rough metals, thanks to importance sampling.
Before
Screenshot 2024-09-13 224146

After
Screenshot 2024-09-13 224120

New PBR Features

Importance Sampling

The rough appearance of a surface is a result of the interaction between an IBL's irradiance and the surface BRDF:

$$L_{out}(n, v, \Theta) = \int_\Omega f(l, v, \Theta) L_\perp(l) \left< n \cdot l \right> \mathrm{d}l$$

This implies integrating over the surface hemisphere and basically convolving with the BRDF. Real-time engines typically solve this problem by prefiltering the IBL with a convolution kernel and applying scene-specific components (such as $$N \cdot L$$) at runtime. However, Lygia doesn't include support for calculating the pre-filtered image (we would have to resort to an external tool) and would require a pipeline to be set up for this.

Another approach is to use a relatively complex technique called Importance Sampling, which is a very decent approximation of pre-filtering and can be used in real-time. We have therefore implemented this method in Lygia. Since the operation comes at a small extra cost, it has to be manually enabled with the IMPORTANCE_SAMPLING switch.
For more on calculating the environment BRDF, refer to Dimitar Lazarov's excellent Summary of Physically Based Shading in Call of Duty: Black Ops.

If IMPORTANCE_SAMPLING is unspecified, Lygia sticks to the current default behaviour, which is directly sampling the box-filtered mip-mapped cubemap. It's a very rough approximation, but it might be good enough in some situations, especially for dielectrics. For semi-rough metals, we consider the extra cost associated with Importance Sampling well worth it.

Note that IS is a high-end rendering feature and is currently only supported on Desktop GLSL and HLSL.
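For readers unfamiliar with the technique, the core of GGX importance sampling is a function that turns a pair of uniform random numbers into a half-vector concentrated around the specular lobe, so a small number of taps approximates the prefiltered integral. A minimal sketch following Karis' formulation (function and variable names are illustrative, not Lygia's actual API; a `PI` constant is assumed):

```glsl
// Illustrative sketch of GGX importance sampling (not Lygia's exact code).
// Xi is a pair of uniform random numbers in [0, 1), e.g. from a Hammersley sequence.
vec3 importanceSampleGGX(vec2 Xi, vec3 N, float roughness) {
    float a = roughness * roughness;
    float phi = 2.0 * PI * Xi.x;
    // Inverse-transform sampling of the GGX distribution
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

    // Spherical to cartesian, in tangent space around N
    vec3 H = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);

    // Build an orthonormal basis around N and transform H into world space
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);
    return normalize(tangent * H.x + bitangent * H.y + N * H.z);
}
```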

In the following table we compare the current rendering, rendering with IS, and a pre-filtered cubemap for reference (the pre-filtering stage is done using Filament's cmgen tool).
Note how rough metals exhibit unwanted darkening in the current version. This issue is discussed in the next paragraph (Energy Compensation) and is also solved thanks to Importance Sampling.

Importance Sampling

Energy Compensation

The Cook-Torrance BRDF accounts for a single bounce of light. However, multiple bounces are the norm at high roughness, which leads to a loss of energy and means the approximation is not energy-preserving.

The visual effect is a darkening of the surface at high roughness especially for metals. Here's an illustration of this phenomenon, borrowed from the Filament white paper.
material_metallic_energy_loss
A correct, energy-preserving multi-scattering model:
material_metallic_energy_preservation

The phenomenon can also be observed in the table under the Importance Sampling section.

To solve this issue, we apply a correction factor taken from Sébastien Lagarde's presentation "The road toward unified rendering with Unity's high-definition rendering pipeline." It trivially calculates an energy compensation factor from the environment BRDF. Since we need the environment BRDF anyway for Importance Sampling, energy compensation is automatically applied when IS is enabled with the IMPORTANCE_SAMPLING flag.
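For reference, the correction is essentially a one-liner once the environment BRDF is available. A sketch using Filament-style names (`dfg` holds the environment BRDF lookup; `specularColor` and `Fr` are illustrative):

```glsl
// Sketch of the energy compensation factor (after Lagarde / Filament).
// dfg.y is the single-scattering energy from the environment BRDF,
// which Importance Sampling computes anyway.
vec3 energyCompensation = 1.0 + specularColor * (1.0 / dfg.y - 1.0);
Fr *= energyCompensation; // boost the specular lobe to restore lost energy
```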

Reflectance Property

The Material struct currently includes an f0 parameter which is very difficult to use (unless you're a light scientist). We replace it with an artist-friendly reflectance parameter. The parameter goes from 0 to 1, with 0.5 corresponding to an $$f_0$$ of 0.04 (most dielectric materials). The default value is fine for most cases but can be cranked all the way up to 1.0 if gems or high-gloss materials are required.
Hi-Gloss

Combined with Metallic and Roughness, this adds a whole new palette of surfaces to play with.
From the Filament white paper again: From top to bottom: varying metallic, varying dielectric roughness, varying metallic roughness, varying reflectance.
material_parameters
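For the record, the remapping from the artist-facing parameter to $$f_0$$ is the standard one used by Filament: reflectance 0.5 gives 0.16 × 0.25 = 0.04. A sketch (variable names illustrative):

```glsl
// reflectance = 0.5 -> f0 = 0.04 (common dielectrics)
// reflectance = 1.0 -> f0 = 0.16 (gems, high-gloss materials)
// Metals take their f0 from the base colour instead.
vec3 f0 = vec3(0.16 * reflectance * reflectance) * (1.0 - metallic) + albedo * metallic;
```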

Physical Camera Exposure

We added a new exposure function under lighting. This allows us to use physical light units and camera settings instead of arbitrary normalized values. For example, to expose for sunlight (110,000 lux, using a camera aperture of f/16, shutter speed of 1/125 s, and film sensitivity of ISO 100) we write:

#define LIGHT_INTENSITY 110000*exposure(16.0, 1.0/125.0, 100.0)

Note that this is entirely optional and does not affect the way the LIGHT_INTENSITY param works.
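A sketch of what such an exposure function computes, following Lagarde's Frostbite notes (the exact signature in this PR may differ):

```glsl
// Physical camera exposure (Lagarde, "Moving Frostbite to PBR").
// aperture in f-stops, shutterSpeed in seconds, sensitivity in ISO.
float exposure(float aperture, float shutterSpeed, float sensitivity) {
    // EV100: exposure value normalized to ISO 100
    float ev100 = log2((aperture * aperture) / shutterSpeed * 100.0 / sensitivity);
    // 1.2 is the standard reflected-light meter calibration constant
    return 1.0 / (1.2 * exp2(ev100));
}
```

With the sunlight example above, f/16 at 1/125 s and ISO 100 gives EV100 ≈ 15, so 110,000 lux scales down to a value around 3.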

Disney Roughness reparametrisation

We apply the classic Disney reparametrisation of roughness described in the Burley paper Physically-Based Shading at Disney including the clamping to 0.045 recommended by Sébastien Lagarde in Moving Frostbite to PBR.

$$\alpha = perceptualRoughness^2$$

This allows for a better visual distribution of roughness throughout the entire range. The following comparison (from the Filament white paper) shows the new perceptually linear roughness (top) against the previous numerically linear roughness (bottom).
material_roughness_remap
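In code this is just a clamp and a square; a sketch (names illustrative):

```glsl
// Disney/Burley reparametrisation with Lagarde's clamp
float perceptualRoughness = max(roughness, 0.045);
float alpha = perceptualRoughness * perceptualRoughness; // used by the NDF
```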

Bugfixes and Corrections

Refactored Cook-Torrance

The current implementation of Cook-Torrance in specular/cookTorrance doesn't seem accurate to me. For one thing, the geometric term doesn't take the roughness into account, which completely defeats the point.
On the other hand, there is a correct implementation of Cook-Torrance in specular/ggx, but the file and functions are misnamed: GGX is an NDF (a distribution function, one of the three components of Cook-Torrance), not a BRDF. So this file should be named cookTorrance.

I thus deleted the previous implementation in specular/cookTorrance and renamed specular/ggx to specular/cookTorrance.
I also removed the fresnel field from shadingData as it's unused in the updated Cook-Torrance implementation.

For a full dissertation of the geometric term, Heitz's paper Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs is an incredibly thorough reference.
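To make the naming point concrete, here is a sketch of the three Cook-Torrance components; GGX is only the D term. (Illustrative, assuming a `PI` constant; not necessarily Lygia's exact code.)

```glsl
// D: GGX / Trowbridge-Reitz normal distribution
float distributionGGX(float NoH, float alpha) {
    float a2 = alpha * alpha;
    float d = NoH * NoH * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// G/V: height-correlated Smith visibility. Note that it depends on
// roughness, which is exactly what the old implementation was missing.
float visibilitySmithGGX(float NoV, float NoL, float alpha) {
    float a2 = alpha * alpha;
    float ggxV = NoL * sqrt(NoV * NoV * (1.0 - a2) + a2);
    float ggxL = NoV * sqrt(NoL * NoL * (1.0 - a2) + a2);
    return 0.5 / (ggxV + ggxL);
}

// F: Schlick's Fresnel approximation, returning a coloured vec3
vec3 fresnelSchlick(vec3 f0, float LoH) {
    return f0 + (1.0 - f0) * pow(1.0 - LoH, 5.0);
}
```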

Correct Light Falloff Equation

The light falloff equation in lighting/light/falloff doesn't seem correct; for one thing, it's missing a crucial component: the division by the squared distance. I replaced it with Brian Karis' equation from Real Shading in Unreal Engine 4:

$${falloff} = \frac{saturate(1-(distance/lightRadius)^4)^2}{distance^2+\epsilon}$$
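Expressed in GLSL, the equation above is straightforward; a sketch (`EPSILON` is an assumed small constant that avoids the singularity at zero distance):

```glsl
float falloff(float dist, float lightRadius) {
    float r = dist / lightRadius;
    float window = clamp(1.0 - r * r * r * r, 0.0, 1.0); // saturate(1 - (d/R)^4)
    return (window * window) / (dist * dist + EPSILON);  // inverse-square law
}
```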

Other Improvements

  • Refactored indirect lighting into lighting/lightIndirectEvaluate
  • Refactored initialization of ShadingData into shadingDataNew
  • Improved roughness to environment map LOD mapping, even if not using Importance Sampling
  • lighting/specular now returns a vector, as metallic speculars are colored
  • PBR now uses a constant Lambert BRDF instead of Oren-Nayar. The consensus since Karis and UE4 is that Oren-Nayar and Burley are simply not worth the additional cost. The cost of a constant Lambert is almost zero.
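For completeness, the constant Lambert term mentioned in the last bullet is just a division by π (sketch, names illustrative):

```glsl
vec3 diffuseLambert(vec3 diffuseColor) {
    return diffuseColor / PI; // constant Lambert BRDF
}
```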

@@ -28,5 +27,5 @@ license:

 #ifndef FNC_SPECULAR
 #define FNC_SPECULAR
-float specular(ShadingData shadingData) { return SPECULAR_FNC(shadingData); }
+float3 specular(ShadingData shadingData) { return float3(1.0, 1.0, 1.0) * SPECULAR_FNC(shadingData); }


Curious about this: why return a vector over a float?

@shadielhajj (Collaborator Author) replied Sep 14, 2024:

Great question. The specular highlight of metals is coloured, unlike that of dielectrics, which is monochromatic. This is why the f0 of dielectrics is a scalar value (typically 0.04), while the f0 of metals in our workflow is the colour of the specular (which we input as the albedo property).

It then comes as no surprise that the Fresnel component of the Cook-Torrance BRDF returns a vector. Previously this was handled with the horrendous hack float F = fresnel(vec3(_fresnel), LoH).r; which took the red channel of the coloured specular and turned it into a monochromatic value.
specularCookTorrance now correctly returns the specular highlight as an RGB vector. The quirky construct in lighting/specular accounts for the fact that some SPECULAR_FNC implementations return a scalar and some return a vector.


#if defined(SCENE_SH_ARRAY)
Fd = shadingData.diffuseColor * (1.0-specularColorE);
Fd *= sphericalHarmonics(shadingData.N);


Adding a tonemap() function to the result of the SH sampling seems to solve the weird tinting. This is how it used to be here

Fd *= tonemap( sphericalHarmonics(shadingData.N) );

#include "../../color/tonemap.glsl"
    ...
    Fd *= tonemap( sphericalHarmonics(shadingData.N) );

@shadielhajj (Collaborator Author) commented Sep 14, 2024

@patriciogonzalezvivo Thanks for testing and reporting these issues. I've gone through them and fixed almost all. I've also fixed and commented on the individual review items above.
Unless you're seeing more issues, I think the only big outstanding issue is the SH bug (see above). Let me know how you would like to proceed with this.

  • Artifacts in pbrLittle : FIXED
  • Artifacts in generative_psrdnoise.frag: this is actually a bug in the example, the normal is not normalized, it should be: material.normal = normalize(v_normal - N * 0.15);
  • Artifacts in pbrGlass: FIXED
  • PBR Cubemap tinge (due to Spherical Harmonics): See comment above.

@shadielhajj (Collaborator Author) commented Sep 17, 2024

@patriciogonzalezvivo keen to hear your thoughts on the amends👆 when you have a sec.
I have more PBR work lined up (Transmission, Sub-Surface, Raymarched Reflections!!!) but would like to get the core model working and merged before that.

@patriciogonzalezvivo (Owner):

Hi @shadielhajj! Sorry for the late reply. I just tried it and it seems most of the issues were resolved! Thank you for taking a look at those!

The one remaining seems to be the weird colors on pbr()

image

Seems to be coming from the SCENE_SH_ARRAY.

If I undefine SCENE_SH_ARRAY, I see this:

image

@patriciogonzalezvivo (Owner):

Applying a tonemap to it, this is how it looks:

image

@shadielhajj (Collaborator Author):

Hey @patriciogonzalezvivo. No worries and no pressure. Thanks for re-testing!

I am aware of the SCENE_SH_ARRAY issue. My theory is that it's a problem with how the SH are calculated in glslViewer. Copying here a comment I made earlier:

Unless I'm missing something, it makes no sense to tonemap the SH. It also defeats the entire point of using HDR environment maps. Tonemapping should only be applied at the end of the entire pipeline, right before gamma correction.
I think what's happening here is that this tonemap function is hiding the fact that the SH are incorrectly calculated in glslViewer. These are SH for Arches_E_PineTree_3k.hdr as calculated by Filament's cmgen:

SH[0] = vec3( 0.453080505132675,  0.269415318965912,  0.272140741348267); // L00, irradiance, pre-scaled base
SH[1] = vec3(-0.303828001022339, -0.082883290946484,  0.077946774661541); // L1-1, irradiance, pre-scaled base
SH[2] = vec3(-0.089529335498810,  0.026105811819434,  0.098039552569389); // L10, irradiance, pre-scaled base
SH[3] = vec3( 0.084259107708931,  0.021727809682488, -0.044659141451120); // L11, irradiance, pre-scaled base
SH[4] = vec3( 0.003143192501739, -0.019233280792832, -0.043451633304358); // L2-2, irradiance, pre-scaled base
SH[5] = vec3( 0.055044256150723,  0.053168524056673,  0.076159119606018); // L2-1, irradiance, pre-scaled base
SH[6] = vec3( 0.010047108866274,  0.009981412440538,  0.012977552600205); // L20, irradiance, pre-scaled base
SH[7] = vec3( 0.018381055444479,  0.038365501910448,  0.053405638784170); // L21, irradiance, pre-scaled base
SH[8] = vec3(-0.031671825796366, -0.008322757668793,  0.003398897591978); // L22, irradiance, pre-scaled base

Note that to use the SH generated by cmgen you need the following SH function:

vec3 sphericalHarmonics(const vec3 sh[9], const in vec3 n) {
    return SPHERICALHARMONICS_TONEMAP ( max(
        sh[0]
#if SPHERICALHARMONICS_BANDS >= 2
        + sh[1] * (n.y)
        + sh[2] * (n.z)
        + sh[3] * (n.x)
#endif
#if SPHERICALHARMONICS_BANDS >= 3
        + sh[4] * (n.y * n.x)
        + sh[5] * (n.y * n.z)
        + sh[6] * (3.0 * n.z * n.z - 1.0)
        + sh[7] * (n.z * n.x)
        + sh[8] * (n.x * n.x - n.y * n.y)
#endif
        , 0.0) );
}

The results are correct:
image

Which confirms the hypothesis of the SH being incorrectly calculated in glslViewer.

Does that make sense? How would you like to proceed?

I fully appreciate that doing a full review of the SH pipeline across Lygia and glslViewer is a significant task, so I'm happy to bring back the tonemap for now (even though I suspect it's inaccurate) as a temporary fix, and come back to this issue later.

Let me know your preference :-)

@patriciogonzalezvivo (Owner):

So I could add the tonemap. I guess my question is about what produces the difference below. Is this a product of the new PBR ranges? Is this expected?

image

@patriciogonzalezvivo (Owner):

Sorry, somehow I missed the message you wrote earlier!

@patriciogonzalezvivo patriciogonzalezvivo merged commit acb268f into patriciogonzalezvivo:main Sep 21, 2024
@patriciogonzalezvivo (Owner):

@shadielhajj I will review the process to calculate the SH. Thank you for all your work! I was playing with it today and it looks stunning! I'm excited for the work ahead you are planning.

Thank you.

Patricio

@shadielhajj (Collaborator Author):

Thanks @patriciogonzalezvivo !!

@shadielhajj shadielhajj deleted the review/pbr branch September 22, 2024 09:59