
Adopting a Physically-based Microfacet BRDF model in Three.JS #5847

Closed
bhouston opened this issue Jan 2, 2015 · 37 comments

Comments

@bhouston
Contributor

bhouston commented Jan 2, 2015

I would like to contribute a physically-based microfacet reflection model as a core ThreeJS shader. The motivation for moving towards Physically-based Shading can be found in this presentation given at SIGGRAPH 2012:

http://blog.selfshadow.com/publications/s2012-shading-course/hill/s2012_pbs_importance.pdf

The core approach would be the adoption of the "Generalized microfacet model", otherwise known as a Microfacet BRDF, as the basis of the specular shading equation:

f(l, v) = D(h) · F(v, h) · G(l, v, h) / (4 · (n·l) · (n·v))

Here D is the normal distribution function (GGX suggested), F is the Fresnel term (Schlick's approximation), and G is the geometric shadowing/visibility term (Smith/Walter for GGX).

While I do suggest starting with the above approximations, we can evolve the shader so that these models are pluggable.

Additionally there will be a diffuse term (implemented as energy conserving, thus whatever remains from the specular BRDF) which can be either Lambert or Oren-Nayar.
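For concreteness, here is a rough GLSL sketch of that model with the defaults I suggest below (GGX distribution, Schlick Fresnel, Smith/GGX visibility, energy-conserving Lambert diffuse). Names are illustrative, not a proposed API:

vec3 F_Schlick(vec3 F0, float dotVH) {
    return F0 + (vec3(1.0) - F0) * pow(1.0 - dotVH, 5.0);
}

float D_GGX(float alpha, float dotNH) {
    float a2 = alpha * alpha;
    float d = dotNH * dotNH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * d * d);
}

// Smith/GGX height-correlated visibility; includes the 1/(4 NdotL NdotV) term.
float V_SmithGGX(float alpha, float dotNL, float dotNV) {
    float a2 = alpha * alpha;
    float gv = dotNL * sqrt(a2 + (1.0 - a2) * dotNV * dotNV);
    float gl = dotNV * sqrt(a2 + (1.0 - a2) * dotNL * dotNL);
    return 0.5 / max(gv + gl, 1e-5);
}

vec3 microfacetBRDF(vec3 N, vec3 V, vec3 L, vec3 diffuseColor, vec3 F0, float roughness) {
    vec3 H = normalize(V + L);
    float dotNL = max(dot(N, L), 0.0);
    float dotNV = max(dot(N, V), 0.0);
    float alpha = roughness * roughness;
    vec3 F = F_Schlick(F0, max(dot(V, H), 0.0));
    vec3 specular = F * D_GGX(alpha, max(dot(N, H), 0.0)) * V_SmithGGX(alpha, dotNL, dotNV);
    // Energy conservation: the diffuse term only receives what specular did not reflect.
    vec3 diffuse = (vec3(1.0) - F) * diffuseColor / 3.14159265;
    return (diffuse + specular) * dotNL;
}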

After the above is implemented, I would like to add both anisotropy and metallic support. The last thing after that would be multi-sampling of the environmental reflections to support unpolished reflections.

This shader will never really match the PhongMaterial because the Phong reflectance model is significantly different from the Microfacet model, and the parameters are different. Thus, if we want backwards compatibility, I would suggest that the new physically-based shader be implemented as either MeshPhysicalMaterial or MeshMicrofacetMaterial, or something along those lines, and we leave the existing MeshPhongMaterial alone.

The result of this work should be a shader system that rivals the top game engines and film renderers.

I have already implemented nearly everything in a private fork of ThreeJS and it is live on https://Clara.io as the Physical Material. The only things left to implement are multi-sampling of the environmental reflections and the Oren-Nayar reflectance model. But before I start to do PRs to the primary ThreeJS branch, I would like to prepare you guys and get approval, as it will be a lot of work.

More information on Physically-based Shading can be found here:

http://blog.selfshadow.com/publications/s2012-shading-course/
http://blog.selfshadow.com/publications/s2013-shading-course/
http://blog.selfshadow.com/publications/s2014-shading-course/

@bhouston
Contributor Author

bhouston commented Jan 2, 2015

The schedule for implementation would be something like this:

  • Get the common.glsl and related PRs merged (if they are to be merged), so I know the context of implementation.
  • Implement a MeshPhysicalMaterial (or some other name the community agrees upon) with the basic GGX, Schlick Fresnel and GGX visibility with a Lambertian diffuse model with energy conservation and PR it.
  • Add basic reflection map support.
  • Add metallic support via specular dominance along with an edge tint color: http://jcgt.org/published/0003/04/03/paper.pdf
  • Add anisotropy, anisotropy rotation support.
  • Add multi-sample reflection maps to allow for varying degrees of non-polished reflections.
  • Add support for Oren-Nayar diffuse reflection.

@WestLangley
Collaborator

I am in complete support of this.

This shader will never really match the PhongMaterial because the Phong reflectance model is significantly different than the Microfacet model, and the parameters are different

The MeshPhongMaterial model can be viewed as a special case of a general BRDF model, as explained here. But I agree with the BRDF approach. It should have pluggable components. GGX, Schlick, and Walter are reasonable defaults.

leave the existing MeshPhongMaterial alone

I agree. And develop a separate MeshPhysicalMaterial, or whatever you want to call it.

Additionally there will be a diffuse term (implemented as energy conserving)

The energy conserving models are true approximations -- the game engines each appear to have their own implementation. Just remember that there is no correct model here.

@mrdoob
Owner

mrdoob commented Jan 3, 2015

MeshPhysicalMaterial sounds good to me.

@bhouston
Contributor Author

Just a note: @WestLangley and I have been talking off-list about how to implement variable light probe sampling based on surface roughness. There is some discussion about whether it requires textureCubeLOD in the fragment shader or whether textureCube with the bias parameter is sufficient. My belief is that one can implement automatic light probe filtering via the bias parameter.

The reference for using textureCubeLOD is here: https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/ (Shared by @WestLangley )

I am implementing a test of textureCube light probe sampling via roughness using bias as we speak. I'll share the results to show whether it works, or sort of works.
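The essence of what I'm testing, as a sketch (the constant and names are illustrative):

uniform samplerCube envMap; // assumed pre-convolved per mip level

vec3 sampleEnvByRoughness(vec3 reflectDir, float roughness) {
    // The third argument of textureCube is a bias added to the LOD the
    // hardware picks on its own, so the base mip remains driver/zoom
    // dependent; MAX_MIP_BIAS is an illustrative constant.
    const float MAX_MIP_BIAS = 8.0;
    return textureCube(envMap, reflectDir, roughness * MAX_MIP_BIAS).rgb;
}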

@bhouston
Contributor Author

I found this:

https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_texture_lod.txt

which let me do this test of varying roughness:

[image: test of varying roughness]

....and this test of vertical strips of varying roughness on the Stanford Bunny:

[image: vertical strips of varying roughness on the Stanford Bunny]
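For reference, the explicit-LOD lookup the extension enables looks roughly like this (a sketch; names are illustrative):

#extension GL_EXT_shader_texture_lod : enable

uniform samplerCube envMap;
uniform float maxMipLevel;

vec3 sampleEnvLod(vec3 reflectDir, float roughness) {
    // Explicit LOD: fully under shader control, independent of
    // screen-space derivatives, unlike the bias approach above.
    return textureCubeLodEXT(envMap, reflectDir, roughness * maxMipLevel).rgb;
}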

@mrdoob
Owner

mrdoob commented Mar 13, 2015

/ping @spite

@spite
Contributor

spite commented Mar 13, 2015

I've been using the bias parameter in the texture lookups, and also texture{2D|Cube}LOD. Here or here for instance.

It works really well (it's fast and convincing), but I'm not entirely sure the distribution of the mipmap blurring is accurate. At least, when I've used cosine convolution on the environment map, I see noticeable differences when the roughness is high.

The main problem with PBR is not the shader itself, it is deciding what maps and parameters to support so that artists can create awesome materials.

@WestLangley
Collaborator

The main problem with PBR is not the shader itself, it is deciding what maps and parameters to support so the artists can create awesome materials.

Exactly. This is the discussion we need to have -- what is the API (and features) we want to support?

Later, we can experiment with various implementation techniques.

@bhouston
Contributor Author

Well, I've been working on this material definition with some input from @meshula and @brentb, based on my studies of UE4, Unity 5, the Disney PBR paper, and my own implementation of the Disney PBR in Clara.io:

https://docs.google.com/document/d/1EUqnM-aQwl76aZpduQk-EL0vGEPHrJE7h1x7wSqqLlk/edit#

The motivation is to make this a standard for transferring around PBR materials between UE4, Unity, Substance (Allegorithmic), https://Clara.io (of course), and other tools. The Allegorithmic/Substance guys are interested in supporting it, and I brought it up with the Sketchfab guys as well.

I would appreciate more input so we can get wide support and have it polished.

@bhouston
Contributor Author

Main references:

Disney PBR (old model; a new model to be presented at SIGGRAPH 2015):
https://github.com/wdas/brdf/blob/master/src/brdfs/disney.brdf

Frostbite 4 PBR implementation details:
http://www.frostbite.com/wp-content/uploads/2014/11/course_notes_moving_frostbite_to_pbr.pdf

UE4 PBR implementation details:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf

A better subsurface model (recommended by Nick Porcino):
http://graphics.pixar.com/library/TexturingBetterDipole/paper.pdf

@mrdoob
Owner

mrdoob commented Mar 15, 2015

Looks great to me. The more compatible with other engines the better indeed.

@mrdoob
Owner

mrdoob commented Mar 16, 2015

@bhouston
Contributor Author

Nice catch @mrdoob! I was not aware of this paper and it is amazing. Anders Langlands (@anderslanglands), the author of the Arnold PBR paper, is aware of the proposal I'm spreading around and has shared some feedback. He is of the opinion (I believe -- it is hard to speak for others accurately) that a PBR interchange standard for high-end production shaders is not desirable or useful, but I think he is okay with us standardizing things for lower-end usage like UE4/Unity 5/ThreeJS. I think we can grab a lot of good ideas from this paper of Anders's.

@anderslanglands

Hi everyone! I think an interchange format is definitely useful; my concern was that choosing one particular layered model, and having shaders without material networks to define texture inputs and remapping functions, would be quite limiting. Of course, one has to start somewhere...

I did a bit more thinking about this after my last conversation with Ben. I think it would probably be best to define things in terms of individual BSDFs and have a language for stacking them. Maybe something like the examples below. That still leaves a lot of room for interpretation (which could be a good or a bad thing depending on the situation), but it also means no one is tied to implementing anything in a particular way.

{
   "shader": "glass",

   "layers": [

      {
         "bsdf": "ggx_trans",
         "roughness": 0.0,
         "ior": 1.5
      },

      {
         "bsdf": "ggx_refl",
         "roughness": 0.0,
         "ior": 1.5
      }
   ]
}

{
   "shader": "skin",

   "layers": [

      {
         "bsdf": "sss",
         "dmfp": [0.439, 0.375, 0.17]
      },

      {
         "bsdf": "ggx_refl",
         "roughness": 0.5,
         "ior": 1.4
      }
   ]
}

{
   "shader": "carpaint",

   "layers": [

      {
         "bsdf": "diffuse",
         "roughness": 0.0
      },

      {
         "bsdf": "ggx_refl",
         "color": [0.55, 0.05, 0.03],
         "roughness": 0.4,
         "ior": 1000
      },

      {
         "bsdf": "ggx_refl",
         "roughness": 0.0,
         "ior": 1.6
      }
   ]
}

@juancg

juancg commented Apr 27, 2015

Hi everyone, I guess I arrived too late to this party, but it is an interesting topic. Anders's proposal is quite similar to what we have been doing at Maxwell for many years (individual BSDFs stacked in layers), so if you are interested in that approach I would be willing to collaborate.

@WestLangley
Collaborator

@juancg If you are willing to provide a simple, live example to demonstrate your ideas, that would be welcome. :-)

@juancg

juancg commented Apr 28, 2015

Hi West,

By combining BSDFs and stacking these combinations into layers, I mean this:

https://drive.google.com/file/d/0B1HxSWiw19IaNUFsTzVDY0tSd0E/view?usp=sharing

Layers are evaluated top to bottom, and they are basically opacity evaluations driven by a value or a map.

Inside each layer, BSDFs are combined like paints in a can (with the difference that with paints colors are generated through subtraction, while here they are added -- i.e. blue + yellow does not make green but white). Different BSDF blending models can be exposed, although, as mentioned, it is not clear whether leaving a lot of room for interpretation is a good or a bad thing here. One of the trickiest questions is whether fancy blending modes like "additive" are exposed; making them compatible with a system that always preserves energy conservation is not a simple thing.

As Anders says, all of this without material networks is quite limiting, especially in some sectors, but we have been using this system for many years and it has shown itself to be quite flexible.
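In shader terms, those semantics would look roughly like this (an illustrative sketch, not Maxwell's implementation; evalBsdfSum is hypothetical):

const int NUM_LAYERS = 3;
uniform float layerOpacity[NUM_LAYERS]; // value- or map-driven weights

// Hypothetical: the additive combination of the BSDFs inside layer i.
vec3 evalBsdfSum(int i, vec3 N, vec3 V, vec3 L);

vec3 evalLayered(vec3 N, vec3 V, vec3 L) {
    vec3 result = vec3(0.0);
    float remaining = 1.0; // energy left over for the layers below
    for (int i = 0; i < NUM_LAYERS; i++) { // layers evaluated top to bottom
        float w = layerOpacity[i];
        result += remaining * w * evalBsdfSum(i, N, V, L);
        remaining *= 1.0 - w;
    }
    return result;
}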

@bhouston
Contributor Author

@juancg! Long time no chat. We met at SIGGRAPH in LA, I think two years ago; I was the Exocortex Alembic guy. I think supporting layered materials is amazing, and thanks for the guide! One issue is that we would need to completely rewrite how shaders are created, because we would now need loops and not just simple search-and-replace code.

@bhouston
Contributor Author

bhouston commented May 4, 2015

@juancg I have a question about multi-layer materials. How does one handle normals and bump maps? Is there only a single normal/bump map per material or do they differ between layers? If there are multiple normal/bumps in a layered material, then I assume that they (normal/bump) are additive from bottom layer to top layer (so the top layer is the sum of all normal/bumps of the lower layers)?

@juancg

juancg commented May 4, 2015

Hi Ben, of course I remember you, good to talk again :)

Yes, there are multiple normal/bump maps, one per BSDF actually. Every time you evaluate a BSDF, you also evaluate its normal map if one is active.

@tstanev
Contributor

tstanev commented May 28, 2015

Regarding textureCubeLod vs. using bias -- using bias did not work well for us (there are visible artifacts, and the choice of base mip is zoom dependent and decided by the graphics driver), and we had to use textureCubeLod, when available, to get rid of the artifacts.

@wrr
Contributor

wrr commented May 29, 2015

@tstanev one workaround that should allow simulating textureCubeLod when it is not supported is to store the LoD of each mip in the alpha channel (or in a separate cube texture of the same size). Then one texture read (without a bias) determines the base LoD that the GPU uses, and a second read can set the bias to (required LoD - used LoD).
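In code the trick is roughly this (a sketch; MAX_LOD and the alpha encoding are illustrative):

uniform samplerCube envMap; // alpha stores (mip level / MAX_LOD), baked into each mip

vec3 sampleAtLod(vec3 dir, float requiredLod) {
    const float MAX_LOD = 8.0; // illustrative
    // First read, no bias: alpha reveals which LoD the GPU actually used.
    float usedLod = textureCube(envMap, dir).a * MAX_LOD;
    // Second read: bias by the difference to land on the LoD we need.
    return textureCube(envMap, dir, requiredLod - usedLod).rgb;
}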

@bhouston
Contributor Author

@wrr, that is brilliant. Although if one has to specify the LOD manually in the mipmaps anyway, one can just load up four cube maps (as many as you have LOD levels) and query them manually -- I believe this is what Sketchfab does.
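Something like this (an illustrative sketch, not Sketchfab's actual code):

uniform samplerCube envMap0; // sharpest
uniform samplerCube envMap1;
uniform samplerCube envMap2;
uniform samplerCube envMap3; // blurriest

vec3 sampleEnvManual(vec3 dir, float roughness) {
    float level = roughness * 3.0;
    float f = fract(level);
    vec3 a, b;
    if (level < 1.0)      { a = textureCube(envMap0, dir).rgb; b = textureCube(envMap1, dir).rgb; }
    else if (level < 2.0) { a = textureCube(envMap1, dir).rgb; b = textureCube(envMap2, dir).rgb; }
    else                  { a = textureCube(envMap2, dir).rgb; b = textureCube(envMap3, dir).rgb; }
    return mix(a, b, f); // blend the two nearest roughness levels
}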

@wrr
Contributor

wrr commented May 29, 2015

@bhouston right, this is also what this demo does: http://www.alexandre-pestana.com/webgl/PBRViewer.html but such an approach uses more texture units and requires ifs in the shader code. One nice thing is that if a material doesn't have a roughness map but a constant roughness, two cube textures are always sufficient.

@tstanev
Contributor

tstanev commented May 29, 2015

@wrr Yes, I know this approach, but we could not encode the mip in alpha, because we needed all 4 bytes for our HDR encoding. Using a second texture of the same size would work, and we could try that on devices that don't have textureLod, I suppose. We also use the approach of binding each mip level as a separate texture in our SSAO implementation, but that's indeed a lot of "if"s.

@ghost

ghost commented Jul 25, 2015

Hi guys,

I would like to know if the MeshPhysicalMaterial is now a reality, or still a work-in-progress project in development?
I will need clean PBR shaders for three.js for a future project, so I would like to know the status of the shader before starting to reinvent the wheel. :)

@WestLangley
Collaborator

So I would like to know the status of the shader

It is currently a work-in-progress.

@ghost

ghost commented Jul 26, 2015

Thanks for your answer West !

@kearwood

kearwood commented Sep 4, 2015

Would enabling textureCubeLOD in Firefox help in landing this in ThreeJS?

We have a bug tracking our progress on this:
https://bugzilla.mozilla.org/show_bug.cgi?id=1111689

@bhouston
Contributor Author

bhouston commented Sep 5, 2015

The texture LOD solution isn't a universal solution in WebGL 1.0. We need seamless cubemaps, but Chromium won't implement them on Linux and has recommended removing them on Windows -- see the discussion here: https://code.google.com/p/chromium/issues/detail?id=479753 WebGL 2.0 fixes the issue, though.

@pyalot

pyalot commented Sep 9, 2015

@bhouston @mrdoob please note that there are adequate alternatives to cubemaps that are easy to look up and atlas. One such alternative is octahedral environment maps (there's a chapter about them in WebGL Insights and this is a helpful paper). Unlike cubemaps, you can also ensure that octahedral maps are seamless at every mip level (something that would require yet another extension for seamless cubemaps in WebGL, although WebGL2 cubemaps are seamless by default).

Below is the octahedral atlas I'm using, horizontal dimension is miplevels, vertical dimension is roughness.

[image: octahedral radiance atlas]

And this is a comparison between mipmapped and non mipmapped octahedral mapping:

[image: mipmapped vs. non-mipmapped octahedral mapping comparison]

There are also two variants of octahedral mapping, rectangular and spherical; see a comparison here:
[image: rectangular vs. spherical octahedral mapping comparison]

Here's a gist with the code to map both mapping types forward and reverse: https://gist.github.com/pyalot/cc7c3e5f144fb825d626 . I have a small artifact with the spherical variant in the X and Z axes that I haven't been able to figure out yet.
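For readers who don't want to open the gist, the rectangular encode is essentially the standard octahedral mapping (see Cigolle et al.'s survey of unit vector representations); roughly:

vec2 octWrap(vec2 v) {
    return (1.0 - abs(v.yx)) * vec2(v.x >= 0.0 ? 1.0 : -1.0,
                                    v.y >= 0.0 ? 1.0 : -1.0);
}

// Map a unit direction onto [0, 1]^2 (rectangular variant).
vec2 normalToUvRect(vec3 n) {
    n /= abs(n.x) + abs(n.y) + abs(n.z); // project onto the octahedron
    vec2 uv = n.z >= 0.0 ? n.xy : octWrap(n.xy); // unfold the lower half
    return uv * 0.5 + 0.5;
}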

For lookups into the atlas I use fwidth of the normal to come up with a measure of angular change; a complete lookup is 4 taps. Below is the code for illustration.

uniform sampler2D textureRadiance;
uniform vec2 radianceSize;
const float PI = 3.14159265359; // not a GLSL built-in; needed by getRadiance below

// Fetch from a given mip of an octahedral slice; hOffset locates the mip
// within the horizontally packed atlas.
vec3 getRadianceMip(vec2 uv, float vOffset, float lod){
    float size = pow(2.0, lod);

    float hOffset = pow(2.0, lod)-1.0 + lod*2.0;
    vec2 texcoord = (vec2(hOffset, vOffset)+1.0+uv*size)/radianceSize;
    return texture2D(textureRadiance, texcoord).rgb;
}

// Sample one roughness slice, selecting (and blending) mips based on the
// angular footprint of the lookup.
vec3 getRadianceSlice(vec2 uv, float slice, float angularChange){
    float size = max(128.0, pow(2.0, slice+4.0));
    float offset0 = 130.0*min(slice,4.0);
    float i2 = max(slice-4.0, 0.0);
    float offset1 = pow(2.0, i2+8.0) - 256.0 + 2.0*i2;
    float vOffset = offset0 + offset1;

    float maxLod = log(size)/log(2.0);

    float pixelsPerChange = size*0.7*angularChange; // approximately 1/sqrt(2)
    float lod = log(pixelsPerChange)/log(2.0);
    lod = clamp(maxLod-lod, 0.0, maxLod);

    return mix(
        getRadianceMip(uv, vOffset, floor(lod)),
        getRadianceMip(uv, vOffset, floor(lod)+1.0),
        fract(lod)
    );
}

// Blend between the two nearest roughness slices for the given direction.
vec3 getRadiance(vec3 dir, float roughness){
    float angularChange = acos(dot(normalize(dir+fwidth(dir)), dir))/PI;

    vec2 uv = normalToUv(dir);

    float slice = (1.0-roughness)*6.0;
    float slice0 = floor(slice);
    float slice1 = slice0 + 1.0;
    float f = fract(slice);

    vec3 color0 = getRadianceSlice(uv, slice0, angularChange);
    vec3 color1 = getRadianceSlice(uv, slice1, angularChange);

    return mix(color0, color1, f);
}

@bhouston
Contributor Author

bhouston commented Sep 9, 2015

@pyalot This is awesome. How can we generate these beforehand with GGX or Phong convolutions?

@WestLangley was working on a JavaScript command-line tool for precomputing convolved cubemaps. That may be a project we want to include in ThreeJS for the general community. I'd make it a node.js program, personally.

It seems like a moderately inefficient packing of the resulting textures, but I guess that is for speed of access. I just worry about the size of the texture in GPU memory; it seems like it uses twice as much space as it really needs.

You know, you could actually use the manual LOD control features already available in the texture sampling extensions with this -- just use them on a single flat texture rather than the cubemap.
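For example, reusing the names from the shader above (a sketch):

// Explicit-LOD lookup on the flat atlas (requires EXT_shader_texture_lod):
vec3 c = texture2DLodEXT(textureRadiance, texcoord, lod).rgb;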

@pyalot

pyalot commented Sep 9, 2015

@bhouston I'm generating them using convolution in conjunction with importance sampling (if below 20°) and uniform sampling above; it's implemented on the GPU by additively blending into floating-point textures with about 200 samples per pass.

The convolution I use is Lambertian reflectance, because I've figured out a way to parametrize the exponent of the reflection in terms of an angular cutoff. That's quite useful for picking an appropriate exponent for a given desired angular resolution. The angular resolution is implied by the progression through the levels and therefore halves at each successive roughness slice.
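One way such a parametrization can be derived (a sketch of the idea; the exact formula used here may differ):

// Pick the exponent n so that the lobe cos(a)^n falls to one half
// at the desired cutoff angle: solve cos(cutoff)^n = 0.5 for n.
float exponentForCutoff(float cutoffAngle) {
    return log(0.5) / log(cos(cutoffAngle));
}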

You could probably describe GGX in terms of an angular cutoff too, but it's far less easy.

@bhouston
Contributor Author

bhouston commented Sep 9, 2015

@pyalot would you want to release this to ThreeJS so that the community can build upon it?

I think we need a convolution generator as part of the ThreeJS project if we are serious about PBR. I can imagine that over time it could become very powerful with a ton of features, as most things that become part of ThreeJS do. It would be best if it could run without a GPU in node (e.g. via headless-gl) so that it can be automated easily.

@pyalot

pyalot commented Sep 9, 2015

@bhouston I'll probably eventually release some convolution envmapping tool. It's not written in three.js.

@bhouston
Contributor Author

Completed by #8237

@mrdoob
Owner

mrdoob commented Feb 27, 2016

Hurray! Just took us 1 year... 😅
