
[PBR extension] Provide example implementation and specify (suggested) shading model #697

Closed
mlimper opened this issue Aug 26, 2016 · 44 comments
Labels: 2.0, PBR, Physically Based Rendering, specification

Comments

@mlimper
Contributor

mlimper commented Aug 26, 2016

We need a simple, yet expressive reference application, and especially example shader code for reference.
Also, the shading model we use must be concretely defined, either in the spec or in an appendix, as a suggested shading model.

This also involves creating corresponding, extensive documentation in the spec, especially about the concrete shading model and the exact behaviour / semantics of its parameters.

@pjcozzi added the PBR Physically Based Rendering label Aug 26, 2016
@cedricpinson

There are reference implementations that are well documented.

There is not only one way to write a PBR shader, so if we provide an example / code sample, we will need to choose one implementation.

@erich666

erich666 commented Oct 7, 2016

There's "documented" and there's "coded" - I massively prefer coded, as code is clear. One problem with the ancient VRML specification was that the shader (and other) behavior wasn't specified, and 5 browser plugins and viewers implemented 5 different models (gamma correction "per surface" on and off, specular computed with the light at infinity vs. locally (back when we used to carefully count instructions), etc.). Having a consistent look among viewers of glTF PBR is critical - done right, you'll see on your screen what I'm seeing on my screen. We don't have to go so crazy as to implement ICC profiles, assume sRGB, but there's no reason that we shouldn't get everything else right.

As an example: in talking with Brian Karis about his SIGGRAPH 2013 paper and how it bears on this proposal, here's his reply to a long email of questions I sent him.

You wrote here: "We found variable index of refraction (IOR) to be fairly unimportant for nonmetals, so we have recently replaced Specular with the easier to understand Cavity parameter."

BK: That was dumb. We didn't end up doing that. We still have Specular and use it now more than we did.


So going by what's in an article can be misleading at best. There are also times when an article says one thing, but the actual code does something different, for any number of reasons.

I thought I'd pass on some other comments from Brian, since they may be of interest.

Also, to me a separate specularFactor and glossinessFactor is a tad odd; it definitely implies that you're using two different reflection maps (one sharp, one blurred to some fixed amount), period, and that you want to control each separately.

BK: Specular color controls reflectivity amount. Roughness or gloss in their case controls the blurriness of that reflection. There are not and should not be separate settings for analytic lights vs environment reflections. For prefiltered environment map reflections the roughness controls which mip level of the prefiltered environment map you use. It doesn't just have one fixed amount of blur and lerp to it.
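In shader terms, that roughness-driven mip selection looks roughly like this (a sketch only; the uniform names and the simple linear roughness-to-LOD mapping are assumptions, and textureCubeLodEXT requires the EXT_shader_texture_lod extension in WebGL 1):

#extension GL_EXT_shader_texture_lod : enable
precision mediump float;

uniform samplerCube u_PrefilteredEnv; // mips hold increasingly blurred prefiltered versions
uniform float u_MaxMipLevel;          // e.g. 8.0 for a 256x256 cubemap

vec3 sampleEnvironment( const in vec3 reflectDir, const in float roughness ) {
    // Rougher surfaces read blurrier (higher) mip levels; the exact
    // roughness-to-LOD mapping is engine-specific - linear is the simplest.
    float lod = roughness * u_MaxMipLevel;
    return textureCubeLodEXT( u_PrefilteredEnv, reflectDir, lod ).rgb;
}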

The proposal says:
metallicFactor : FLOAT, range [0, 1], default 1 - "The metallic-ness of the material (1 for metals, 0 for non-metals)."
Can that number be anything but 0 or 1? (Semiconductors?)

BK: It is useful for mixed materials or layered materials. Dusty metal for example. Since the lerp isn't exactly linear, anything that is not binary Metallic won't really be correct. DiffuseColor and SpecularColor is completely linear which makes it better for layering.


Eric here: I want to point out that the lerp issue is another case where you want to have code showing what you think is appropriate. Karis and others recommend a non-linear lerp.


Final comment from Karis:

BK: Epic wouldn't really be interested in this file format due to its very limited fixed function. We have some interest in an open format for sharing material node graphs like what Lucasfilm's MaterialX is trying to do.


I think that's fine, by the way. The PBR we give should be a reasonably full-featured model, but not kitchen-sink level. If someone wants a different shader, they can write their own and include it in glTF. Better yet, if they have an implementation handy already, they can modify that as needed. If someone wants their content to look the same and they don't want to write a shader, they can use the PBR model provided and be assured that others will see their content as they intended.

What follows is much of my letter from August 24th to Timo, Patrick, and others. I thought it worth including here, since it makes the case for providing one shader implementation for PBR. It's long, and some of it isn't relevant or has already been addressed, but I might as well put it on the record for all to see.


Summary: the proposal is a good start, and I like the idea of a basic PBR material that glTF provides support for. Other options are providing a shader "fragment" for the material itself, or using a full shader fragment system such as NVIDIA's MDL. While I think these ideas may have their place in the long-term, giving a standard modern material helps many users right now, without a lot of coding on the reader or writer side of things. Thanks for getting things going on this front.

My goal (and yours, no doubt) is to have glTF be as good as it can be. Towards that end, I have some suggestions:

  • Drop the specular-glossiness model as part of the glTF proposal. Since it does the same thing as metalness/roughness, it's simpler to give just one model that is supported in glTF and talk about conversion to and from the other description. Minimizing the amount added to glTF itself is an important principle, I believe.
  • Optional (suggested by a coworker): rename the roughness/metalness (aka metallic, metallicness) model to simply the Roughness model, since the material describes both metals and non-metals.
  • Precisely define each variable, and how variables interact.
  • Lay out which variables can be replaced (or multiplied) by textures and how this works.
  • Add bump and normal maps, as these are common types of texture maps. I would follow three.js's lead for the most part (displacement maps, where vertex locations are changed, I'm not so sure about).
  • And, as you propose, give a sample implementation program, preferably with sliders or dropdowns for every variable you can. Luckily, three.js already has a program that gets you much of the way there.

My background on this: VRML had a serious problem with consistent shading models - results varied from program to program as people implemented specular highlights in different ways, with the specular power used differently, gamma correction on or off, etc. I wrote a little program showing how the equations should work, to try to get people back on track: http://www.siggraph.org/education/materials/HyperGraph/illumin/vrml/pellucid.html (no longer works, sadly, though it did for about 15 years).

Another point of reference is seeing how the old-but-still-popular Wavefront OBJ format has had some serious loose ends over the years. One undefined area is what is supposed to happen when a material has a diffuse color and a texture map that affects this color. Should the texture's color replace or multiply (aka "modulate") the solid color? 3DS MAX has the texture color replace, for example; I've seen it done differently in other systems.

This is the sort of thing that should be carefully laid out in a specification. If multiplication is done for these two colors, it's probably not done for roughness components, i.e. they're not multiplied together. Multiplying colors has some sense to it, similar to combining spectra; multiplying roughness values together certainly doesn't. However, three.js does exactly this, multiplying the roughness value by the roughness texture value, leading to a strange asymmetry. I've logged this as an issue: mrdoob/three.js@e7b3717

Another example, in Wavefront OBJ the "Tr" material attribute should be equivalent to 1 minus "d". "Tr" is not a part of the original spec, but somehow slipped in over the years. A mistake some OBJ readers (including a few at Autodesk, which I've gotten fixed) make is to consider Tr == d. This leads to bugs such as objects disappearing if Tr is set to 0.0 (fully opaque). Again, explicitly defining all variables and their interactions is critical.

These seemingly minor details bear on how PBR is defined for glTF. You mention, "PBR is more of a concept than a strict set of rules, and as such, the exact implementations of PBR systems tend to vary." I wouldn't go that loose, and that's currently my major concern. There are different approximation equations for various elements in the equations, but the basic equations and approach are not just a concept but have a lot of theory and research behind them. For example, some form of gamma correction (input and output; color textures linearized coming in, output fragments gamma corrected out) must be used so that the computations are in linear space. Otherwise calling the shading PBR is not sensible, and such things as having two lights overlap will look weird - would you agree?

One weak area of the specification is that physically-based lights are not addressed. Such lights have physically-based units. There is also a whole murky area of the various types of tone mapping and which should be preferred or available to users, and how exposure control should be done. However, given that lights are in some sense separable from materials, and that basic WebGL has problems with gamma correction alone right now, I am fine with leaving this area alone with this proposal. That said, gamma correction in some form does matter. If you are not convinced for some reason, try this demo, turn off gamma correction and set both lights to 0.2 - the area they overlap is excessively bright compared to the separate light sources. Three.js, for backwards compatibility purposes, uses a gamma correction factor of 2.0, not 2.2. More importantly, each fragment is gamma corrected on output, vs. taking the final image and gamma correcting it. This matters for transparent objects and for antialiased edges. However, WebGL isn't really up to the task, and to do gamma correction properly you normally need to use 16 bits per channel and a post-processing pass at the end to get the final image - expensive for mobile or slower PCs.

Transparency is another area that is not addressed in the specification. Again, it might be fine to ignore this whole area for now, at best providing a traditional "blend" transparency factor and stress that the model presented is for basic opaque materials and transparency is just basic alpha blending (no Fresnel involved, for example). Similarly, clear coat materials (car paints), anisotropic materials (brushed metal, carbon fiber, etc.) are not included, which I think is fine for now. A basic, solid PBR material will be reasonable for 95% of materials for 95% of users, and can be expanded in future extension proposals.

To get back to your proposal as a whole, the point here is that a single well-defined material model makes for fewer chances for misinterpretation. Having two descriptions (glossiness vs. roughness) giving the same result complicates all glTF file readers for little gain. The only use I see in having two descriptions is from a user interface standpoint: seeing glossiness get used means the UI could have a glossiness label on its slider instead of a roughness label. This sort of information about UI seems better to include somehow else, in metadata or similar, since it's not important for rendering per se. I would therefore drop the specular/glossiness model as part of the specification, mentioning only how to convert to and from it.

We also plan to provide sample code at some point, but unfortunately didn’t manage to do so until now.

My advice is to definitely give a sample implementation, as the code is then ultimately the truth. If you're creating content, you want to know that the receiver of that content is seeing the same thing on their screen as you are. Use three.js's for metal/roughness, for example: http://threejs.org/docs/index.html#Reference/Materials/MeshStandardMaterial - the program's built into the page (and I've submitted a PR that fills in the gaps in its documentation). Having your proposal follow their implementation has a few advantages: there's an existing implementation, and glTF then maps directly to three.js itself, which has a huge user base.

That said, their demo makes the goof of not turning on gamma correction. I tried to use theirs as a starting spot, fixing this, trimming back some controls and adding others, while also using a RawShaderMaterial to show the explicit code in the program itself (i.e., dumping their shader to console.log when compileShader is called). Unfortunately, I ran out of time.

Also, they use slightly different GGX and Schlick Fresnel approximations. For example, three.js doesn't use the typical 5th-order version; they use this (from here):

vec3 F_Schlick( const in vec3 specularColor, const in float dotLH ) {
    float fresnel = exp2( ( -5.55473 * dotLH - 6.98316 ) * dotLH );
    return ( 1.0 - specularColor ) * fresnel + specularColor;
}

They also use "G_GGX_SmithCorrelated," from Frostbite 3 (code here) which uses the two G terms computed slightly differently, then added together, and tested against an epsilon, vs. multiplying these together:

float G_GGX_Smith( const in float alpha, const in float dotNL, const in float dotNV ) {
    // pow2() and EPSILON are helper defines from three.js's shader chunks
    float a2 = pow2( alpha );
    float gl = dotNL + sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNL ) );
    float gv = dotNV + sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNV ) );
    return 1.0 / ( gl * gv );
}

float G_GGX_SmithCorrelated( const in float alpha, const in float dotNL, const in float dotNV ) {
    float a2 = pow2( alpha );
    float gv = dotNL * sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNV ) );
    float gl = dotNV * sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNL ) ); // note how dotNV and dotNL are used here in the same line, vs. above
    return 0.5 / max( gv + gl, EPSILON );
}

I can't say I've studied the difference. So, there's some variation in implementation, but I think this is solvable by saying "here are a few acceptable GGX implementations, we personally prefer this one," instead of "use these parameters however you want."

Some comments from a coworker at Autodesk who knows much more about shading models than I do:

I agree with Eric's overall impression that it is loosely defined and enables possible (mis-)interpretations. In my experience, even if one tries to make things 100% formal and mathematically unambiguous from the start, mismatches and tiny ambiguities can show up. Being loose makes the problems much more likely.

Roughness vs. glossiness. (User-facing) roughness is now commonly defined such that alpha (the coefficient in Beckmann, GGX, or classic Ward BRDF) is obtained as alpha = roughness^2; this convention is used by Disney, Prism materials, Substance, and possibly others. It has a reasonably "perceptually linear" feel to it, while being simple and easy to remember.

In our experience, undefined or vaguely defined glossiness is a common reason for mismatches between renderers. In my opinion, it is better to stick to roughness across the board. In other words, the metal/roughness model should ideally be the only model (and there's no reason to have "metal" in the name, since it works well for non-metallic materials).

Should all components be texturable? This can be very useful. How about bump and normal maps? Are they disallowed for now? Good to be explicit.

Metallic factor: In my opinion, this can be a useful parameter, but it should be defined precisely what fractional values mean: Linear blend? Of what exactly? A natural solution is as follows. For metals, f0 should be set to base color and diffuse to zero; for dielectrics, f0 should be set to 0.04 and diffuse to one. Fractional metallicness will then linearly blend between these cases. [this linear blend is what three.js does - Eric]
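For concreteness, the linear blend described here (which is also what three.js does) is just the following in GLSL - a sketch, where baseColor and metallic stand for the glTF material inputs:

// Linear metallic blend (sketch; baseColor and metallic are the glTF inputs):
const vec3 dielectricF0 = vec3( 0.04 );
vec3 f0      = mix( dielectricF0, baseColor.rgb, metallic ); // reflectance at normal incidence
vec3 diffuse = baseColor.rgb * ( 1.0 - metallic );           // metals have no diffuse term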

Diffuse term: the formulation from the appendix is a bit questionable, because it depends on the half-vector h, which is an inherently "specular" concept. However, it may be OK for the application.

The danger of "do what you want with these parameters" is that you can then get wildly varying rendering results. For example, the important "roughness" value is treated differently by Burley than by Walters. Burley writes, in https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf, section 5.6:

For our model, we took a hybrid approach. Given that the Smith shadowing factor is available for the primary specular, we use the G derived for GGX by Walter but remap the roughness to reduce the extreme gain for shiny surfaces. Specifically, we linearly scale the original roughness from the [0, 1] range to a reduced range, [0.5, 1], for the purposes of computing G. Note: we do this before squaring the roughness as described earlier, so the final alpha.g value is (0.5 + roughness/2)^2.

This remapping was based on comparisons with measured data as well as artist feedback that the specular was just "too hot" for small roughness values.
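In code, that remap is a one-liner (using the pow2 helper from the three.js snippets above):

// Disney's remapped alpha for the geometry term, per the quote above:
// scale roughness from [0, 1] to [0.5, 1], then square.
float alphaG = pow2( 0.5 + 0.5 * roughness );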

There's no reason for people to puzzle out whether they're using Walter or Burley roughness if a sample implementation is given - they can look and see what the code does. Equations aren't enough. The typo I pointed out in equation 5 ("(n+v)" should be added, not multiplied) would likely have been caught if put into code and tested. That said, even code can go wrong, but at least it's more likely to be correct than written equations.

One headache is gamma correction on output, or more properly, sRGB. I mentioned these problems above. In this case I'd lay out that the perfect thing would be to use sRGB output, but that there are fallback positions: basic gamma correction is fine. Using 2.0 for the gamma correction factor is not so great, but acceptable. Gamma correcting the fragments and not the final scene is an acceptable optimization for efficiency, given the limits of WebGL itself. Short and sweet; you don't have to get into the details but could simply provide references and the logic of why these choices are made.

Ultimately, even if we try to lock things down with code, there could still be misinterpretations if the example program doesn't cover all possibilities. For example, if HDR file formats are later added to glTF, they could cause problems with interpretation. Values in such formats are normally stored in linear terms, so they don't need gamma linearization before sampling. Someone will probably still do so, since JPEG, PNG, and GIF textures all need this operation. Still, starting out from something as well-defined as we can, vs. a loose interpretation, should give applications a chance of better fidelity, of having two viewers show the same scene in nearly the same way, as the content creator intended. Most content creators want their consumers to see the same thing as what they've created. That argues for a solid reference implementation; others might simplify or modify this reference code for various reasons - performance, or some extension they want, or compatibility with their own content creation ecosystem - but then they know they're straying from the path, vs. accidentally making implementation errors.

Anyway, I suspect I'm preaching to the choir, since you want a reference implementation yourselves, but I hope this adds to your motivation to create one.

I hope these comments are taken in the spirit that I mean to give them. I think it's great that you've made a proposal. I've seen other 3D file formats have problems with loose or poor specification, so hope I have warned you sufficiently of the pitfalls.

@erich666

erich666 commented Oct 7, 2016

In that long letter I said I didn't think you needed to specify lighting all that well, hoping that you could separate out sampling from the material model itself. But, hmmm, thinking about it, there are definitely decisions such as what mipmap level to use with various values of roughness (and how to form the mipmap levels, for that matter) for environment map lighting. So I think that's a dream on my part.

I keep circling back to "just adopt a subset of the three.js MeshStandardMaterial shader system" (with gamma required to be on) and call it a day. You get an implementation (just document which of the zillion parameters are not required for a base implementation, e.g. aoMap, emissiveMap, etc.), and three.js's model then matches glTF's and vice versa. The main dangers are:

  • The three.js guys like to improve things. They don't usually get held back by backwards compatibility issues. So what happens if/when their code changes?
  • The three.js guys are not perfect, and make their own choices (e.g. the color map affects diffuse and specular colors, but there's no way to specify that it should affect only one or the other). They also have some inertia sometimes, e.g. their gamma is 2.0, not 2.2, for backward compatibility.

@mlimper
Contributor Author

mlimper commented Oct 18, 2016

@erich666 Thanks a lot for all your comments, and for starting to work on an example application!

We also thought it would be cool to have a standalone example application that shows the path from the glTF PBR material parameters to the final shader. I even thought about a very straightforward WebGL-only app (which I have done before), but in the end I was convinced that a simple Three.JS app will have far less boilerplate code, so it might be the way to go. I did commit a very dumb initial state three weeks ago, which was (and still is) just a clone of a modified Three.JS example. Our plan was actually to do something like this:

  • Provide an example glTF file, using the regular JSON encoding (for enhanced readability). The example should contain a draft spec-conformant PBR material description. Variants with both parameter sets are possible.
  • Use some custom code to load the PBR-specific parts of the glTF asset.
  • Have some well-commented shader code that renders the asset.
  • Use environment maps where necessary, but make clear in the comments / code structure that they do not belong to the actual PBR material description.
  • If possible / necessary, provide multiple examples, going from as-simple-as-possible to more complex ones:
      • Simple example: four different spheres without textures, just constants, using different materials.
      • More complex example: Cerberus model with different textures.

Just sharing thoughts here... very happy that someone who knows more about advanced shading models than me just picked up this topic and actually started coding ;-)

The three.js guys like to improve things. They don't usually get held back by backwards compatibility issues. So what happens if/when their code changes?

With my initial plan outlined above, I would say that doesn't matter: in the case of the example application, Three.JS would just serve as a helper library, served along with the example and intended to facilitate the creation of a context etc. The actual processing of the glTF parameters, the shaders (including gamma etc.), should all be done inside the actual application code, for demonstration purposes.

Does that help / make sense?

If you are interested, we could also jointly work on the example app.

@erich666

the actual processing of the glTF parameters, shaders including gamma etc. should all be done inside the actual application code, for demonstration purposes.

Exactly, that's what I was going to aim for. Just make the GLSL standalone in the program itself, isolating it from the three.js code that forms the shader. Having real code is a great thing, and it's easy to translate to other languages.

@emackey
Member

emackey commented Oct 26, 2016

Here are some tests of PBR in Three.js I have been working on. They are not in glTF format currently, but it's easy to imagine converting them to glTF models.

@mlimper
Contributor Author

mlimper commented Oct 30, 2016

I just built a simple PBR example in Three.JS, using a basic custom shader without any textures (following the suggestion of @erich666) - the result should exactly match the one obtained using THREE.MeshStandardMaterial. The shader code is reduced to the necessary parts, though; I also did some formatting and added comments to make it a bit more readable. The application shows a matrix of spheres with varying "roughness" / "metalness" values.

Here's the first result:
tsturm@e0e419a

And here's a screenshot: [image: matrix of spheres with varying metalness / roughness]

The shader shows that the lighting computations use Lambert for the diffuse term. For the Cook-Torrance specular computation, they use the GGX distribution, Schlick Fresnel, and Smith for the geometry term. With this example code and the appendix of the draft spec, we can now check how well the example aligns with the proposed formulations, and adapt the code and formulas according to what we believe is correct and best-suited for use within glTF. Finally, imagery generated by the example application(s) could also be used to illustrate the impact of the different parameters.
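For reference, these pieces combine in the usual Cook-Torrance form. Here's a sketch: D_GGX is filled in for completeness (only F and G were quoted earlier in this thread), and pow2 / RECIPROCAL_PI are the usual three.js helper defines:

// GGX (Trowbridge-Reitz) normal distribution - included for completeness,
// since the thread quotes F_Schlick and G_GGX_Smith but not D.
float D_GGX( const in float alpha, const in float dotNH ) {
    float a2 = pow2( alpha );
    float denom = pow2( dotNH ) * ( a2 - 1.0 ) + 1.0;
    return RECIPROCAL_PI * a2 / pow2( denom );
}

// Cook-Torrance specular term assembled from the three approximations.
// Note: G_GGX_Smith as quoted earlier returns the combined visibility
// term 1.0 / ( gl * gv ), which already contains the 4 * dotNL * dotNV
// denominator of the microfacet BRDF.
vec3 BRDF_Specular_GGX( const in vec3 specularColor, const in float alpha,
                        const in float dotNL, const in float dotNV,
                        const in float dotNH, const in float dotLH ) {
    float D = D_GGX( alpha, dotNH );
    vec3  F = F_Schlick( specularColor, dotLH );
    float G = G_GGX_Smith( alpha, dotNL, dotNV );
    return F * ( G * D );
}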

@emackey @erich666 What do you think, does that make sense?

@mlimper
Contributor Author

mlimper commented Oct 30, 2016

@cedricpinson What do you think about the approximations used?

Moving Frostbite to PBR (Sébastien Lagarde & Charles de Rousiers): http://blog.selfshadow.com/publications/s2014-shading-course/

I just checked that talk, and I get the impression that Schlick for Fresnel and GGX for the normal distribution are already pretty standard - they mention a specific variant of the Smith geometry term, however ("height-correlated Smith"). What is your experience / opinion of these terms - do you have a specific configuration which you believe would work well?

@erich666

erich666 commented Oct 30, 2016

Nice to see! I'm definitely interested in what you find if you use G_GGX_SmithCorrelated (see earlier in this thread) vs. G_GGX_Smith.

Also, is there any noticeable difference if you use the "classic" Schlick? Epic's Schlick has the formula shown earlier, or just see their page 3. They say it looks the same and is a tiny bit faster to compute. I'd go with it, but it'd be easy to check against the original approximation by Christophe Schlick '94:

float fresnel = pow( 1.0 - dotLH, 5.0 );

Finally, to make sure, you do some sort of gamma correction or sRGB at the end, right? So that all the equations are indeed being evaluated in linear space, then displayed in sRGB.

@mlimper
Contributor Author

mlimper commented Oct 30, 2016

Finally, to make sure, you do some sort of gamma correction or sRGB at the end, right? So that all the equations are indeed being evaluated in linear space, then displayed in sRGB.

Good point - in the original shader code (which I got using the FF Shader Debugger) there was a function for that, but for some reason it was not used inside the shader generated by Three.JS (so I assume the original output was in linear space). The function for linear-to-sRGB conversion, however, exists as a built-in Three.JS shader chunk - here's the code:

vec4 LinearTosRGB( in vec4 value ) {
    return vec4( mix( pow( value.rgb, vec3( 0.41666 ) ) * 1.055 - vec3( 0.055 ),
                      value.rgb * 12.92,
                      vec3( lessThanEqual( value.rgb, vec3( 0.0031308 ) ) ) ),
                 value.w );
}

I just added a corresponding call at the very end of the fragment shader, so the latest version in the repo should have sRGB output now.

@mlimper
Contributor Author

mlimper commented Oct 30, 2016

P.S.: regarding sRGB, we might still need to adapt the values read from the input colors of lights and materials.

@erich666

Quick addition - thanks!

regarding sRGB, we might still need to adapt the values read from the input colors of lights and material

Yes, that's important. If the user picks a color in any UI, it should get de-sRGBed when used.
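For reference, the decode is the exact inverse of the LinearTosRGB chunk quoted above (this matches the standard sRGB transfer function; the function name here is illustrative):

// sRGB -> linear, the inverse of LinearTosRGB above.
// Note the threshold: 0.04045 = 0.0031308 * 12.92.
vec4 sRGBToLinear( in vec4 value ) {
    return vec4( mix( pow( ( value.rgb + vec3( 0.055 ) ) / 1.055, vec3( 2.4 ) ),
                      value.rgb / 12.92,
                      vec3( lessThanEqual( value.rgb, vec3( 0.04045 ) ) ) ),
                 value.w );
}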

@cedricpinson

@mlimper We use the same as UE4 from Epic, per the notes: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

I strongly suggest testing with environments instead of directional lights. It's a bit more complex to generate an environment, but it's also where PBR really improves the rendering quality.
I wrote an offline tool to prefilter environments, https://github.com/cedricpinson/envtools, based on the UE4 code; it also uses some optimizations from this post:
https://placeholderart.wordpress.com/2015/07/28/implementation-notes-runtime-environment-map-filtering-for-image-based-lighting/

The realtime result is here: http://osgjs.org/examples/pbr/

Ideally you would like to prefilter in 'almost' realtime - something I haven't tried yet, but with the texture_lod and float extensions it should be possible.

@erich666

I strongly suggest to test with environments instead of directional lights.

I agree (though of course this complicates any standalone version). Metal generally looks bad with just directional/point/spot lights.

Without IBL: https://s26.postimg.org/xk9aqtcpl/metal_no_ibl.jpg

With IBL: https://s26.postimg.org/9hsgvxw2h/metal_ibl.jpg

@mlimper
Contributor Author

mlimper commented Oct 31, 2016

@cedricpinson Hey, very cool Web demo - I'll have a look. Are the images stored in some custom format (I just noticed it loads some ".bin.gz" stuff, but didn't check closely)? It would be great to have a nice environment map and an irradiance map (maybe each as a single latitude/longitude map) for our glTF demo. If you have something like that at hand, it would be much appreciated (otherwise, I suppose it will also be possible for everyone to generate something like that using your tool).

I see the benefit of having a minimalistic example, without any textures. But it is also obvious that IBL will help us decide on the approximations we want to use, which might be hard (or impossible) when just comparing results generated with directional lights. We can provide both examples as soon as we have decided on the approximations we want to recommend, and we could proceed with a version with environment maps for now (to see how our shading / approximations perform).

The danger of playing around with environments is that you might quickly come to a point where things get mixed up: we would probably want to keep material description and lighting separate, but that becomes hard when concepts like prefiltered environment maps are used (see also the discussion in this related thread).

Ideally you would like to prefilter in 'almost' realtime, something I haven't tried yet but with texture_lod extension and float extension it should be possible.

That actually sounds like an interesting solution. This way, the related textures could be created on-the-fly, hence they would not be part of the glTF asset - one important open point already solved... ;-)

@tsturm Do you have any experience / code that does on-the-fly creation of filtered environment maps?

For the simple example, filtering environment maps inside the shader could maybe also work in real-time - but maybe just for very small resolutions / few fragments, as it will probably create an excessive number of samples for rough materials... so, unsure whether this would be feasible.

@cedricpinson

Are the images stored in some custom format (just noticed it loads some ".bin.gz" stuff, but didn't check closely)? It would be nice to have a nice environment map and an irradiance map (maybe each as a single latitude/longitude map) for our glTF demo.

Environment is saved like this:

cubemap: every mipmap is appended at the end of a binary file, but the colors are split to improve gz compression. Anyway, it's very easy to read, and there is some code here to read it in JavaScript: https://github.com/cedricpinson/osgjs/blob/master/examples/pbr/EnvironmentCubeMap.js#L91

panorama: same thing, but everything is in one big texture2d and the mipmaps are encoded in Y, like this: http://cedricpinson.com/ParisWebGL2014/images/panorama_inline.jpg . There are better ways to encode it.

There is more information about sample counts / encoding here: http://cedricpinson.com/ParisWebGL2015/?theme=night#/11
Anyway, prefiltering environments for WebGL is clearly a separate subject, though linked to PBR rendering.

The danger of playing around with environments is that you might quickly come to a point where things get mixed up: we would probably want to keep material description and lighting separate, but that becomes hard when concepts like prefiltered environment maps are used (see also the discussion in this related thread).

Agreed - my point is more that describing materials is the easy part. Environment usage involves more tools/code, but environments are really necessary for the rendering; that does not mean they should be included in the spec for now.

@mlimper
Contributor Author

mlimper commented Nov 1, 2016

Thanks for the links, and for the explanation! I guess we're all on the same wavelength.

For the simple example, filtering environment maps inside the shader could maybe also work in real-time - but maybe just for very small resolutions / few fragments, as it will probably create an excessive number of samples for rough materials... so, unsure whether this would be feasible.

Correcting myself here: your OSG.JS example application offers the possibility to use importance sampling; in that case we don't need any prefiltered environment map and still get real-time performance for our example - correct? :-)

@cedricpinson

The importance sampling was there to make some reference tests, but we can't really say it's realtime :) I should remove it from the example; it's probably useless now.

@erich666

erich666 commented Nov 1, 2016

Yes, I'd be surprised (and thrilled) if non-noisy importance sampling worked in real-time. Poking briefly at the code in the sample, I don't think that toggle currently has any real effect, as Cedric notes - OSG.js doesn't use it to form its shader.

NB_SAMPLES looks like it's used, but I don't see any visual effect when I change the value (well, if I change to FLOAT and set NB_SAMPLES to 64 or higher, the shader goes wonky). I look at this example as just that, an example of PBR with environment lighting and background. The same is doable in your three.js framework, just as the MeshStandardMaterial allows you to add an environment map for lighting: click on THREE.MeshStandardMaterial in the menu and choose the envmap.

@pjcozzi mentioned this issue Nov 1, 2016
@cedricpinson

cedricpinson commented Nov 1, 2016

You can, depending on the environment content (there are more issues with high contrast / strong lights). Still, it's doable with some optimizations: https://placeholderart.wordpress.com/2015/07/28/implementation-notes-runtime-environment-map-filtering-for-image-based-lighting/
Yeah, the importance-sampling code in the osgjs sample should be removed.

@bghgary
Contributor

bghgary commented Nov 10, 2016

There were some previous discussions on the differences between metallic-roughness and specular-glossiness shader implementations. I've modified @mlimper's sample to illustrate the math for both workflows in this PR: tsturm#2

See here for a live view: https://bghgary.github.io/glTF/add-specular-glossiness-workflow/

Would love to hear your feedback.

@bghgary
Contributor

bghgary commented Nov 17, 2016

To illustrate the differences between metallic-roughness and specular-glossiness further, I've modified the example a bit more to show how to convert from one workflow to the other, specular-glossiness to metallic-roughness being the potentially lossy conversion (and the more difficult one).

Here is a live view of this: https://bghgary.github.io/glTF/convert-between-workflows/
If you change the values for metallic-roughness, the corresponding values for specular-glossiness will update and vice versa.

See https://github.com/bghgary/glTF/blob/dev/extensions/Vendor/FRAUNHOFER_materials_pbr/example/untextured-pbr/index.html#L572 for the metallic-roughness to specular-glossiness conversion

See https://github.com/bghgary/glTF/blob/dev/extensions/Vendor/FRAUNHOFER_materials_pbr/example/untextured-pbr/index.html#L582 for the specular-glossiness to metallic-roughness conversion
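For orientation, the lossless direction boils down to the following sketch, assuming the conventional dielectric F0 of 0.04 (see the linked code for the exact version used in the page):

const vec3 dielectricSpecular = vec3( 0.04 );

// Metallic-roughness -> specular-glossiness (sketch; the reverse
// direction is the lossy one, see the second link above).
void metallicRoughnessToSpecularGlossiness(
    const in vec3 baseColor, const in float metallic, const in float roughness,
    out vec3 diffuse, out vec3 specular, out float glossiness ) {
    diffuse    = baseColor * ( 1.0 - dielectricSpecular.r ) * ( 1.0 - metallic );
    specular   = mix( dielectricSpecular, baseColor, metallic );
    glossiness = 1.0 - roughness;
}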

Thoughts?

@mlimper
Contributor Author

mlimper commented Nov 17, 2016

This is a very useful example, thanks for providing it. It's really cool to see how the values of the sliders are interconnected.

A small issue: the connection between "metallic" and "diffuse" can currently be used to reach an inconsistent state (change diffuse color - change metallic - change diffuse color again).

Just as a side note, I'll try to integrate environment maps (into the other one first) ASAP. We can then copy-paste the respective parts into your new example.

@sbtron
Contributor

sbtron commented Dec 15, 2016

Hey all
Wanted to share some sample models and example implementation of the extension loading up into Babylon: https://github.com/sbtron/BabylonJS-glTFLoader

The example glTF is using the Spec Gloss workflow:
https://github.com/sbtron/BabylonJS-glTFLoader/blob/master/Models/Telephone/PBR-SpecGloss/Telephone.gltf#L144-L151

The glTFLoader in Babylon parses this data and maps it to existing Babylon PBR constructs -
https://github.com/sbtron/BabylonJS-glTFLoader/blob/master/scripts/babylon.glTFFileLoader.js#L2039-L2118

Note that Babylon's PBR materials support additional channels like normals, which were hooked up even though the spec doesn't quite define those yet. Other engines will also support these, and we should consider adding some of them to the spec as per #699.

I have the spec-gloss workflow working so far. I also have some samples using the metallic-roughness workflow and need to add that capability to Babylon's glTFLoader.

This example wires together existing engine capabilities with the PBR extension. We can pull out pieces from this to help with the reference implementation so that it's easier to follow without being tied to an engine.

Babylon’s PBR documentation –
http://doc.babylonjs.com/overviews/Physically_Based_Rendering_Master

PBR Shader:
https://github.com/BabylonJS/Babylon.js/blob/master/src/Shaders/pbr.vertex.fx
https://github.com/BabylonJS/Babylon.js/blob/master/src/Shaders/pbr.fragment.fx

@pjcozzi
Member

pjcozzi commented Dec 16, 2016

@sbtron really fantastic progress.

We can pull out pieces from this to help with the reference implementation so that it's easier to follow without being tied to an engine.

This is much appreciated! You're also connected with @moneimne and @mlimper on this, yes?

@sbtron
Contributor

sbtron commented Dec 17, 2016

@cedricpinson

cedricpinson commented Dec 22, 2016

I extended http://osgjs.org/examples/pbr/ to support drag-and-drop of a zip file that contains glTF models. The zip file must contain a .gltf file with its binary and textures.

Basically, you can use files from https://sketchfab.com/features/gltf and drag them onto the PBR example.

Just a note about the PBR example: it does not have tone mapping (only a linearToSrgb). If everybody agrees on a simple one, I can add it quickly, in order to have the same as other viewers.
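(For what it's worth, one common "simple" candidate would be Reinhard, applied in linear space before the linear-to-sRGB conversion - purely illustrative, since no operator was agreed on here:)

// Reinhard tone mapping - one simple candidate operator (illustrative only;
// apply in linear space, before the linear-to-sRGB conversion).
vec3 toneMapReinhard( const in vec3 color ) {
    return color / ( vec3( 1.0 ) + color );
}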

In order to read those glTF files we needed to add extra channels to the PBR materials. A full description of our additions is listed here: https://github.com/sketchfab/Unity-glTF-Exporter#pbr-materials

We should probably discuss what we added.

@pjcozzi
Member

pjcozzi commented Dec 28, 2016

@cedricpinson really impressive progress!

it does not have tone mapping (only a linearToSrgb). If everybody agrees on a simple one, I can add it quickly, in order to have the same as other viewers.

@erich666 may have thoughts here.

In order to read those glTF files we needed to add extra channels to the PBR materials. A full description of our additions is listed here: https://github.com/sketchfab/Unity-glTF-Exporter#pbr-materials

@mlimper @sbtron should these new channels be in the PBR spec? It would not be ideal to have to extend the extension for typical use cases.

@erich666

No thoughts from me, other than that having someone actually trying to use glTF PBR is a great way to debug the spec. Thanks, Cedric!

@cedricpinson

I don't want to bother you with more examples, but I published a new version with Unity environments and glTF models that you can test by just dragging a URL onto the main viewport. It's easier to test different environments/models: http://osgjs.org/examples/pbr/

@mlimper
Contributor Author

mlimper commented Jan 2, 2017

@mlimper @sbtron should these new channels be in the PBR spec? It would not be ideal to have to extend the extension for typical use cases.

That's true. @cedricpinson already posted a really nice overview of the usage of the different kinds of maps on Sketchfab:
#699 (comment)

Continuing the discussion about the maps in #699

@bghgary
Contributor

bghgary commented Jan 5, 2017

I have updated my workflow conversion page with new math for specular-glossiness energy conservation, merged @mlimper's environment map changes, and improved the specular-glossiness to metallic-roughness conversion.

https://bghgary.github.io/glTF/convert-between-workflows/

It should match better now.

@javagl
Contributor

javagl commented Feb 6, 2017

Based on the code that was mentioned in #697 (comment) by @mlimper, I created a (valid) glTF 1.1(?) asset - maybe someone finds it useful for testing: https://github.com/javagl/gltfTestModels/tree/master/SimpleSpheres

@cvan

cvan commented Feb 6, 2017

Apologies if y'all have already seen this, but check out this Vulkan engine that supports glTF: https://github.com/jian-ru/laugh_engine#gltf-support

@vorg

vorg commented Feb 11, 2017

Another reference implementation is being worked on in pure WebGL: https://github.com/moneimne/WebGL-PBR

@javagl
Contributor

javagl commented Mar 10, 2017

I don't want to be a downer, and I hope it is clear that my questions are not meant to interrupt the group hugging that happens when intermediate or preliminary PBR results are published (often as screenshots). But I have to ask a very basic question again:

Can there be a meaningful "reference implementation"?

I mean: Is it technically possible at all?

I have re-read the related issues. There are comments by @erich666 in this issue, above:

There are different approximation equations for various elements in the equations, but the basic equations and approach are not just a concept but have a lot of theory and research behind them.
One weak area of the specification is that physically-based lights are not addressed. ...
If you're creating content, you want to know that the receiver of that content is seeing the same thing on their screen as you are. ...
The danger of "do what you want with these parameters" is that you can then get wildly varying rendering results. ...

and by @emackey in a related issue:

A programmable shader contains the lighting calculations, and must have been written with prior knowledge of (or parameterized inputs for) the lighting and reflection environment. ...

and finally @erich666 mentioned there:

Yes, there's the whole illumination question, point lights vs. image based lighting, where you will likely use various mipmap and pre-convolve techniques for the lighting.

These seem to raise some questions about lights. And all these questions still seem to be unsolved - at least for me. If I overlooked The Answer®, just give me a pointer.

Note that I'm specifically not referring to the lights extension, but how the light information should actually be included in the rendering process.

Particularly, one solution proposed above was:

So for a reference implementation start with just a single directional light, i.e., a single light direction sample, as the lighting used in the implementation.

This may be nice for a PBR-newbie implementor (like me) to get started. But something like this won't tell other implementors anything about how the light computations should take place in a more realistic scenario.

I could imagine two approaches for a reference implementation:

  1. An infrastructure that assembles the GLSL vertex- and fragment shaders at runtime based on the information that it finds in the material description

  2. A reference implementation in the form of an übershader, i.e. a GLSL vertex- and fragment shader that covers...

    • Metallic+Roughness textures (in whatever layout)
    • Occlusion, Normal, Emissive textures (each being optional - there are 8 combinations here)
    • Directional lights, Spot Lights, Point lights (maybe Surface/Area lights), and image based lights (with this roughness-based MIP-map-level lookup thing that does not work in all browsers), each being optional again, as the lights are only provided via an extension
    • ...
      where each feature is controlled via uniform parameters and flags

My gut feeling is: neither of the two will be possible in practice. It would only be possible to copy code snippets from the PBR documents (which are often quoted, and even more often just used as a quarry for copy-and-pasting certain functions into one's own shaders), and leave it to the implementor to combine them meaningfully.

So what I'm actually wondering about is the form of delivery of such a "reference implementation". Am I overlooking an obvious solution here?

@emackey
Member

emackey commented Mar 10, 2017

@javagl These are good questions, and I don't have all the answers, but there are some areas where I have relevant information that I'll try to convey here.

First, the project https://github.com/moneimne/WebGL-PBR is a work-in-progress by @moneimne, who is one of @pjcozzi's students. @pjcozzi and I are hoping that this will mature to eventually become the official WebGL/GLSL reference implementation for PBR in glTF 2.0. It's just now reaching a stage where it's ready for somewhat broader feedback on it, but there are still features missing. For example, currently it is still limited to a single default light source, and your comments here make it clear that we'll need to address that.

Next, I'm sure everyone who's following these PBR threads has heard mention several times of ThreeJS.MeshStandardMaterial. This is a widely-used, battle-tested material that supports PBR along with a number of unrelated options and concerns. Internally, ThreeJS builds a large shader for this material, but you won't find the GLSL in a single chunk in ThreeJS source code. It is built from blocks of GLSL combined together, #define statements that can be injected or removed, and even snippets of code that are repeated once per light source. This type of runtime construction isn't unique to ThreeJS of course, as Cesium constructs its shaders from pieces like this as well, and indeed I believe this is common practice among established rendering engines.

So when you ask about the form of delivery of a reference implementation, I don't expect its final form to be a single un-modifiable block of GLSL (even though WebGL-PBR is still at that stage as I write this). I suspect it will at the very least have #ifdef sections for maps that can be enabled or disabled, and possibly it will have chunks of code that are repeated per instance per type of light source.
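As a concrete illustration of that #ifdef style (the macro, uniform, and varying names below are hypothetical, not taken from WebGL-PBR or three.js):

// A minimal fragment shader sketch of #ifdef-configurable construction.
precision mediump float;

varying vec3 v_Normal;
varying vec2 v_TexCoord;

#ifdef HAS_NORMAL_MAP
uniform sampler2D u_NormalSampler;
#endif

void main() {
#ifdef HAS_NORMAL_MAP
    // Sample and decode the normal map (the tangent frame is omitted
    // here for brevity).
    vec3 n = normalize( texture2D( u_NormalSampler, v_TexCoord ).xyz * 2.0 - 1.0 );
#else
    vec3 n = normalize( v_Normal );
#endif
    gl_FragColor = vec4( n * 0.5 + 0.5, 1.0 ); // visualize the normal
}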

Ultimately, I think a reference implementation is great to have, but I wouldn't expect any insistence that all engines follow it exactly. It's much more important, in my opinion, to follow the spirit of physically-based materials: this part is supposed to look metallic, that part is rough, etc. To put it another way: the various PBR implementations should not worry about trying to look exactly like each other; instead they should all strive for photorealism.

Even so, a proper reference implementation will be hugely important for wider adoption outside of engines that already have their own established PBR shaders or workflows. For this, I'm hopeful that the WebGL-PBR project can grow to include some of the more configurable logic and options that the established players already use.

@javagl
Contributor

javagl commented Mar 11, 2017

Thanks for these clarifications.

Of course I am aware of the existing WebGL-PBR implementation. And of several others. And that's the main problem (for me, and probably for other potential implementors): some of these example implementations use fixed lights that are baked into the shaders. Others use IBL, with vastly different approaches. Some support normal/occlusion/emissive mapping, and others don't.

The point is that there are so many degrees of freedom, regarding the features and their exact implementation, that a reference implementation can hardly convey how to combine these features.

On the one hand, it's doable: One can dig through the related papers and documents, do some copy+paste of the functions involving Schlick, GGX etc, and bring some rendered model on the screen. But on the other hand, I hoped that a "reference implementation" would have a set of documented parameters (that correspond to the glTF material parameters) and documented functions (with references to the original definitions) that implementors can easily plug into their rendering environment to have all capabilities that are required for rendering glTF PBR, according to the standard.

(In fact, a while ago, I was still so naïve (or overly idealistic) that I thought it should be possible to create a technique-based glTF asset that shows PBR, but still includes the actual GLSL code. My latest state for this is in https://github.com/javagl/gltfTestModels/blob/2e6c0812f7f2597315299f07891e9a939ede7532/DamagedHelmetModified/glTF/pbr.frag , but this doesn't seem to lead anywhere. Particularly as long as environment maps cannot be encoded sensibly.)

It's much more important, in my opinion, to follow the spirit of physically-based materials: This part is supposed to look metallic, that part is rough, etc. To put it another way: The various PBR implementations should not worry about trying to exactly look like each other, and instead they should all strive for photorealism.

Photorealism, as the ultimate goal, narrows down the degrees of freedom for the implementation. It's then only the question of how far the different implementations deviate from this goal - and the set of features that should be supported...

// Photorealistic renderer
void main(){
    // This currently only renders an image of a black cat in a completely dark room. 
    // TODO: Cover other cases
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}

In that regard, I'm a bit more with Eric, and see code as the only truth here.


However, regarding the form of delivery that you described: the #ifdef solution is certainly somewhere between the two cases (runtime generation and übershader) that I mentioned. I have seen that three.js (but also Babylon and others) uses the runtime-generation approach, which lends itself well when there already is some sort of a "material model" and maybe a "lighting model" - basically, when the infrastructure already exists and only has to be extended/adjusted for the glTF PBR case. But neither such a runtime generator nor an übershader is something that can easily be coded from scratch.

Maybe this problem will be mitigated when there are more example implementations, showing the different features - even if there will never be the one and only reference implementation.

@pjcozzi
Member

pjcozzi commented Jul 16, 2017

There's a reference implementation here: https://github.com/KhronosGroup/glTF-WebGL-PBR

Leaving this open until Appendix B is added.

@pjcozzi
Member

pjcozzi commented Dec 24, 2017

Leaving this open until Appendix B is added.

Appendix B is now in the spec!

@pjcozzi closed this as completed Dec 24, 2017
@pjcozzi
Member

pjcozzi commented Sep 12, 2018

For folks interested in glTF materials, we are collecting requirements for next-gen PBR in glTF, please chime in to #1442.
