
KHR_materials_sheen #1688

Merged · 17 commits · Oct 7, 2020

Conversation

sebavan
Contributor

@sebavan sebavan commented Oct 17, 2019

The idea is to start the discussion about the sheen extension; @bghgary and I will keep the thread updated with our implementation progress.

@sebavan sebavan changed the title Starting the discussion about Sheen KHR_materials_sheen Oct 17, 2019
@sebavan
Contributor Author

sebavan commented Oct 18, 2019

Live example in the playground: https://www.babylonjs-playground.com/frame.html#BNIZX6#4

The model is available here: https://github.com/BabylonJS/MeshesLibrary/tree/master/Sheen so you can see the setup.

@emackey
Member

emackey commented Oct 22, 2019

Looks great. One comment: this uses the word "Factor" in a way that is not consistent with glTF's many other factors, all of which multiply with their respective textures. Textures and factors always come in pairs, multiplied together. So I would suggest a rename:

  • sheenFactor -> rename to sheenIntensityFactor
  • sheenColor -> rename to sheenColorFactor
  • sheenTexture -> keep the name, include intensity in alpha channel

Alternatively, the texture could be split up into sheenColorTexture and sheenIntensityTexture.

Thoughts?

@proog128
Contributor

The sheen roughness is identical to the material roughness to simplify the configuration. sheenRoughness = materialRoughness

I am wondering why you decided to couple sheen and microfacet roughness. In my experience, sharing parameters is difficult if their visual effect is different; it breaks layering/parameter blending (see Sec. 5.7 in the Disney paper). Assuming there are two materials: material A has sheenIntensity=1 and roughness=0.2, material B has sheenIntensity=0 and roughness=0.5. If an artist now creates a mask that layers material A on top of material B, for example to have a 30% semi-transparent fabric on top of a metal, the roughness behaves weirdly. The blended material has sheen = 0.3·1 + 0.7·0 = 0.3 (ok) and roughness = 0.3·0.2 + 0.7·0.5 = 0.41, shared by the sheen BRDF and the microfacet BRDF (not ok). This doesn't behave similarly to BRDF blending (which is the ground truth), in which case you would end up with 30% sheen with roughness 0.2 and 70% microfacet with roughness 0.5.
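
To make the numbers concrete, a tiny sketch (hypothetical names, plain scalars standing in for the real lobes):

    # Parameter blending with a shared roughness vs. BRDF blending (ground truth).
    material_a = {"sheenIntensity": 1.0, "roughness": 0.2}
    material_b = {"sheenIntensity": 0.0, "roughness": 0.5}
    w = 0.3  # 30% of material A layered over material B

    blended = {k: w * material_a[k] + (1.0 - w) * material_b[k] for k in material_a}
    print(blended)  # sheenIntensity = 0.3 (ok), roughness ~= 0.41, shared by both lobes

    # Ground truth would instead blend the evaluated BRDFs:
    #   f = 0.3 * f_sheen_a(roughness=0.2) + 0.7 * f_microfacet_b(roughness=0.5)
    # i.e. 30% sheen with roughness 0.2 plus 70% microfacet with roughness 0.5.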

f_sheen = lightColor * sheenTerm * PI * ndotl

lightColor and ndotl shouldn't be part of the BRDF. Same for PI, I'd guess, not sure where it comes from.

sheenTerm = sheenColor * sheenIntensity * sheenDistribution * sheenVisibility;

This is different from Imageworks "Production Friendly Microfacet Sheen BRDF". Did you deviate from their solution to make it simpler to compute? What I like a lot about the Imageworks sheen is the fact that it fits nicely into the microfacet framework. They also provide a (not so cheap to evaluate) shadowing term that is properly normalized. In the end, this gives an energy conserving and reciprocal BRDF (except their terminator softening), whereas the sheen BRDF proposed here is not energy conserving and, therefore, not well-suited for GI rendering.
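
For reference, the distribution term from that paper (the "Charlie" NDF) is usually written along these lines; a minimal sketch, with the clamping constants being my own choices:

    import math

    def d_charlie(alpha, n_dot_h):
        # "Charlie" sheen NDF from Estevez & Kulla; alpha is the sheen roughness.
        inv_alpha = 1.0 / max(alpha, 1e-4)          # clamp is illustrative
        sin2h = max(1.0 - n_dot_h * n_dot_h, 1e-4)  # avoid 0**x at n_dot_h = 1
        return (2.0 + inv_alpha) * sin2h ** (inv_alpha * 0.5) / (2.0 * math.pi)

The distribution itself is cheap; the expensive part referred to above is their full shadowing term.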

specularity = mix(metallicF0, baseColor, metallic)
[...]
final = f_emissive + f_diffuse + f_specular + (1 - reflectance) * f_sheen

I couldn't find a description of metallicF0, so maybe I misunderstood the weighting of diffuse/specular and sheen and the following is wrong: this combination is simple and energy-conserving. On the other hand, it may be confusing to users because the strength of the sheen effect is tied to the color of the base material. The direction-dependent albedo-scaling technique described in the Imageworks paper doesn't have this problem, but is a bit more expensive to evaluate.

Maybe we could tweak the equations in this extension a bit to make it more raytracing-friendly (like the microfacet-based Imageworks model) and define it as the reference. This would allow rasterizers to do whatever approximation is best for the intended target platform and performance constraints. We plan to soon release an adaptation of the Imageworks model as part of the Enterprise PBR Material.

@sebavan
Contributor Author

sebavan commented Oct 23, 2019

You are totally right about layering; we were only thinking about reducing the number of parameters in our implementation to help with deferred renderers, for instance.

lightColor and ndotl shouldn't be part of the BRDF. Same for PI, I'd guess, not sure where it comes from.

Yes, totally, this is just the way we are using it in our code, but it could benefit from more clarity. I'll update it on Monday.

This is different from Imageworks "Production Friendly Microfacet Sheen BRDF". Did you deviate from their solution to make it simpler to compute?

Yes, the idea was to make it faster to compute and easier to integrate in real-time rendering engines, as detailed in the Filament documentation.

I couldn't find a description of metallicF0

This is basically the default F0 of 4% in the metallic workflow, but it is customizable through the proposed specular extension.
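
In other words, the quoted mix above boils down to something like this (a sketch with illustrative names; 0.04 is the usual dielectric default):

    def specular_f0(base_color, metallic, metallic_f0=0.04):
        # specularity = mix(metallicF0, baseColor, metallic): dielectrics use the
        # 4% default F0, metals take their F0 from the base color.
        return tuple(metallic_f0 * (1.0 - metallic) + c * metallic for c in base_color)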

On the other hand, it may be confusing to users because the strength of the sheen effect is tied to the color of the base material. The direction-dependent albedo-scaling technique described in the Imageworks paper doesn't have this problem, but is a bit more expensive to evaluate.

Yes, this is currently the main issue (aside from layering, as you described above) and usually forces us to darken the base color a lot.

Maybe we could tweak the equations in this extension a bit to make it more raytracing-friendly (like the microfacet-based Imageworks model) and define it as the reference. This would allow rasterizers to do whatever approximation is best for the intended target platform and performance constraints. We plan to soon release an adaptation of the Imageworks model as part of the Enterprise PBR Material.

It would be nice if the chosen model were also really efficient for real-time engines. Here, one of the nice tricks of not relying on the Fresnel term, for instance, is the ability to store precomputed values in the BRDF lookup texture for the IBL part.

@bghgary
Contributor

bghgary commented Oct 30, 2019

See #1677 (comment)

We should do the same for this extension if we decide that's the right solution.

@proog128
Contributor

There is now a first draft (work in progress) for integrating Imageworks' sheen into Enterprise PBR version 2021x. It adds two parameters in addition to sheen: sheenColor and sheenRoughness. Layering uses albedo-scaling as proposed in the paper, which is energy-conserving and avoids darkening. There is no Fresnel term, to keep it simpler.

Optimizations for real-time rendering are still to be done. Using the Ashikhmin visibility term might be a good approximation; the error needs to be checked. Layering is an open issue.
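
For the layering part, a rough sketch of what albedo-scaling looks like (parameter names are illustrative, E_view/E_light being the sheen directional albedo for the view and light directions; the exact form in Enterprise PBR may differ):

    def layer_sheen_over_base(f_sheen, f_base, sheen_color_max, E_view, E_light):
        # Scale the base layer by the energy the sheen lobe did not reflect,
        # which avoids the darkening of a naive additive combination.
        scaling = min(1.0 - sheen_color_max * E_view,
                      1.0 - sheen_color_max * E_light)
        return f_sheen + scaling * f_base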

@sebavan
Contributor Author

sebavan commented Nov 15, 2019

About the factor renaming, what should the final naming be so that I can update the PR?

I have seen requests to rename:

sheenFactor -> sheenIntensityFactor
sheenColor -> sheenColorFactor

Are those all the changes we want?

About the texture, I would like to keep only one if possible, merging color and intensity. Do you think that makes sense?

At least this part would not change even if we adapt the formula.

Also, we should decide whether roughness needs to be a separate parameter as well, or whether it makes sense to reuse the material's roughness to limit the number of parameters and be more deferred-rendering friendly.

@bghgary
Contributor

bghgary commented Nov 15, 2019

renaming

My opinion (assuming we use one texture):

  • intensityFactor
  • colorFactor
  • colorIntensityTexture

No need to keep putting sheen in the name since it's already under the sheen extension.

    "materials": [
        {
            "extensions": {
                "KHR_materials_sheen": {
                    "intensityFactor": 0.9,
                    "colorFactor": [
                        0.0235294117647059,
                        0.0352941176470588,
                        0.2745098039215686
                    ],
                    "colorIntensityTexture": {
                        "index": 0
                    }
                }
            }
        }
    ]

@emackey
Member

emackey commented Nov 15, 2019

+1 for @bghgary's suggestion.

@donmccurdy
Contributor

About the texture, I would like to keep only one if possible, merging color and intensity. Do you think that makes sense?

This will prevent the use of JPG textures with this extension. Is that OK? I would like to think that Basis Universal will just become best practice for all of this, but I don't have a good sense for the pros/cons of packing these textures together with Basis.

@sebavan
Contributor Author

sebavan commented Nov 15, 2019

OK, so I'll first update to Gary's suggestion, and depending on the outcome for the rest I'll go ahead and fix accordingly.

I guess the texture packing discussion is probably bigger than only this extension :-)

I will try to have it updated for Monday.

@donmccurdy
Contributor

I guess the texture packing discussion is probably bigger than only this extension.

Yes, good point. Happy to move that discussion elsewhere until we can come up with some best practice.

@sebavan
Contributor Author

sebavan commented Nov 17, 2019

All factors have been renamed according to @bghgary's proposal, and Babylon.js has been updated to reflect the changes, as well as the Babylon sample model.

We can now focus on the need for roughness and equations to use :-)

@sebavan
Contributor Author

sebavan commented Dec 2, 2019

@proog128, I'll try the albedo-scaling technique soon, as it looks quite nice despite the extra lookup.

How is the rendering quality of your analytical approach, if you have one (thinking of fallbacks for smaller devices)?

@sebavan
Contributor Author

sebavan commented Dec 2, 2019

Also, I wonder what you think about not splitting the roughness parameter and reusing the base layer's roughness?

@proog128
Contributor

proog128 commented Dec 3, 2019

How is the rendering quality of your analytical approach, if you have one (thinking of fallbacks for smaller devices)?

We plan to use the albedo-scaling with a precomputed texture in the WebGL renderer (probably with just one look-up: for VdotN). We do the same for mixing diffuse and specular on the base layer and performance was good enough. In the beginning we tried to use an analytic approximation instead of a texture for diffuse/specular, but there was no performance benefit.
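
As a rough illustration of what baking such a texture could look like offline (a sketch; the Charlie NDF, the Ashikhmin visibility term, and a white sheen color are my assumptions about the lobe being integrated):

    import math, random

    def bake_sheen_albedo(n_dot_v, alpha, samples=4096):
        # Crude Monte Carlo estimate of the sheen directional albedo E(VdotN),
        # suitable for baking into a small LUT read with a single fetch at runtime.
        def d_charlie(n_dot_h):
            inv_alpha = 1.0 / max(alpha, 1e-4)
            sin2h = max(1.0 - n_dot_h * n_dot_h, 1e-4)
            return (2.0 + inv_alpha) * sin2h ** (inv_alpha * 0.5) / (2.0 * math.pi)

        v = (math.sqrt(max(1.0 - n_dot_v * n_dot_v, 0.0)), 0.0, n_dot_v)
        total = 0.0
        for _ in range(samples):
            u1, u2 = random.random(), random.random()
            r, phi = math.sqrt(u1), 2.0 * math.pi * u2
            # Cosine-weighted hemisphere sample for L.
            l = (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(1.0 - u1, 0.0)))
            h = [a + b for a, b in zip(v, l)]  # unnormalized half vector
            n_dot_h = h[2] / (math.sqrt(sum(c * c for c in h)) or 1.0)
            n_dot_l = max(l[2], 1e-4)
            vis = 1.0 / (4.0 * (n_dot_l + n_dot_v - n_dot_l * n_dot_v))
            # The cosine term and the cosine-weighted pdf (NdotL / pi) cancel up to pi.
            total += d_charlie(n_dot_h) * vis * math.pi
        return total / samples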

Also, I wonder what you think about not splitting the roughness parameter and reusing the base layer's roughness?

Feedback from our artists was more in favor of keeping the separation because of its flexibility. It was one of the reasons for us to extend the sheen model.

In the following example, we have a material with a small sheen roughness (step, grey gradient at grazing angles), but high reflection roughness (makes the surface soft).
(example image)

For skin or plants it's also possible to texture the reflection roughness (base layer) due to irregularities at the surface, but you still want to have uniform brightening at the rim (coming from small, uniform but perpendicular fibers on the surface).

@romainguy

@proog128 An analytical approximation is useful not necessarily for performance but to avoid using more samplers, which are limited in number on mobile.

I agree there definitely should be a separate sheen roughness parameter.

@proog128
Contributor

proog128 commented Jan 10, 2020

Quick and dirty analytical approximation:

E = 0.65584461 * pow(1.0-cos_theta, 3) + 1.0 / (4.16526551 + exp(-7.97291361*sqrt(roughness)+6.33516894))

The error is around 3% (RMSE), which is quite high, but I didn't spend a lot of time on it.
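
Wrapped as a function, the fit is a direct transcription of the formula above (parameter names are mine):

    import math

    def sheen_albedo_approx(cos_theta, roughness):
        # Analytic fit of the sheen directional albedo E(cos_theta, roughness),
        # ~3% RMSE as noted above.
        return (0.65584461 * (1.0 - cos_theta) ** 3
                + 1.0 / (4.16526551 + math.exp(-7.97291361 * math.sqrt(roughness)
                                               + 6.33516894)))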

@romainguy

@proog128 Thanks, I was going to try (again) to come up with one. I'm also tempted to just put that lookup table in an extra channel of the DFG LUT (we already use the blue channel for the cloth/sheen BRDF). It would be higher resolution than needed but at least it would save a sampler.

@EliphasNUIT

Before talking about performance, I'm concerned by how much brighter V-Cavities is, even compared to the original model. The highlights are noticeably stronger, even around 0.4 roughness. Ashikhmin masking feels much closer to the original paper.

If we want to ditch that look and push for the V-Cavities route, I believe we should get artistic feedback as well; Estevez-Kulla + Smith was favored for its smooth look, and V-Cavities looks definitely sharper.

As far as performance is concerned, looking at the formula, it is indeed costlier than Ashikhmin masking, but nowhere near the point where it would make the proposal a no-go in my opinion. It is still cheap to evaluate.
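
For reference, the Ashikhmin visibility term discussed here is commonly written as follows (a sketch; the clamp guarding against division by zero is my own):

    def v_ashikhmin(n_dot_v, n_dot_l):
        # Cheap visibility term often paired with the Charlie sheen NDF.
        return 1.0 / max(4.0 * (n_dot_l + n_dot_v - n_dot_l * n_dot_v), 1e-7)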

@romainguy

romainguy commented Aug 14, 2020 via email

@sebavan
Contributor Author

sebavan commented Aug 14, 2020

We came to a similar conclusion in Babylon as the one @romainguy exposed here, and followed this path to already bring a bit of consistency to the ecosystem.

@proog128
Contributor

OK, understood. I guess we can add the squared sheen roughness to the normative section without describing the "perfect" implementation in the non-normative section for now. As in the base material, implementations may choose any approximation they want, and the exact masking-shadowing term used in an implementation is certainly something that lends itself to approximations (so Ashikhmin, Smith, and V-Cavities are all fine, but have different trade-offs regarding consistency, performance, physical accuracy, ...). I am somewhat optimistic that we can improve the fit given in the Estevez-Kulla paper so that it works well for low roughness values. I cannot tell at the moment when we will revisit this topic, but at some point it is probably going to happen. Once we have a better solution we can adapt the non-normative section.
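
That is, presumably just the usual convention (a trivial sketch, assuming the same squaring as the base material's roughness):

    def sheen_alpha(sheen_roughness):
        # Perceptual sheen roughness is squared before being used by the NDF,
        # matching the base material's roughness convention.
        return sheen_roughness * sheen_roughness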

@proog128
Contributor

Would be great if we could add a sentence to the introduction that mentions the squared roughness, like in the attached patch:
roughness.zip

@sebavan
Contributor Author

sebavan commented Aug 31, 2020

You are right. Done, and thanks a lot!

@donmccurdy
Contributor

donmccurdy commented Sep 14, 2020

Earlier in this thread, we wrote:

About the texture, I would like to keep only one if possible, merging color and intensity. Do you think that makes sense?

This will prevent the use of JPG textures with this extension. Is that OK?

We then decided to punt on the question, which affects other textures too, as a general question for KHR_texture_basisu. That extension is now all but complete, but there's still no perfect solution for texture packing. In #1682 we concluded:

We will also need to be careful with channel decisions on the upcoming PBR next (transmission, volume, sheen, ...) texture types.

If we do someday produce a generic swizzle extension, it's presumably going to allow something like:

material.sheenTexture = inputTexture.rrrg

So, by including only a single texture slot for color and roughness, we permanently require them to share a single source texture, and a swizzle extension can't lift that requirement. As we did with clearcoat, my preference might be to allow but not require the packing arrangement. This would mean introducing another property:

  • sheenColorFactor
  • sheenColorTexture (.rgb)
  • sheenRoughnessFactor
  • sheenRoughnessTexture (.a)

This retains the capability to use only a single sampler for both textures, but also:

  1. Allows the extension to benefit from swizzle (if we create that extension later)
  2. Allows sheenRoughnessTexture to be packed with other textures (ORM?), if sheen uses a solid color, actually reducing required samplers further than the current design.

More generally — I believe our upcoming PBR Next extensions should avoid combining separate pieces of information into a single propTexture slot. By doing so, we allow the possibility that an unoptimized material might use one more sampler, but we also allow more flexibility for optimizing the material. To me, that seems worthwhile.

@donmccurdy
Contributor

Another advantage of splitting sheenTexture into sheenColorTexture and sheenRoughnessTexture — it's probably not uncommon for sheen to have the same hue as baseColor, or perhaps a lighter version of it. For example, the only kind of sheen color that Blender's Principled BSDF supports is a "sheen tint" — which just controls the mix of base color vs. white in the sheen reflection.

In that scenario you'd probably want sheenColorTexture and baseColorTexture to be the same texture (with different factors?) but that's not possible if the .a channel is defined to be alpha for one and sheen roughness for the other.
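
For illustration, a Disney-style sheen tint is roughly the following (a sketch; Blender's exact implementation may differ, and the luminance weights are the approximate ones from the Disney BRDF):

    def sheen_color_from_tint(base_color, sheen_tint):
        # Blend white with the hue of the base color, as a Disney-style sheen tint does.
        r, g, b = base_color
        lum = 0.3 * r + 0.6 * g + 0.1 * b
        tint = (r / lum, g / lum, b / lum) if lum > 0.0 else (1.0, 1.0, 1.0)
        return tuple((1.0 - sheen_tint) + sheen_tint * c for c in tint)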

@sebavan
Contributor Author

sebavan commented Sep 15, 2020

If everybody agrees, I am happy to make the change, but I am still worried about the number of samplers, which some devices already cannot support. It is probably an edge case, but I wonder how we could enforce, and whether we should enforce, a maximum number of samplers?

@rsahlin

rsahlin commented Sep 16, 2020

This retains the capability to use only a single sampler for both textures, but also:

  1. Allows the extension to benefit from swizzle (if we create that extension later)
  2. Allows sheenRoughnessTexture to be packed with other textures (ORM?), if sheen uses a solid color, actually reducing required samplers further than the current design.

I agree with this assessment; in my view, it is also an engine implementation matter how the different texture channels may be packed into fewer samplers.
The above solution seems to follow the current pattern and allows flexibility from a client perspective.
Thumbs up from me :-)

@rsahlin

rsahlin commented Sep 16, 2020

It is probably an edge case, but I wonder how we could enforce, and whether we should enforce, a maximum number of samplers?

I am against this type of enforcement - it sort of implies that we are tied to one type of technology, and I would rather we remain agnostic in this area.
At the same time, we should take care to avoid duplicating channels and do everything we can to make it easy to pack channels together (in order to reduce the total number of samplers used).

@donmccurdy
Contributor

It is probably an edge case, but I wonder how we could enforce, and whether we should enforce, a maximum number of samplers?

Not an edge case — using a reasonable number of samplers is certainly important! — but I'm worried about downstream effects of enforcing a maximum. A safer option would be similar to the skin weights language, saying that clients are required to support at least __ samplers per material, so that exporters are more conscious of the constraints. But we can't do that in this extension; it may have to be a future 2.x or 3.0 thing...

In the meantime I do think flexible packing will likely serve us better than strict pairings.

@bghgary
Contributor

bghgary commented Sep 21, 2020

SheenCloth model is now updated: KhronosGroup/glTF-Sample-Models#266
