PMREMGenerator: set internal/output render targets as RGBE. #19087

Merged
merged 1 commit into mrdoob:dev from sciecode:dev-pmrem-data on Apr 25, 2020

Conversation

sciecode
Contributor

@sciecode sciecode commented Apr 8, 2020

Fixes #19053

Based on #19053 (comment) and #19053 (comment)

This PR introduces the possibility for the output texture generated by PMREM to have a different type than the input texture. This is important for mobile devices, which often lack the ability to render to floating point textures.

See #19087 (comment) and #19087 (comment)

@sciecode
Contributor Author

sciecode commented Apr 8, 2020

If we decide to merge, it is important to note in the changelog that the default output type is changing.

@WestLangley
Collaborator

I think PMREMGenerator could be smart about setting the filtering. We can avoid using NearestFilter with Float and HalfFloat data types.

@sciecode
Contributor Author

sciecode commented Apr 9, 2020

I think PMREMGenerator could be smart about setting the filtering. We can avoid using NearestFilter with Float and HalfFloat data types.

I'm ok with that, @elalish would that cause any problems that we are not considering?

@polarathene

HalfFloatType -> PMREMGenerator -> UnsignedByteType Example
FloatType -> PMREMGenerator -> UnsignedByteType Example

Tested on my Android phone, a Huawei Mate 10 Pro purchased in 2018 for around 600 USD, so not the best device but not budget either.

Firefox and Chrome fail both examples: the UI renders, but the 3D content appears black.
Firefox Focus (which uses a different web engine from the main Firefox product, afaik) can display the first example, but also fails the second.

@mrdoob mrdoob added this to the r116 milestone Apr 9, 2020
@elalish
Contributor

elalish commented Apr 9, 2020

@sciecode @WestLangley It's true you could avoid NearestFilter with float types, but the only reason to do so is if you change the shaders to not do custom interpolation. Of course, this will mean another branch in the bilinearCubeUV shader fragment based on type (perhaps you could use a #define instead). I'm also curious how much of a performance improvement this will be; it saves the RGBE decodes and lets the compiler take care of interpolation, but it needs to push more memory through the cache.

Also, why do you expect iPhones to fail your halfFloat test? It seems like we should test this on quite a few devices before making this change.

Oh, and what exactly is the change to the default output type?

@WestLangley
Collaborator

@bhouston said:

But I will warn you, it turns out that doing live HDR decodings from 8-bit per channel RGBA textures is highly problematic. I should have just converted textures with HDR encodings to true half (fp16) textures on load, rather than doing live decoding.

@sciecode
Contributor Author

sciecode commented Apr 9, 2020

Also, why do you expect iPhones to fail your halfFloat test?

iPhones don't seem to be able to decode HalfFloatType from Uint16Array data.
I know it happens up until the iPhone 8; not sure about newer models.

[screenshot]

Oh, and what exactly is the change to the default output type?

Previously, the default behavior was to copy the input texture parameters; in the current state it defaults to UnsignedByteType / RGBEFormat / RGBEEncoding, unless explicitly changed with setDataType.

@elalish
Contributor

elalish commented Apr 9, 2020

@sciecode Wait, hold on, I didn't grok this the first time through. If you set data type to float, you're changing all the render targets to float type, which doesn't work on most mobile devices. I thought the idea here was to just accept float input and still use the same 8-bit internal formats for compatibility.

@sciecode
Contributor Author

sciecode commented Apr 9, 2020

Well, the idea was to increase support. Let me explain.

With the dev version, PMREM only works on mobile if we input an RGBE 8-bit texture and output to RGBE as well.
With this version, PMREM should work with every input texture format/type (as long as the device can read from floating point textures), but we still need to output to RGBE.

In other words, we are increasing support for input texture formats, not output formats. If we want to output as HalfFloatType or FloatType, it still won't work; not much we can do about that.

@WestLangley
Collaborator

WestLangley commented Apr 10, 2020

@elalish

How do you feel about this?

  1. Support the following image formats as input:
    (a) Radiance HDR
    (b) OpenEXR
    (c) JPG/PNG

  2. Internally, render targets will be UnsignedByteType.

  3. Output to a texture having user-specified optional data type:
    (a) UnsignedByteType, the default, with RGBEEncoding, NearestFilter
    (b) Float or HalfFloat, with LinearEncoding, LinearFilter

In the case of 3(a), RGBEEncoding is only required for HDR, otherwise LinearEncoding -- not sRGB.

I discussed this offline with @sciecode, and I think we agree that this makes sense.

Any "gotchas" we are missing?

@elalish
Contributor

elalish commented Apr 10, 2020

How do you feel about this?

  1. Support the following image formats as input:
    (a) Radiance HDR
    (b) OpenEXR
    (c) JPG/PNG
  2. Internally, render targets will be UnsignedByteType.
  3. Output to a texture having user-specified optional data type:
    (a) UnsignedByteType, the default, with RGBEEncoding, NearestFilter
    (b) Float or HalfFloat, with LinearEncoding, LinearFilter

In the case of 3(a), RGBEEncoding is only required for HDR, otherwise LinearEncoding -- not sRGB.

I discussed this offline with @sciecode, and I think we agree that this makes sense.

Any "gotchas" we are missing?

@WestLangley 1 and 2 sound good. Number 3b has the same gotchas as number 2; in order to output a float texture you have to be able to render to a float texture, which mobiles can't reliably do. And in order to make use of linear filtering you'll need to change the cube_uv shader fragment to remove the custom interpolation. Hence my suggestion to save those for later and always go with 3a for now.

@WestLangley
Collaborator

in order to output a float texture you have to be able to render to a float texture

@elalish Why not create a float data texture and populate it from the RGBE Uint internal render target?

@elalish
Contributor

elalish commented Apr 10, 2020

@elalish Why not create a float data texture and populate it from the RGBE Uint internal render target?

@WestLangley Do you mean by going through the CPU? Or is there another way to convert from RGBE to float texture without writing to a renderbuffer? I may just be unaware.

@sciecode
Contributor Author

sciecode commented Apr 11, 2020

And in order to make use of linear filtering you'll need to change the cube_uv shader fragment to remove the custom interpolation. Hence my suggestion to save those for later and always go with 3a for now.

Ok, I understand now. cube_uv_reflection_fragment was refactored to use the custom bilerp for the mipmapped envmap, so LinearFilter would cause incorrect sampling of the envMap even after the PMREM pipeline. I thought the problem was only about the internal RTT pipeline.

Now your comment about using hardware lerp vs shader lerp makes more sense to me.

In that case, I say let's go with your proposal (1), (2) and (3a).

I can study the impact of using LinearFilter later on, but I don't expect the performance / visual difference will be worth the trouble of doing it. We might prefer going the route of allowing cubemap output bigger than 256x256 based on input size, as you said.

@WestLangley
Collaborator

@elalish @sciecode Yes, let's do the things we agree on. :-)

@sciecode sciecode changed the title PMREMGenerator: add support for setDataType PMREMGenerator: set internal/output render targets as RGBE. Apr 11, 2020
@sciecode
Contributor Author

I think this one is good to go, aye? 👍

@mrdoob
Owner

mrdoob commented Apr 12, 2020

@elalish @WestLangley 👌?

@WestLangley
Collaborator

As I proposed in #19087 (comment), I would only output RGBE if the input texture is HDR. Otherwise, the output should be Linear.

@elalish
Contributor

elalish commented Apr 13, 2020

As I proposed in #19087 (comment), I would only output RGBE if the input texture is HDR. Otherwise, the output should be Linear.

@WestLangley @sciecode Sorry, but I again have to disagree here; I would approve this without the last commit this comment added. If we have input in an 8-bit nonlinear encoding (sRGB or Gamma), and we encode it as 8-bit linear, we get significant quantization loss. I know it would be nice to not convert back and forth all the time, but without writing floats, there's just not a good way around it.

@sciecode
Contributor Author

sciecode commented Apr 13, 2020

Yeah, there is a great deal of quantization loss actually, especially in the dark areas.

RGBE: [screenshot]

Linear: [screenshot]

@sciecode
Contributor Author

sciecode commented Apr 13, 2020

Let's just shake on RGBE for everything for now, just so we have support for multiple input texture types, and later on we can continue discussing improvements we can make. @WestLangley 👍 ?

@elalish
Contributor

elalish commented Apr 13, 2020

Yeah, there is a great deal of quantization loss actually.

@sciecode Is there a chance you pasted the same image twice?

@sciecode
Contributor Author

sciecode commented Apr 13, 2020

Nope, I can clearly see the "salt and pepper" difference in the Linear one, especially on the darker street area.

I do have very high contrast on my monitor; try adjusting yours a bit and I guarantee you're going to see it.

@bhouston
Contributor

bhouston commented Apr 13, 2020

A couple of years ago I quantified the loss when going from gamma to linear in a 256-level (8-bit per channel) color space:
https://docs.google.com/spreadsheets/d/1sEhW8QeilNtsjcgd0fX1dcoO3L1p4Ow-U9v96I0aYiM/edit?usp=drive_web&ouid=117151377272934601196

You start with 256 unique intensities and end up with just 184. See above spreadsheet.

The trick is to not re-encode these from gamma to linear in 8-bit per channel textures, but rather do inline decoding.

I had made this spreadsheet to communicate how evil the GammaPass was -- because it took as input one 8-bit per channel texture and converted it to another 8-bit per channel texture. Instead one should do inline gamma encoding when the value is still in its full range float format. The same rule applies to decoding.

@sciecode
Contributor Author

Another one for the non-believers 😁

RGBE: [screenshot]

Linear: [screenshot]

@sciecode
Contributor Author

@bhouston thanks for the input.

When you say inline decoding, you mean directly on the shader when reading the target texture correct?

@bhouston
Contributor

When you say inline decoding, you mean directly on the shader when reading the target texture correct?

Yes, I mean that if you have an sRGB texture, leave it in sRGB format until you are in the shader itself, then decode it directly to floating point values. If you re-encode an 8-bit sRGB texture into an 8-bit linear texture, you've just lost 25% of your possible color values, and thus you've lost accuracy. The same goes for any conversion between sRGB, gamma, and linear - never do this unnecessarily between 8-bit per channel textures.

@WestLangley
Collaborator

Thank you everybody.

@elalish Please correct if it is still not what you support.

Revised specification:

  1. Support the following image formats as input:
    (a) Radiance HDR
    (b) OpenEXR
    (c) JPG/PNG

  2. Internal render targets are UnsignedByteType.

  3. Internal processing is in linear color space for HDR/EXR, and in sRGB color space for JPG/PNG.

  4. PMREM output is:
    (a) for HDR/EXR: UnsignedByteType, RGBEEncoding, NearestFilter
    (b) for JPG/PNG: UnsignedByteType, sRGBEncoding, NearestFilter

@elalish
Contributor

elalish commented Apr 13, 2020

@WestLangley Yes, agreed.

@WestLangley
Collaborator

@mrdoob We can merge this if @elalish approves.

Contributor

@elalish elalish left a comment

Thanks for bearing with me!

@WestLangley
Collaborator

Thank you @bhouston and @elalish for your helpful feedback!

@sciecode
Contributor Author

Thank you guys ❤️
It's better to have long discussions and reach the correct solution than to have to patch previous PRs over and over.

@WestLangley
Collaborator

@mrdoob We have consensus! :-)

@bhouston
Contributor

And we will not be supporting the RGBM-encoded PNG workflows going forward, right? Like this old example supports: https://threejs.org/examples/#webgl_materials_envmaps_hdr

I am okay with this personally.

@sciecode
Contributor Author

sciecode commented Apr 15, 2020

Well, it depends on what you mean by support. PMREMGenerator can correctly read RGBM and output a readable render target; the only difference is that the output will be in RGBE instead of RGBM.

@bhouston
Contributor

Well, it depends on what you mean by support. PMREMGenerator can correctly read RGBM and output a readable render target; the only difference is that the output will be in RGBE instead of RGBM.

Good enough for me. I support this consensus. This is a great simplification and imposes a smart best practice.

@WestLangley
Collaborator

@mrdoob /bump

@sciecode
Contributor Author

sciecode commented Apr 24, 2020

Anything in specific holding this PR back, @mrdoob?

I know there's been a lot of discussion, but I can try to summarize what's been discussed in order to make it easier to review.

The current PMREMGenerator, on dev, tries to match the input texture settings on the internal and output render targets. This works fine for devices that are able to render to floating point framebuffers, but it fails on devices that aren't (some mobiles) when the incoming texture is in floating point.

In order to allow broader use of input textures on all devices, we now enforce UnsignedByteType for the internal/output render targets. We use RGBEEncoding for HDR input textures and keep the input texture's encoding for LDR input textures. This greatly improves the overall support and allows PMREMGenerator to work properly in most workflows.

Hopefully this helps.

@mrdoob mrdoob merged commit 1853f27 into mrdoob:dev Apr 25, 2020
@mrdoob
Owner

mrdoob commented Apr 25, 2020

Thanks everyone! 🙏

@sciecode sciecode deleted the dev-pmrem-data branch April 25, 2020 00:21
Closes: EXR Envmap not working on iOS