Critically missing linearization #25

Open
virtualritz opened this issue Jun 5, 2023 · 10 comments · May be fixed by #32

Comments

@virtualritz

virtualritz commented Jun 5, 2023

I briefly looked at the code and it seems there is no linearization happening before downsampling. u8-per-channel color data will practically always be gamma-encoded, for obvious reasons.

I.e. the results of downsampling with this crate are simply always wrong (and there is no u16/f16/f32 image support, where there would be enough bits to store the data linearly in the pixel buffer).

See http://www.ericbrasseur.org/gamma.html for a detailed explanation.
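
For reference, a minimal sketch (not this crate's API) of the "linearize, filter, re-encode" round trip for 8-bit sRGB data, using the standard sRGB transfer functions; downsample_linear is a placeholder for whatever resampler actually runs in linear space:

// Decode an 8-bit sRGB value to linear light (standard sRGB EOTF).
fn srgb_to_linear(v: u8) -> f32 {
    let c = v as f32 / 255.0;
    if c <= 0.04045 {
        c / 12.92
    } else {
        ((c + 0.055) / 1.055).powf(2.4)
    }
}

// Encode a linear value back to 8-bit sRGB (inverse of the above).
fn linear_to_srgb(c: f32) -> u8 {
    let c = c.clamp(0.0, 1.0);
    let v = if c <= 0.0031308 {
        c * 12.92
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    };
    (v * 255.0 + 0.5) as u8
}

// Correct pipeline: decode to linear, filter there, re-encode.
fn downsample_srgb(pixels: &[u8], downsample_linear: impl Fn(&[f32]) -> Vec<f32>) -> Vec<u8> {
    let linear: Vec<f32> = pixels.iter().copied().map(srgb_to_linear).collect();
    downsample_linear(&linear).into_iter().map(linear_to_srgb).collect()
}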

This is the test image from the above website:
gamma_dalai_lama_gray

The expected result:
gamma_dalai_lama_gray_good

And what the code in this crate produces instead:
gamma_dalai_lama_gray_result

@Jasper-Bekkers
Member

I'll add a small note that this is true for colored images like albedo maps and the image above, but it may not be true for normal maps or roughness maps that don't contain perceptual color data (i.e. they're not in gamma space to begin with).

@virtualritz
Author

Indeed. It also only applies to downsampling; upsampling gamma-corrected data should be just fine.
I.e. the note would be something like:

Do not use this to downsample images containing non-linear data, e.g. display color spaces like sRGB.

@MarijnS95
Member

Effectively we need to ask the input "hey, are you linearized?", and that is not something the example currently does, but since the downsample() API function from this crate receives a bucket of rgb(a)8 bytes anyway, we might as well clarify and require that it is linearized already.

On the other hand it is quite curious that only pink lines are selected; that's not so much missing gamma linearization (which might also be missing, FWIW) but the way we sample the source image, "picking only 1 out of 4 pixels" just like the MS Paint example on that page. But as @KYovchevski pointed out to me, that could be a quirk of the Lanczos filter we use, which weights adjacent pixels negatively, the pixels next to those positively, and so on. Meaning that for a 2:1 resample we'd always pick a pixel right on a pink line, weigh the adjacent green lines negatively, their adjacent pink lines positively, and so on, turning the image pink. This can also be demonstrated by offsetting the sampled pixel coordinate by 1:

int<2> i = {floor(pixel_coord.x + 0.5), floor(pixel_coord.y + 0.5)};

Turning the y component into floor(pixel_coord.y + 1.5) makes the image green.


The page also mentions that "The Lanczos filter is the sole one that's mathematically perfect", so I feel like we might have implemented it incorrectly :)
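
As context for the negative-lobe behaviour described above, here is a standalone sketch of the textbook Lanczos-3 kernel (sinc windowed by a wider sinc, support ±3). Evaluating it at half-texel offsets shows the alternating positive/negative weights, and at whole-texel offsets every tap except the center is essentially zero, which is relevant to the 0.5 offset discussed later in this thread. This is an illustrative formula, not this crate's ISPC implementation:

use std::f32::consts::PI;

// Textbook Lanczos kernel with radius a = 3:
// L(x) = sinc(x) * sinc(x / a) for |x| < a, else 0, where sinc(x) = sin(pi x) / (pi x).
fn lanczos3(x: f32) -> f32 {
    let a = 3.0;
    if x == 0.0 {
        1.0
    } else if x.abs() < a {
        let px = PI * x;
        (px.sin() / px) * ((px / a).sin() / (px / a))
    } else {
        0.0
    }
}

fn main() {
    // Half-texel offsets: the weights alternate in sign (negative lobes).
    for x in [0.5_f32, 1.5, 2.5] {
        println!("L({x}) = {:+.4}", lanczos3(x));
    }
    // Whole-texel offsets: only the center tap carries weight, so sampling
    // exactly on the source grid effectively picks a single pixel.
    for x in [0.0_f32, 1.0, 2.0] {
        println!("L({x}) = {:+.4}", lanczos3(x));
    }
}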

@virtualritz
Author

virtualritz commented Jun 9, 2023

Effectively we need to ask the input "hey, are you linearized?", and that is not something the example currently does, but since the downsample() API function from this crate receives a bucket of rgb(a)8 bytes anyway, we might as well clarify and require that it is linearized already.

You may want to look at colstodian or its base, kolor, which contain types that make these guarantees and which you could enforce as inputs.
You would then split the API into downsampling and upsampling parts, I guess, the latter being unconstrained in this way.

The page also mentions that "The Lanczos filter is the sole one that's mathematically perfect", so I feel like we might have implemented it incorrectly :)

A most likely correct implementation that could be used as a reference is the one in OpenImageIO, which matches the output of the one found in ImageMagick (i.e. another OSS implementation). Speaking of which, note that the OIIO source has this comment, which hints at you being in good company (Pixar, no less):

lanczos3: oiio, katana, imagemagick match. prman is far sharper
(perhaps lanczos2?). Note that Nuke calls this filter "lanczos6" (they
measure full width).

@MarijnS95
Member

MarijnS95 commented Jun 9, 2023

@virtualritz Thank you for these resources: I'm not well-versed in image conversion algorithms but will give them a good read.

As expected, I think I have found the mistake in the code: we apply a 7x7 kernel to read source pixels and write them back to the target image, but this kernel is applied at the target texture dimensions instead of the source texture dimensions. Meaning that when we scale the (normalized, float) coordinates from the target to the source dimensions (between which there is a 2:1 ratio), we only sample odd or even rows/columns, hence this color pattern!

float<2> texel_offset = {x, y};
float<2> texel_uv = center + texel_offset * inv_target_size;

Furthermore, there are also some strange clamps that cause boundary pixels to be sampled more often when filtering edge pixels.
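
A simplified, hypothetical sketch of the coordinate mapping described above (illustrative Rust, not the crate's actual ISPC kernel, and the real fix in the linked branch may differ): a kernel tap offset has to be expressed in source texels, so it should be scaled by the inverse source size; scaling it by the inverse target size instead makes each tap step two source texels at a 2:1 ratio, skipping every other row/column:

// `center` is the destination pixel's position mapped into normalized
// source UV space; `(dx, dy)` is the kernel tap offset in whole texels.
fn tap_uv(center: (f32, f32), (dx, dy): (f32, f32), source_size: (f32, f32)) -> (f32, f32) {
    // Buggy variant (as described above): dividing the tap offset by the
    // *target* size makes one tap step span two source texels at a 2:1
    // ratio, so only every other source row/column is ever touched:
    //
    //   center.0 + dx / target_size.0, center.1 + dy / target_size.1
    //
    // Intended variant: one kernel tap steps exactly one *source* texel.
    (center.0 + dx / source_size.0, center.1 + dy / source_size.1)
}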

@MarijnS95
Member

MarijnS95 commented Jun 9, 2023

https://github.com/Traverse-Research/ispc-downsampler/compare/lanczos3-kernel-range

Quickly cobbled together the two changes in minimal form to test this out; still a bunch TBD:

  • Understand, clean up and remove the various "center"s and float calculations;
  • Stop clamping at the boundary edges, just skip out-of-bounds pixels?
  • Provide parameters to optionally linearize the input and output (would be efficient to do in the ISPC kernel as opposed to requiring it to be linearized from Rust).

square_test_result

@KYovchevski
Collaborator

For the linearization: I already have a local branch that introduces new formats for gamma corrected textures, as well as a new kernel for the linearization which we can invoke depending on the format.
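
The shape of such an API might look roughly like the following (hypothetical names, not the actual branch): a pixel-format enum that distinguishes sRGB-encoded from linear data, so the caller declares what it is handing over and the linearization kernel only runs when needed.

// Hypothetical pixel-format descriptor; the real branch may name and
// structure this differently.
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum Format {
    // Channels are already linear; downsample directly.
    Rgba8Unorm,
    // Channels are sRGB-encoded; linearize before filtering and
    // re-encode afterwards.
    Rgba8UnormSrgb,
}

pub fn needs_linearization(format: Format) -> bool {
    matches!(format, Format::Rgba8UnormSrgb)
}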

@virtualritz
Author

virtualritz commented Jun 9, 2023

Stop clamping at the boundary edges, just skip out-of-bounds pixels?

All texture resizing tools I have ever used (and that's a ton) require a boundary mode to be specified during downsampling, usually for mip-map generation.

The usual modes (specified for u & v separately, if needed) are:

  • Black (pixels outside the image window are zero)
  • Clamp (edge pixels repeat infinitely)
  • Periodic (wrap to the other side)

The reason is that later, when you look up the texture, the texture sampler needs to know what to do, and if you have an override that contradicts what was done during mip level generation, you may get artifacts.
This is often especially visible on displacement/bump/normal maps that are meant to tile (i.e. are 'periodic').

For downsampling normal images, e.g. photos, you definitely want clamp though.
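
A minimal sketch of these three boundary modes as they might apply to a source texel index during filtering (illustrative only; the names are not taken from this crate):

// Illustrative boundary/address modes for out-of-range source coordinates.
enum AddressMode {
    Black,    // out-of-bounds taps contribute zero
    Clamp,    // out-of-bounds taps repeat the nearest edge texel
    Periodic, // out-of-bounds taps wrap around to the other side
}

// Resolve a possibly out-of-range texel index `i` against an image
// dimension of length `size`; `None` means the tap contributes nothing.
fn resolve(i: i32, size: i32, mode: AddressMode) -> Option<i32> {
    match mode {
        AddressMode::Black => (0..size).contains(&i).then_some(i),
        AddressMode::Clamp => Some(i.clamp(0, size - 1)),
        AddressMode::Periodic => Some(i.rem_euclid(size)),
    }
}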

@MarijnS95
Member

Hi @virtualritz, thanks for the insights once again, and apologies for forgetting about this issue. I've started solving all the other problems found in this crate and have now added a gamma/degamma parameter on the input and output of this crate to expose this behaviour:

https://github.com/Traverse-Research/ispc-downsampler/compare/degamma

Unfortunately there's still some color shifting, and I haven't investigated whether that might be caused by our implementation of the lanczos3 filter and/or the 0.5 offset with which we sample it. If I remove that offset, the current pixel is weighted 1 and the others 0, resulting in a fully pink image because only even (or odd) pixels are taken into the final result:

https://www.desmos.com/calculator/rxasm8o0ne

square_test_result

@virtualritz
Author

This already looks leaps and bounds better than before.

The only suggestion I have is to look at/compare with the respective filter code in OIIO.

MarijnS95 linked a pull request Aug 26, 2023 that will close this issue