
Luminance HDR librtprocess integration #185

Open
fcomida opened this issue Jun 14, 2019 · 84 comments

@fcomida (Contributor) commented Jun 14, 2019

@LuminanceHDR/collaborators @heckflosse @Beep6581
I pushed the librtprocess-integration branch.
The work for integrating librtprocess will be done there.

@heckflosse (Contributor) commented Jun 14, 2019

@fcomida Maybe you can use this (from my old branch) as a starting point
fbf1af6

fcomida added a commit that referenced this issue Jun 14, 2019
update the build system for locating librtprocess.
@fcomida fcomida added this to the 2.6.1 Release milestone Jun 14, 2019
fcomida added a commit that referenced this issue Jun 15, 2019
First attempt, output has to be normalized, etc...
fcomida added a commit that referenced this issue Jun 16, 2019
need more information.
@fcomida (Contributor, Author) commented Jun 16, 2019

@heckflosse I'm experimenting with the demosaicing algorithms. Do you have any links to docs/info, mainly about Foveon/X-Trans?

@heckflosse (Contributor) commented Jun 16, 2019

@fcomida
foveon records 3 channels => no demosaic
xtrans: 3-pass is the best to resolve details, 1-pass is good-enough for noisy shots, Dual-demosaic (xtrans+fast) is not yet in librtprocess

fcomida added a commit that referenced this issue Jun 16, 2019
@heckflosse (Contributor) commented Jun 16, 2019

@fcomida plain xtrans-fast should only be used for preview. Use

```cpp
markesteijn_demosaic(W, H, rawdata, r, g, b, x_trans, C.rgb_cam, callback, 1, false);
```

or

```cpp
markesteijn_demosaic(W, H, rawdata, r, g, b, x_trans, C.rgb_cam, callback, 3, true);
```

@fcomida (Contributor, Author) commented Jun 16, 2019

@heckflosse OK, the xtrans matrix should be read/exposed by LibRaw, but apparently it is not. There's also an xtrans_abs to be used for image borders.

@heckflosse (Contributor) commented Jun 16, 2019

@fcomida Have a look at the hdrmerge code for X-Trans. It also uses LibRaw to decode the RAFs:
https://github.com/jcelaya/hdrmerge/search?q=xtrans&unscoped_q=xtrans

Anyway, I will chime in soon. I just had to port the RT code to librtprocess these last few days...

@heckflosse (Contributor) commented Jun 16, 2019

@fcomida Maybe @butcherg, @aferrero2707, @CarVac can give a helping hand porting lhdr using librtprocess. They all use librtprocess in combination with libraw.

@fcomida (Contributor, Author) commented Jun 16, 2019

@heckflosse thank you. It seems I got it right for regular bayer filters. I'll leave xtrans for another day.

@butcherg (Contributor) commented Jun 17, 2019

My libraw/librtprocess is a little off-beat; I collect the cfarray into a string in order to attach it to my internal image here:

https://github.com/butcherg/rawproc/blob/master/src/rawimage.cpp#L880

Then, I parse that string into a cfarray as expected by librtprocess in this routine:

https://github.com/butcherg/rawproc/blob/master/src/gimage.cpp#L2432

that each demosaic method uses to construct the array for calling librtprocess, like this in gImage::ApplyDemosaicXTRANSFAST():

https://github.com/butcherg/rawproc/blob/master/src/gimage.cpp#L2432

Note that xtranArray() returns NULL if the CFARRAY is not 6x6, so it also is an error check...
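The string round-trip described above can be sketched like this (illustrative names, not rawproc's actual API); returning an empty optional when the string is not exactly a 6x6 grid of valid CFA digits mirrors the error check that xtranArray() performs:

```cpp
#include <array>
#include <optional>
#include <string>

// Parse a 6x6 CFA serialized as 36 digit characters ('0'..'2', one per
// cell, row-major) back into the unsigned array layout librtprocess
// expects. std::nullopt doubles as the "not a 6x6 X-Trans pattern" check.
inline std::optional<std::array<std::array<unsigned, 6>, 6>>
parseXtransString(const std::string &s) {
    if (s.size() != 36) return std::nullopt;
    std::array<std::array<unsigned, 6>, 6> cfa{};
    for (int i = 0; i < 6; ++i)
        for (int j = 0; j < 6; ++j) {
            const char c = s[i * 6 + j];
            if (c < '0' || c > '2') return std::nullopt;
            cfa[i][j] = unsigned(c - '0');
        }
    return cfa;
}
```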

@CarVac (Contributor) commented Jun 17, 2019

In Filmulator this is where I grab the color filter array stuff:

https://github.com/CarVac/filmulator-gui/blob/09306d6247ee739002b689b8b2accc947c5b2e1d/filmulator-gui/core/imagePipeline.cpp#L204

I detect whether it's X-Trans or not by recording the max value of the x-trans array, and use that to determine whether to call the Bayer demosaic algorithm(s) or the X-Trans one.
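That detection heuristic can be sketched as follows (assuming LibRaw's `char xtrans[6][6]` layout; `isXtrans` is a hypothetical helper, not a LibRaw API):

```cpp
#include <algorithm>

// Heuristic from the discussion: LibRaw's xtrans array stays all zeros
// for Bayer sensors, so a nonzero maximum means the file is X-Trans and
// the X-Trans demosaic should be called instead of the Bayer one.
inline bool isXtrans(const char (&xtrans)[6][6]) {
    char maxVal = 0;
    for (int i = 0; i < 6; ++i)
        for (int j = 0; j < 6; ++j)
            maxVal = std::max(maxVal, xtrans[i][j]);
    return maxVal > 0;
}
```

Note this is exactly why the Super CCD RAFs discussed below misbehave: their xtrans array is empty, so the Bayer path gets taken.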

fcomida added a commit that referenced this issue Jun 17, 2019
I have a couple of Fuji RAFs for which the xtrans matrix is empty, so amaze is called but the output is wrong (pixelated).
@fcomida (Contributor, Author) commented Jun 17, 2019

@butcherg @CarVac Thank you guys.

This Fuji S5Pro RAF doesn't work.
Fuji_S5PRO_V106_RAF

@CarVac (Contributor) commented Jun 17, 2019

The S5 Pro is not X-Trans, it's Super CCD EXR.

@fcomida (Contributor, Author) commented Jun 17, 2019

@CarVac xtrans matrix is empty so it calls amaze (with regular cfa matrix).

@CarVac (Contributor) commented Jun 17, 2019

Hmm. I actually have no clue how to properly handle that.

@fcomida (Contributor, Author) commented Jun 17, 2019

@CarVac Yeah, it's not a rectangular sensor. EDIT: the sensor is rectangular, the pixels are octagonal.

@heckflosse (Contributor) commented Jun 17, 2019

@fcomida Maybe interesting for you also on Linux:
CarVac/librtprocess#43
Can you test whether adding -DCMAKE_BUILD_TYPE="Release" to the cmake command makes a difference (performance-wise) on Linux?

@fcomida (Contributor, Author) commented Jun 17, 2019

@heckflosse Huge difference!!!

@fcomida (Contributor, Author) commented Jun 17, 2019

I'm trying to figure out how to handle Fuji SuperCCD.

@heckflosse (Contributor) commented Jun 17, 2019

@fcomida Franco, on Windows I get this error:

```
Z:/H2/lhdr26/src/Libpfs/io/rawreader.cpp:541:28: error: 'uint' was not declared in this scope; did you mean 'vint'?
  541 |             xtrans[i][j] = uint(m_processor.imgdata.idata.xtrans[i][j]);
      |                            ^~~~
      |                            vint
```
fcomida added a commit that referenced this issue Jun 17, 2019
@fcomida (Contributor, Author) commented Aug 20, 2019

Much, much better HDRs from RAWs

hdr-old-1
OLD 1

hdr-new-1
NEW 1

hdr-old-2
OLD 2

hdr-new-2
NEW 2

hdr-old-3
OLD 3

hdr-new-3
NEW 3

@CarVac (Contributor) commented Aug 20, 2019

To me that just looks like a change in gamma (new has much more contrast). Is that really because of librtprocess, or were there other changes too?

@fcomida (Contributor, Author) commented Aug 20, 2019

@CarVac you are right. It's now linear, as it should have been from the beginning...

@fcomida (Contributor, Author) commented Aug 20, 2019

@CarVac I'd say integrating librtprocess into lhdr gave us the opportunity to take a second look at how the HDR was being created.

@fcomida (Contributor, Author) commented Aug 20, 2019

@heckflosse @CarVac So the way to go is working in a linear colorspace.

from-jpg-new-1
NEW FROM JPEG 1

from-jpg-old-1
OLD FROM JPEG 1

from-jpg-new-2
NEW FROM JPEG 2

from-jpg-old-2
OLD FROM JPEG 2

from-jpg-new-3
NEW FROM JPEG 3

from-jpg-old-3
OLD FROM JPEG 3

fcomida added a commit that referenced this issue Aug 21, 2019
avoid interference with librtprocess
@Beep6581 (Contributor) commented Aug 22, 2019

@fcomida "NEW 1" and "NEW 3" look clipped in the highlights, even in the tiny histogram, while "OLD 1" and "OLD 3" are not clipped in the highlights. Are you sure the new algo is behaving correctly?

@fcomida (Contributor, Author) commented Aug 22, 2019

@Beep6581 I left out the normalization step...
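For reference, the missing normalization step might look like this (an assumed form, not the actual Luminance HDR code): scale the merged frame so the global maximum over all three channels lands at 1.0, which brings highlights back into the displayable range while preserving channel ratios:

```cpp
#include <algorithm>
#include <vector>

// Normalize a merged HDR frame (planar float channels) to the 0..1 range
// by dividing every sample by the single global maximum. Using one scale
// factor for all channels keeps the white balance intact.
inline void normalizeFrame(std::vector<float> &r,
                           std::vector<float> &g,
                           std::vector<float> &b) {
    float maxVal = 0.f;
    for (auto *ch : {&r, &g, &b})
        for (float v : *ch) maxVal = std::max(maxVal, v);
    if (maxVal <= 0.f) return;  // empty or all-black frame: nothing to scale
    const float scale = 1.f / maxVal;
    for (auto *ch : {&r, &g, &b})
        for (float &v : *ch) v *= scale;
}
```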

@Beep6581 (Contributor) commented Aug 22, 2019

@fcomida could you show a "new" image with the normalization step, using sample set 3?

@fcomida (Contributor, Author) commented Aug 22, 2019

@Beep6581

hdr-new
NEW FROM JPEG 1

from-jpg-old-1
OLD FROM JPEG 1

@fcomida (Contributor, Author) commented Aug 22, 2019

@Beep6581 @heckflosse @CarVac What do you guys think? Robertson looks better to me too.
Still some issues with highlights and very dark pixels, though.
new-tm
NEW TM

old-tm
OLD TM

@CarVac (Contributor) commented Aug 22, 2019

Are you applying highlight recovery only to the very brightest image, or to the merged stack?

Does the sun exhibit the same artifact in the darkest exposure only?

fcomida added a commit that referenced this issue Aug 23, 2019
Think about how to handle JPEGs that lack a color profile
@Beep6581 (Contributor) commented Aug 23, 2019

Are we using good sample images? The samples above seem to have camera movement.

The images labelled "new" also seem to lean towards a green cast.

I can take part in testing over the weekend - which branch has the "new" code?

@fcomida (Contributor, Author) commented Aug 23, 2019

@CarVac highlight recovery is only applied to RAW images (to each image of the stack).
@Beep6581 Previously a gamma curve was applied to RAW files; I've now changed it to linear. The code is in master.
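Switching from a gamma curve to linear means gamma-encoded sources (such as sRGB JPEGs) need decoding to linear before fusion, while raw data is already linear and should get no curve at all. For reference, the standard sRGB-to-linear transfer function:

```cpp
#include <cmath>

// Standard sRGB EOTF: linear segment near black, power curve above the
// 0.04045 knee. Input and output are both in the 0.f..1.f range.
inline float srgbToLinear(float v) {
    return (v <= 0.04045f) ? v / 12.92f
                           : std::pow((v + 0.055f) / 1.055f, 2.4f);
}
```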

@fcomida (Contributor, Author) commented Aug 23, 2019

@heckflosse @CarVac have you seen #199?

@CarVac (Contributor) commented Aug 23, 2019

You should only apply highlight recovery to the darkest image(s), or else you'll be mixing guesses with real measured data.

@fcomida (Contributor, Author) commented Aug 23, 2019

@CarVac Thank you. I have to make a few changes then, since the rawreader doesn't know about "the darkest image", and we also need those camera multipliers from the raw files, don't we?

@CarVac (Contributor) commented Aug 23, 2019

Yes, you need the white balance multipliers applied to actually benefit from highlight reconstruction.

The best way to do this would be applying white balance to the merged stack, and then doing highlight recovery.
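That ordering could be sketched like this (the helper name and the single pre-WB clipping point are assumptions for illustration, not Luminance HDR code): apply the multipliers to the merged stack and track how the clipping point scales per channel, since that per-channel asymmetry is what highlight reconstruction exploits afterwards:

```cpp
#include <vector>

// Per-channel clipping points after white balance; the channels clip at
// different levels once the multipliers differ, which is the precondition
// for highlight reconstruction.
struct WbResult { float clmaxR, clmaxG, clmaxB; };

// Apply white-balance multipliers to planar float channels of the merged
// stack, and report where each channel now clips (assuming a common
// pre-WB clipping point rawClip).
inline WbResult applyWhiteBalance(std::vector<float> &r,
                                  std::vector<float> &g,
                                  std::vector<float> &b,
                                  float mulR, float mulG, float mulB,
                                  float rawClip) {
    for (float &v : r) v *= mulR;
    for (float &v : g) v *= mulG;
    for (float &v : b) v *= mulB;
    return {rawClip * mulR, rawClip * mulG, rawClip * mulB};
}
```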

@heckflosse (Contributor) commented Aug 23, 2019

> have you seen #199?

CarVac/librtprocess@6f780ef

@fcomida (Contributor, Author) commented Aug 23, 2019

@CarVac and I'd need a way to differentiate RAWs from JPEGs

@Hombre57 (Contributor) commented Aug 23, 2019

@fcomida Can't you read the metadata? Or did I misunderstand your question?

Are you using librtprocess for reading/demosaicing files, or do you plan/agree to use it after the merge, e.g. by using the tone-mapping tool?

@CarVac (Contributor) commented Aug 23, 2019

I'm not sure of the utility of supporting JPEGs at all for someone intent on maximizing their HDR quality...

@fcomida (Contributor, Author) commented Aug 23, 2019

@CarVac It's just that HL recovery should now be done outside the rawreader, which doesn't know which image is the "darkest one" in the stack. RawReader unpacks the file, does some processing (and now demosaicing via librtprocess), and puts the resulting frame in a data structure (a vector of frames plus other info). From there, HDR fusion is done by the Debevec or Robertson algorithms.
But now we have to apply HL recovery to the darkest frame. What information do I need in order to apply HL recovery correctly before creating the HDR? The data inside that vector of frames can come from JPEGs as well, hence my previous question about JPEGs. Hope it's clearer now.

@CarVac (Contributor) commented Aug 24, 2019

Once the HDR fusion is performed, all you need is:

- chmax, the max of each channel after WB. This can be measured from the actual data.
- clmax, the maximum possible value of each channel (the clipping point) after WB. This is metadata that needs to be passed along. I don't know whether Luminance HDR has this information at the end, but if I were to write an HDR program I would keep track of it, so that I could simply use the minimum one as the default whitepoint when outputting.

However, highlight reconstruction can only be performed when clmax is not the same for all channels. With JPEGs it is the same for all channels, since the data was already clipped after white balance, so reconstruction isn't possible there.
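The bookkeeping above can be sketched as follows (illustrative helpers, not an existing API; clmax is indexed R, G, B):

```cpp
#include <algorithm>

// Default output whitepoint: the smallest per-channel clipping point
// after WB, so no channel is pushed past its own clipping level.
inline float defaultWhitepoint(const float clmax[3]) {
    return std::min({clmax[0], clmax[1], clmax[2]});
}

// Highlight reconstruction only makes sense when the channels clip at
// different levels (i.e. the data was not already clipped after WB).
inline bool canReconstructHighlights(const float clmax[3]) {
    return !(clmax[0] == clmax[1] && clmax[1] == clmax[2]);
}
```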

@fcomida (Contributor, Author) commented Aug 24, 2019

@CarVac the data available as input to the HDR fusion stage is: the R, G, B channels for each image of the stack (each channel is a WxH matrix of floats), plus the EVs read from the Exif metadata of each image. Both RawReader and JpegReader output floating-point data. JpegReader applies a colorspace conversion via littlecms if a profile is embedded in the input file; otherwise each pixel is just copied to the output Frame (made up of the 3 channels above) after being converted to the 0.f-1.f float range. For raw files, colorspace conversion is done by LibRaw.
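The 8-bit-to-float conversion described for JpegReader might look like this minimal sketch (assumed interleaved-RGB input and planar output; not the actual Frame type):

```cpp
#include <cstdint>
#include <vector>

// Deinterleave 8-bit RGB pixel data into three planar float channels in
// the 0.f..1.f range, matching the per-channel WxH float layout the
// fusion stage consumes.
inline void rgb8ToPlanarFloat(const std::vector<std::uint8_t> &rgb,
                              std::vector<float> &r,
                              std::vector<float> &g,
                              std::vector<float> &b) {
    const std::size_t n = rgb.size() / 3;  // number of pixels
    r.resize(n); g.resize(n); b.resize(n);
    for (std::size_t i = 0; i < n; ++i) {
        r[i] = rgb[3 * i + 0] / 255.f;
        g[i] = rgb[3 * i + 1] / 255.f;
        b[i] = rgb[3 * i + 2] / 255.f;
    }
}
```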
