
HDR data preprocess flow before training #30

Open
butterl opened this issue Jul 20, 2018 · 15 comments
butterl commented Jul 20, 2018

Hi , @mgharbi @jiawen

Thanks for sharing this great project; I have learned a lot from your paper and code :)

I'm going to do a comparison with See-in-the-Dark and Exposure, but I still failed to use the pretrained model, so I decided to train an HDR model (based on the great open-source HDR+ data).

For data preprocessing, do I need to transform the DNG files to PNG, or to NV12 (the camera API output), as training input?

I tried preprocessing like letmaik/rawpy#12 with the test set:

import numpy as np
import rawpy
import skimage.io

with rawpy.imread(input_path) as raw:
    # First pass: unity white balance, linear demosaic, just to measure channel averages
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=8, user_wb=[1.0, 1.0, 1.0, 1.0],
                               no_auto_bright=True, bright=1.0,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.LINEAR)
    avgR = np.average(im_input[..., 0])
    avgG = np.average(im_input[..., 1])
    avgB = np.average(im_input[..., 2])
    # Gray-world white balance: scale R and B so their averages match G
    extrMult = [avgG / avgR, 1.0, avgG / avgB, 1.0]
    extrBright = avgG / avgB
    # Second pass: 16-bit output with the estimated white balance and AHD demosaic
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=16, user_wb=extrMult,
                               no_auto_bright=True, bright=extrBright,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD)
skimage.io.imsave(output_path, np.squeeze(im_input))

The output of preprocessing "merged.dng" looks like this:
image

But in Photoshop, "merged.dng" looks like this:
image

I'm not sure whether my preprocessing meets the training-pair requirements.

The best I get from the pretrained model is like the left image (the color is strange: a green cast, and dark corners), but the HDR picture (final.jpg) in the training pair has much better color. The training preprocessing really affects the output a lot.
image

I'd be very thankful for any information.


mgharbi commented Jul 26, 2018 via email


butterl commented Jul 27, 2018

Hi Michael,
Thanks for reaching out! @mgharbi I've started preprocessing for my own model with the HDR+ data: merged.dng -> input.jpg and final.jpg -> output.jpg, saving with cv2's write to avoid the sRGB issue. Will this be OK?

Also, any suggestion on applying the lens shading map .tiff in the dataset? All the preprocessed pictures have the dark-corner issue, and we have not found a way to use the lens shading map data. This seems to affect the model output a lot (raw files from my phone do not have lens shading after the same preprocessing).
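For what it's worth, a minimal sketch of applying a lens shading map, assuming it is a low-resolution per-channel gain map (values >= 1.0, largest toward the corners) that has to be upsampled to the image size. The function name and shape conventions here are illustrative, not necessarily the dataset's actual format:

```python
import numpy as np
from scipy.ndimage import zoom

def apply_lens_shading(image, shading_map):
    """Multiply a linear RGB image by a per-channel gain map to undo vignetting.

    image: float32 array of shape (H, W, 3), linear (pre-gamma) RGB.
    shading_map: float32 array of shape (h, w, 3), low-resolution gains
    (assumed convention: >= 1.0, largest in the corners).
    """
    h, w = image.shape[:2]
    mh, mw = shading_map.shape[:2]
    # Upsample the gain map to full resolution (order=1 -> bilinear)
    gains = zoom(shading_map, (h / mh, w / mw, 1.0), order=1)
    return image * gains
```

Applying a correction like this before white balancing would remove the dark corners at the source, rather than asking the network to learn them.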

BRs
butter


mgharbi commented Jul 31, 2018 via email


butterl commented Aug 6, 2018

Hi @mgharbi, thanks for reaching out!
I tried with the preprocessed JPG pairs and ran for several days. The model does learn the lens shading, but the PSNR is no more than 21 dB over several tries (the output is not good even on the training data). Do you have any suggestions for debugging the PSNR issue? This is far from the paper's result (PSNR = 28.4 dB).

image

image
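Since the thread compares 21 dB against 28.4 dB, it may help to pin down the metric. A standard PSNR definition over the mean squared error (the peak value here is an assumption about the image range):

```python
import numpy as np

def psnr(reference, output, peak=1.0):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

As a sanity check, a uniform error of 0.1 on a [0, 1] image gives exactly 20 dB, roughly the range reported in this thread.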


mgharbi commented Aug 6, 2018 via email

@ANSHUMAN87

Hi @mgharbi, thanks for this great project!

I am training with the HDR+ dataset using the parameters below (parameters not mentioned are taken as defaults from your train.py):

Adam Optimizer-epsilon=0.001
learning_rate=1e-3
batch_size=23
batch_norm=True
output_resolution=[512, 512]
data_pipeline=ImageFilesDataPipeline

But the PSNR gets stuck at 20-21 dB.
I am not sure where I am making a mistake.
Would you please guide me so that I can get training up to 30 dB?
For my tests, I am evaluating on a subset of the training dataset.


butterl commented Aug 9, 2018

@mgharbi we tried for several days, but the PSNR still stays below 21 dB. The seen data does better than the unseen data (both outputs seem to lack contrast and sharpness, and the colors differ?)

unseen data test: 0155_20160927_144914_850.jpg from the HDR+ dataset
image

unseen data test: 0039_20141010_150928_913.jpg from the HDR+ dataset
image

seen data test: 0006_20160721_163256_525.jpg from the HDR+ dataset
image


mgharbi commented Aug 9, 2018 via email


crazy0126 commented May 23, 2019

Hi @butterl
I used the method you mentioned above to postprocess merged.dng, but the output image looks like this:
merged

Code:

import numpy as np
import rawpy
import skimage.io

input_path = "/root/codes/merged.dng"
with rawpy.imread(input_path) as raw:
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=8, user_wb=[1.0, 1.0, 1.0, 1.0],
                               no_auto_bright=True, bright=1.0,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.LINEAR)
    avgR = np.average(im_input[..., 0])
    avgG = np.average(im_input[..., 1])
    avgB = np.average(im_input[..., 2])
    extrMult = [avgG / avgR, 1.0, avgG / avgB, 1.0]
    extrBright = avgG / avgB
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=16, user_wb=extrMult,
                               no_auto_bright=True, bright=extrBright,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD)

skimage.io.imsave(output_path, np.squeeze(im_input))

Could you please tell me the correct method?


butterl commented May 27, 2019

@crazy0126 the code looks the same; I preprocessed the data just like that.
P.S. With a self-trained model, the PSNR could not even reach 22 dB (300k+ training steps), so I suspended this experiment.

@yanmenglu

@butterl

One thing I can think of is the guidance map could be degenerated, which would lead to some unused bins in the bilateral grid.

You can debug this by visualizing the bilateral coefficients produced by the model (i.e. the various h x w planes of the output [bs, h, w, grid_depth, 12] tensor), see Fig.8 in the paper. If some planes are constant = 0, that would be a symptom of this problem occurring.

I also meet this problem. How can I resolve it? Thank you very much.
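The quoted debugging advice can be sketched as a simple check over the model's output tensor. This assumes a NumPy array of shape [bs, h, w, grid_depth, 12] as described above; the function name is made up for illustration:

```python
import numpy as np

def find_zero_planes(coeffs, tol=1e-6):
    """Return (depth, channel) indices of bilateral-grid planes that are
    (near-)constant zero -- a symptom of a degenerate guidance map leaving
    bins of the grid unused.

    coeffs: array of shape [bs, h, w, grid_depth, 12].
    """
    _, _, _, depth, channels = coeffs.shape
    zero_planes = []
    for k in range(depth):
        for c in range(channels):
            if np.max(np.abs(coeffs[:, :, :, k, c])) < tol:
                zero_planes.append((k, c))
    return zero_planes
```

Plotting each plane as an image (as in Fig. 8 of the paper) shows the same thing visually; the check above just flags the all-zero planes automatically.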


addggh commented Sep 4, 2020

Hi , @mgharbi @jiawen

Thank you for the code. It's a great piece of work!

Recently, I've tried to reproduce the training of HDRNet on HDR+. I think the training input is the 16-bit data obtained after merged.dng from the HDR+ dataset (https://hdrplusdata.org) has undergone operations such as black-level subtraction and demosaicing, and the training output is the 8-bit data corresponding to final.jpg. That is, the network should learn the conversion corresponding to the part of the HDR+ pipeline shown in red in the figure below. But I am not sure. Would you mind telling me whether this is correct? Thank you very much.

Screenshot from 2020-09-04 21-57-46 png
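A minimal sketch of the black-level-subtraction step described above (scalar black/white levels are an assumption here; real DNGs may specify them per Bayer channel, which would be handled analogously):

```python
import numpy as np

def raw_to_linear16(raw, black_level, white_level):
    """Map a Bayer raw frame to 16-bit linear data before demosaicing.

    raw: integer array of sensor values.
    black_level / white_level: scalars from the DNG metadata (assumed
    scalar for simplicity).
    """
    # Subtract the black level and normalize to [0, 1]
    lin = (raw.astype(np.float64) - black_level) / float(white_level - black_level)
    lin = np.clip(lin, 0.0, 1.0)
    # Requantize to the full 16-bit range
    return np.round(lin * 65535.0).astype(np.uint16)
```

The demosaic step would then run on this linearized frame, producing the 16-bit RGB input paired with the 8-bit final.jpg target.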

@wensongc

Hi @butterl, did you reproduce the 28.8 dB result on the HDR+ dataset? I have trained the model from scratch and met the same problem as you. Hoping for your reply!


butterl commented May 24, 2022

Hi @butterl, did you reproduce the 28.8 dB result on the HDR+ dataset? I have trained the model from scratch and met the same problem as you. Hoping for your reply!

No more than 21 dB in my tests. I think the preprocessing may be the main issue (the lens shading map in the dataset may also be needed in preprocessing, but I didn't succeed with it); I'm not sure which options were used in the paper.

@wensongc

Thank you for your reply! I used the lens_shading_map.tiff in the dataset to correct the lens shading. And I think the demosaicking algorithm should also match the HDR+ pipeline, because this preprocessing can affect all the subsequent results.
