HDR data preprocess flow before training #30
Hi,
The difference inside / outside Photoshop is most likely due to an unspecified ICC profile (i.e., your image is linear, but the skimage.io output marks it as encoded in sRGB, which Photoshop erroneously corrects for display). This can be fixed by doing “Assign Color Profile > Linear RGB” in Photoshop.
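Alternatively, you could bake the sRGB transfer curve into a separate preview copy before saving, so that viewers which assume sRGB display the linear data as intended. A minimal sketch (save_preview is a made-up helper name, not part of our code):

import numpy as np
import skimage.io

def save_preview(linear_rgb, path):
    # Apply the sRGB transfer curve so viewers that assume sRGB
    # display the linear data correctly.
    x = np.clip(linear_rgb.astype(np.float64), 0.0, 1.0)
    srgb = np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)
    skimage.io.imsave(path, (srgb * 255.0 + 0.5).astype(np.uint8))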
Our pretrained HDR model is known to be incompatible with the released version of the code, so you’ll have to retrain your own. Sorry about that.
Best,
Michael
… On Jul 20, 2018, at 5:17 AM, butterl ***@***.***> wrote:
Hi @mgharbi, @jiawen,
Thanks for sharing this great project, I have learned a lot from your paper and code :)
I'm going to do a comparison with Learning-to-See-in-the-Dark (https://github.com/cchen156/Learning-to-See-in-the-Dark) and exposure (https://github.com/yuanming-hu/exposure), but I still failed to use the pretrained model, so I decided to train an HDR model (based on the great open-source HDR+ data).
For the data preprocessing, do I need to transform the DNG files to PNG or to NV12 (the camera API output) as the training input?
I tried preprocessing like letmaik/rawpy#12, with this test set: https://console.cloud.google.com/storage/browser/hdrplusdata/20171106/results_20161014/0006_20160721_163256_525/
import numpy as np
import rawpy
import skimage.io

with rawpy.imread(input_path) as raw:
    # First pass: quick 8-bit linear render to estimate per-channel averages.
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=8, user_wb=[1.0, 1.0, 1.0, 1.0],
                               no_auto_bright=True, bright=1.0,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.LINEAR)
    avgR = np.average(im_input[..., 0])
    avgG = np.average(im_input[..., 1])
    avgB = np.average(im_input[..., 2])
    extrMult = [avgG / avgR, 1.0, avgG / avgB, 1.0]  # gray-world white-balance gains
    extrBright = avgG / avgB
    # Second pass: 16-bit linear render with the estimated white balance.
    im_input = raw.postprocess(output_color=rawpy.ColorSpace.raw, gamma=(1, 1), half_size=True,
                               use_camera_wb=False, output_bps=16, user_wb=extrMult,
                               no_auto_bright=True, bright=extrBright,
                               demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD)
    skimage.io.imsave(output_path, np.squeeze(im_input))
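(The first postprocess pass just estimates gray-world white-balance gains from the channel averages; the second pass re-renders the raw at 16 bits with those gains applied.)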
The preprocessed output of “merged.dng” looks like this:
<https://user-images.githubusercontent.com/3068190/42994091-afd5c40a-8c3f-11e8-87b4-60f786daaff8.png>
but in Photoshop, “merged.dng” looks like this:
<https://user-images.githubusercontent.com/3068190/42993768-ce1f2600-8c3e-11e8-887b-993864bac481.png>
I'm not sure whether my preprocessing is right for the training-pair requirements.
The best I get from the pretrained model is like the left image (the color is strange: a green cast, and dark corners), but the HDR picture (final.jpg) in the training pair is much better in color. The training preprocessing really affects the output a lot.
<https://user-images.githubusercontent.com/3068190/42993975-6bb5c3ce-8c3f-11e8-8f50-39f5cdf285c6.png>
I'd be very thankful for any information posted.
Hi,
Yes, the cv2 write should put the correct linear data in the file (though it will most likely be assigned an sRGB profile, which may make it look strange in image viewers or other software).
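For example, a minimal sketch of what I mean (assuming im_input is the uint16 RGB array from your rawpy step; the filename is arbitrary):

import cv2

# cv2.imwrite stores the pixel values untouched (no gamma is applied);
# note that OpenCV expects the channels in BGR order.
cv2.imwrite("input.png", cv2.cvtColor(im_input, cv2.COLOR_RGB2BGR))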
Regarding the lens shading map, I suggest you contact the authors of the HDR+ dataset; they should know more. The data we used for our paper was taken at an intermediate step of the HDR+ pipeline, in a format that is no longer supported. Our bilateral learning model should be able to handle lens shading, however.
Best,
Michael
… On Jul 26, 2018, at 10:18 PM, butterl ***@***.***> wrote:
Hi Michael,
Thanks for reaching out! I've started preprocessing for my own model with the HDR+ data: merged.dng -> input and final.jpg -> output, saving with cv2 write to avoid the sRGB issue. Will this be OK?
Also, any suggestions on applying the lens shading map .tiff in the dataset? All the preprocessed pictures have the dark-corner issue, and we have not found a way to use the lens shading map data. This seems to affect the model output a lot (raw files from my phone do not have lens shading after the same preprocessing).
BRs
butter
Hmm, this is odd; your model does not seem to have converged yet.
On the positive side, the validation and training curves are on par, so you're not overfitting. Do you have some input/output/target image triplets you could share by email?
Best,
Michael
… On Aug 6, 2018, at 5:40 AM, butterl ***@***.***> wrote:
Hi @mgharbi, thanks for reaching out!
I tried with the preprocessed JPEG pairs and ran for several days. The model does learn the lens shading, but the PSNR is no more than 21 dB over several tries (the output is not good even on the training data). Do you have any suggestions for debugging the PSNR issue? This is far from the paper's experimental results.
<https://user-images.githubusercontent.com/3068190/43709526-ac124e8e-999f-11e8-99d8-52447169daf8.png>
<https://user-images.githubusercontent.com/3068190/43709322-11cb3926-999f-11e8-9e91-53a62d450724.png>
Hi @mgharbi, thanks for this great project! I am training on the HDR+ dataset with the training parameters below (parameters not mentioned are taken as the defaults from your train.py): Adam optimizer, epsilon=0.001. But the PSNR finally gets stuck at 20-21 dB.
@mgharbi we tried for several days, but the PSNR still stays below 21 dB. The seen data looks better than the unseen data (both outputs seem to lack contrast and sharpness, and the color is different?).
unseen data test: 0155_20160927_144914_850.jpg from the HDR+ dataset
unseen data test: 0039_20141010_150928_913.jpg from the HDR+ dataset
seen data test: 0006_20160721_163256_525.jpg from the HDR+ dataset
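For reference, we compute PSNR with the standard formula, roughly like this minimal sketch (assuming 8-bit images; not necessarily the exact evaluation script used in the paper):

import numpy as np

def psnr(output, target, max_val=255.0):
    # Peak signal-to-noise ratio in dB between two images of the same shape.
    mse = np.mean((output.astype(np.float64) - target.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)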
One thing I can think of is that the guidance map could be degenerate, which would lead to some unused bins in the bilateral grid.
You can debug this by visualizing the bilateral coefficients produced by the model (i.e., the various h x w planes of the output [bs, h, w, grid_depth, 12] tensor); see Fig. 8 in the paper. If some planes are constant = 0, that would be a symptom of this problem occurring.
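A minimal sketch of that visualization (assuming you fetch the coefficients as a numpy array of shape [bs, h, w, grid_depth, 12]; the function name here is made up):

import numpy as np
import matplotlib.pyplot as plt

def plot_grid_planes(coeffs, batch_index=0):
    # coeffs: [bs, h, w, grid_depth, 12] bilateral grid coefficients.
    _, _, _, depth, n_coeffs = coeffs.shape
    fig, axes = plt.subplots(depth, n_coeffs, figsize=(2 * n_coeffs, 2 * depth), squeeze=False)
    for d in range(depth):
        for c in range(n_coeffs):
            plane = coeffs[batch_index, :, :, d, c]
            axes[d][c].imshow(plane, cmap="gray")
            axes[d][c].axis("off")
            # A constant (e.g. all-zero) plane is a symptom of a degenerate guidance map.
            if np.ptp(plane) == 0:
                axes[d][c].set_title("constant", fontsize=6)
    plt.show()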
Michael
… On Aug 9, 2018, at 8:43 AM, butterl ***@***.***> wrote:
@mgharbi we tried several days, but the PSNR still keeps below 21 dB
test: 0155_20160927_144914_850.jpg from the hdrplus dataset
<https://user-images.githubusercontent.com/3068190/43899325-2484c524-9c14-11e8-989b-54c894c221dd.png>
test:
Hi @butterl, about this line of code: skimage.io.imsave(output_path, np.squeeze(im_input)). Could you please tell me the correct method?
@crazy0126 the code looks the same; I preprocessed the data just like that.
“You can debug this by visualizing the bilateral coefficients produced by the model (i.e., the various h x w planes of the output [bs, h, w, grid_depth, 12] tensor); see Fig. 8 in the paper. If some planes are constant = 0, that would be a symptom of this problem occurring.”
I also meet this problem. How can it be resolved? Thank you very much.
Thank you for the code. It's a great piece of work! Recently, I've tried to reproduce the training of HDRNet on HDR+. I think the training-set input is the 16-bit data obtained after merged.dng from the HDR+ dataset (https://hdrplusdata.org) has undergone operations such as black-level subtraction and demosaicing, and the training-set output is the 8-bit data corresponding to final.jpg. That is, what the network should learn is the transformation corresponding to the part of the HDR+ pipeline shown in red in the figure below. But I am not sure. Would you mind telling me whether this is correct? Thank you very much.
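To make the question concrete, this is roughly the preprocessing I mean, as a sketch (my assumption, not the authors' confirmed pipeline):

import numpy as np
import rawpy

with rawpy.imread("merged.dng") as raw:
    # Subtract the black level and normalize the raw Bayer data to [0, 1]...
    bayer = raw.raw_image_visible.astype(np.float32)
    black = np.mean(raw.black_level_per_channel)
    linear = np.clip((bayer - black) / (raw.white_level - black), 0.0, 1.0)
    # ...then demosaic; here done via rawpy's own 16-bit linear rendering instead.
    rgb16 = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)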
Hi @butterl, did you reproduce the 28.8 dB result on the HDR+ dataset? I have trained the model from scratch and met the same problem as you. Hoping for your reply!
No more than 21 dB in my tests. I think the preprocessing may be the main issue (the lens shading map in the dataset may also be needed in preprocessing, but I didn't succeed with it). I'm not sure which options were used in the paper.
Thank you for your reply! I used the lens_shading_map.tiff in the dataset to correct the lens shading. I think the demosaicking algorithm should also match the HDR+ pipeline, because these preprocessing steps can affect all the subsequent results.
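Concretely, I applied it roughly like this sketch (the layout of the map is my assumption: I treat it as a low-resolution gain image to be upsampled and multiplied into the linear data; linear_rgb stands for the linear float image from the earlier preprocessing):

import numpy as np
import skimage.io
import skimage.transform

shading = skimage.io.imread("lens_shading_map.tiff").astype(np.float32)
# Upsample the low-resolution gain map to the image resolution.
gain = skimage.transform.resize(shading, linear_rgb.shape[:2], order=1)
# Multiplying by the gain brightens the dark corners.
corrected = linear_rgb * gain[..., np.newaxis]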