
How to get the result of Table 2 and Table 3 #9

Closed
PK15946 opened this issue Nov 8, 2019 · 5 comments

Comments

@PK15946

PK15946 commented Nov 8, 2019

Hello, @dmarnerides

I notice that ExpandNet produces images in the range [0, 1], while some other methods (like HDRCNN) produce values in [0, n], and the paper mentions that there are two ways to scale the HDR images: display-referred and scene-referred.

So, if I get it right, for display-referred you first normalize the HDR images to [0, 1] by dividing by n (for those in [0, n]), then multiply by 1024, then evaluate them.

For scene-referred, you linearly scale the outputs of ExpandNet to match the max value of the ground-truth EXR/HDR image, then evaluate them.

Am I right?

If not, it would be very helpful if you could provide some code to reproduce the results of Tables 2 and 3 in the paper.

Thanks!

@dmarnerides
Owner

Hi there,

That's about right; the only thing is that for display-referred the factor is 1000 (corresponding to 1000 nits), not 1024. For scene-referred it's the original image maximum.

Also, in the paper it's stated that "The scaling is done to match the 0.1 and 99.9 percentiles of the predictions with the corresponding percentiles of the HDR test images". This is done to better avoid outliers (in all model predictions and in the original HDR images).

This means that, e.g. for display-referred, instead of matching 0 -> 0 and 1 -> 1000, you match perc(prediction, 0.1) -> 1 and perc(prediction, 99.9) -> 999.
For scene-referred you use the original image percentiles instead of 1 and 999.

This matching is simple linear interpolation.
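
Roughly something along these lines, as a sketch only (assuming NumPy; the function name is just illustrative, not the exact evaluation script):

import numpy as np

def percentile_match(pred, target_low, target_high):
    # Linearly map the 0.1 and 99.9 percentiles of the prediction onto the
    # target values (no clipping of values outside the percentile range).
    low, high = np.percentile(pred, (0.1, 99.9))
    return (pred - low) * (target_high - target_low) / (high - low) + target_low

# Display-referred: target_low = 1, target_high = 999 (nits).
# Scene-referred: the targets are the 0.1 / 99.9 percentiles of the original HDR image.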

Hope this helps.
Best,
Demetris

@PK15946
Author

PK15946 commented Nov 19, 2019

So, for display-referred:

low, high = np.percentile(x, (0.1, 99.9))   # x could be in the range [0, 1] or [0, n]
y = np.interp(x, [low, high], [1, 999]).astype(np.float32)

For scene-referred:

low, high = np.percentile(x, (0.1, 99.9))          # x could be in the range [0, 1] or [0, n]
gt_low, gt_high = np.percentile(gt, (0.1, 99.9))   # gt is the ground-truth HDR image
y = np.interp(x, [low, high], [gt_low, gt_high]).astype(np.float32)

Does this code look good to you?

@dmarnerides
Owner

dmarnerides commented Nov 19, 2019 via email

@PK15946
Author

PK15946 commented Nov 19, 2019

Got it! Thank you for helping me out!

@dmarnerides
Owner

No worries, glad to help!
