How to get the results of Table 2 and Table 3 #9
Hi there, that's about right; the only thing is that for display-referred the factor is 1000 (corresponding to 1000 nits), not 1024. For scene-referred it's the original image maximum.

Also, the paper states that "The scaling is done to match the 0.1 and 99.9 percentiles of the predictions with the corresponding percentiles of the HDR test images". This is done to better avoid outliers (in all model predictions and in the original HDR images). It means that, e.g. for display-referred, instead of matching 0 -> 0 and 1 -> 1000, you match perc(prediction, 0.1) -> 1 and perc(prediction, 99.9) -> 999. The matching is simple linear interpolation.

Hope this helps.
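(Spelling out that linear interpolation, with notation that is mine rather than the paper's: writing low = perc(prediction, 0.1) and high = perc(prediction, 99.9), the display-referred mapping is y = 1 + (x - low) * (999 - 1) / (high - low), and scene-referred is the same line with the ground truth's gt_low and gt_high as the targets.)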
So, for display-referred:

```python
low, high = np.percentile(x, (0.1, 99.9))  # x could be in the range [0, 1] or [0, n]
y = np.interp(x, [low, high], [1, 999]).astype(np.float32)
```

and for scene-referred:

```python
low, high = np.percentile(x, (0.1, 99.9))  # x could be in the range [0, 1] or [0, n]
gt_low, gt_high = np.percentile(gt, (0.1, 99.9))
y = np.interp(x, [low, high], [gt_low, gt_high]).astype(np.float32)
```

Does this code look good to you?
Yeah, looks about right, and at the end clamp/clip to [0, max].
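Putting the thread together, here is a minimal sketch of both scalings with the final clamp included (the function names are mine; it assumes `x` and `gt` are NumPy arrays holding the prediction and the ground-truth HDR image):

```python
import numpy as np

def scale_display_referred(x):
    # Match the 0.1/99.9 percentiles of the prediction to 1 and 999 nits
    # (display-referred, 1000-nit display). Note that np.interp also
    # saturates values outside [low, high] at the endpoints 1 and 999.
    low, high = np.percentile(x, (0.1, 99.9))
    y = np.interp(x, [low, high], [1, 999]).astype(np.float32)
    # Final clamp to the display range, as suggested above.
    return np.clip(y, 0.0, 1000.0)

def scale_scene_referred(x, gt):
    # Match the prediction's 0.1/99.9 percentiles to the ground truth's.
    low, high = np.percentile(x, (0.1, 99.9))
    gt_low, gt_high = np.percentile(gt, (0.1, 99.9))
    y = np.interp(x, [low, high], [gt_low, gt_high]).astype(np.float32)
    # Final clamp to [0, max] of the ground truth, as suggested above.
    return np.clip(y, 0.0, gt.max())
```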
Got it! Thank you for helping me out!
No worries, glad to help!
Hello, @dmarnerides
I notice that ExpandNet produces an image in the range [0, 1], while some other methods (like HDRCNN) produce ranges of [0, n], and the paper mentions that there are two ways to scale the HDR images: display-referred and scene-referred.
So, if I get it right: for display-referred, first normalize the HDR images to [0, 1] by dividing by n (for those in [0, n]), then multiply by 1024, then evaluate them.
For scene-referred, linearly scale the outputs of ExpandNet to match the max value of the ground-truth exr/hdr image, then evaluate them.
Am I right?
If not, it would be very helpful if you could provide some code to get the results of Table 2/3 in the paper.
Thanks!