Learning High Dynamic Range from Outdoor Panoramas. ICCV 2017

This code implements the algorithm introduced in:

  • Jinsong Zhang and Jean-François Lalonde, Learning High Dynamic Range from Outdoor Panoramas, International Conference on Computer Vision (ICCV), 2017.

This code takes a single LDR omnidirectional panorama as input and automatically converts it to HDR.
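To see why this conversion requires learning, consider the naive alternative: inverting a global gamma curve. The sketch below (an illustration, not the paper's method; the gamma value and helper name are assumptions) shows that saturated pixels such as the sun stay clipped, so their true radiance cannot be recovered analytically.

```python
import numpy as np

def naive_linearize(ldr_u8, gamma=2.2):
    """Invert a global gamma curve on an 8-bit LDR image.

    This recovers relative radiance only where pixels are not
    saturated; values clipped at 255 remain clipped, which is why
    the sun's true intensity cannot be recovered this way.
    """
    return (ldr_u8.astype(np.float32) / 255.0) ** gamma

# A saturated "sun" pixel next to a mid-gray pixel:
ldr = np.array([[255, 128]], dtype=np.uint8)
lin = naive_linearize(ldr)
# The saturated pixel maxes out at 1.0, far below the sun's real
# radiance, which can be thousands of times brighter than the sky.
```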

For more details, please see our project webpage: http://www.jflalonde.ca/projects/learningHDR.

Important: if you use this code, please cite the paper above!

Getting started

Run ldr2hdr.py to generate an HDR image.

Download the data from our project webpage. It contains all the LDR and HDR images used for training and testing. To train the domain adaptation model, you will also need the SUN360 dataset. Example images are provided in the ./examples folder; they have been aligned by centering the sun.
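The sun-centering alignment mentioned above amounts to a horizontal shift of the equirectangular panorama, since a horizontal shift corresponds exactly to a rotation about the vertical axis. A minimal sketch (the function name is an assumption; the released code may locate the sun differently):

```python
import numpy as np

def center_sun(pano, sun_col):
    """Horizontally roll an equirectangular panorama so that
    column `sun_col` (the sun's azimuth) lands at the center.

    In an equirectangular projection, a horizontal shift is a
    rotation about the vertical axis, so no resampling is needed.
    """
    h, w = pano.shape[:2]
    return np.roll(pano, w // 2 - sun_col, axis=1)

# Toy 4x8 panorama with a bright "sun" in column 1:
pano = np.zeros((4, 8), dtype=np.float32)
pano[:, 1] = 1.0
aligned = center_sun(pano, sun_col=1)
# The bright column is now at the center (column 4).
```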

The network is defined in ldr2hdr_net.py. Two pre-trained models are provided in the model_* folders.

Requirements

Our model is trained with TensorFlow, and OpenEXR is used to write HDR images.
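The predicted HDR panoramas are stored as EXR files, which most image viewers cannot display. For a quick visual check, a simple exposure-plus-gamma tone-map (a common convention, not part of the released code; the function name and defaults are assumptions) converts linear HDR radiance back to a displayable 8-bit preview:

```python
import numpy as np

def tonemap(hdr, exposure=1.0, gamma=2.2):
    """Map linear HDR radiance to an 8-bit preview image via an
    exposure scale followed by gamma compression. This is only
    for inspection; the EXR file keeps the full dynamic range.
    """
    x = np.clip(hdr * exposure, 0.0, None) ** (1.0 / gamma)
    return (np.clip(x, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)

hdr = np.array([[0.0, 1.0, 50.0]], dtype=np.float32)
ldr = tonemap(hdr)
# Radiance above 1.0 (e.g. the sun) clips to 255 in the preview.
```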