Important update (2020/09/13)

Training code has been uploaded. Please refer to the training_code folder and follow the instructions in its README file.

Important update (2020/07/10)

The original webpage and the links to the dataset are currently not accessible. Temporary links to the project website and the datasets are below:

Project website

Training data

Testing data (HDR-Synth)

Testing data (HDR-Real)

Testing data (RAISE)

Testing data (HDR-Eye)

Pre-trained weights

Sorry for the inconvenience.

[CVPR 2020] Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors. In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model. We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization. We then propose to learn three specialized CNNs to reverse these steps. By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks. Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation. With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms. The source code, datasets, and pre-trained model are available at our project website.
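For intuition, the following is a minimal NumPy sketch of the forward HDR-to-LDR formation model described above (dynamic range clipping, a non-linear camera response function, and quantization). The gamma-style CRF, the exposure scaling, and the 8-bit quantizer are illustrative assumptions, not the components learned by the paper's networks; the three sub-networks (whose checkpoints appear in the usage commands below as ckpt_deq, ckpt_lin, and ckpt_hal) learn to reverse these steps in the opposite order.

import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, n_bits=8):
    """Toy forward model of the HDR-to-LDR pipeline:
    (1) dynamic range clipping, (2) non-linear CRF, (3) quantization.
    The gamma CRF and 8-bit quantizer are illustrative stand-ins."""
    # (1) scale by exposure and clip the dynamic range to [0, 1]
    clipped = np.clip(hdr * exposure, 0.0, 1.0)
    # (2) apply a non-linear camera response function (here: a simple gamma curve)
    crf = clipped ** (1.0 / gamma)
    # (3) quantize to 2^n_bits levels, as a camera sensor/ISP would
    levels = 2 ** n_bits - 1
    ldr = np.round(crf * levels) / levels
    return ldr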

[Project]

Paper

Overview

This is the authors' reference TensorFlow implementation of single-image HDR reconstruction described in: "Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline" by Yu-Lun Liu, Wei-Sheng Lai, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang (National Taiwan University, Google, Virginia Tech, University of California at Merced, and MediaTek Inc.), CVPR 2020. If you find this code useful for your research, please consider citing the paper.

For further information, please contact Yu-Lun Liu.

Requirements setup

  • TensorFlow

    • tested with TensorFlow 1.10.0
  • Pre-trained models: download them from the "Pre-trained weights" link above and extract the checkpoint folders so that the paths used in the commands below (e.g., ckpt_deq/model.ckpt) resolve.

Usage

  • Run your own images (using the model trained on our synthetic training data):
CUDA_VISIBLE_DEVICES=0 python3 test_real.py --ckpt_path_deq ckpt_deq/model.ckpt --ckpt_path_lin ckpt_lin/model.ckpt --ckpt_path_hal ckpt_hal/model.ckpt --test_imgs ./imgs --output_path output_hdrs
  • Run your own images (using the model fine-tuned on both synthetic and real training data):
CUDA_VISIBLE_DEVICES=0 python3 test_real_refinement.py --ckpt_path ckpt_deq_lin_hal_ref/model.ckpt --test_imgs ./imgs --output_path output_hdrs
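
The test scripts write the reconstructed HDR images to the folder given by --output_path. Assuming the results are saved as Radiance .hdr files (an assumption; check the script for the actual format), the sketch below shows one way to tonemap a result for quick visual inspection with OpenCV. The file name output_hdrs/example.hdr is hypothetical.

# Minimal sketch for previewing a reconstructed HDR result (assumed .hdr output).
import cv2
import numpy as np

# Hypothetical output path; adjust to match the actual file written by the script.
hdr = cv2.imread("output_hdrs/example.hdr", cv2.IMREAD_UNCHANGED).astype(np.float32)
tonemap = cv2.createTonemapReinhard(gamma=2.2)   # simple global tonemapper
ldr = tonemap.process(hdr)                       # float image roughly in [0, 1]
cv2.imwrite("example_preview.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))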

Citation

[1] Yu-Lun Liu, Wei-Sheng Lai, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang. Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[2] Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafał Mantiuk, and Jonas Unger. HDR Image Reconstruction from a Single Exposure Using Deep CNNs. ACM Transactions on Graphics (TOG), 2017.

Acknowledgment

Parts of the code in hallucination_net.py are forked from HDRCNN.
