
IAN (TIP 2022)

Introduction

This repository is the official implementation of Designing An Illumination-Aware Network for Deep Image Relighting. [Paper] [Demos]

Designing An Illumination-Aware Network for Deep Image Relighting

Zuo-Liang Zhu, Zhen Li, Rui-Xun Zhang, Chun-Le Guo, Ming-Ming Cheng

IEEE Transactions on Image Processing, 2022

Data preparation

Datasets

Normal generation on the VIDIT dataset

  • Place the one2one training data into the folders ./data/one2one/train/depth, ./data/one2one/train/input, and ./data/one2one/train/target.
  • Place the any2any training data into the folders ./data/any2any/train/depth (all .npy files) and ./data/any2any/train/input (all RGB images).
  • Place the one2one validation data into the folders ./data/validation/train/depth, ./data/validation/train/input, and ./data/validation/train/target.
  • Run gen_train_data.sh to obtain the full training and validation data (a helper for creating this folder layout is sketched below).
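
For convenience, here is a minimal Python sketch that creates the folder layout listed above before running gen_train_data.sh. The split/subfolder mapping simply mirrors the paths from the list; the helper itself is not part of this repository.

    from pathlib import Path

    # Mirror of the folder layout listed above; this helper is
    # illustrative and not part of the repository.
    SPLITS = {
        "one2one/train": ("depth", "input", "target"),
        "any2any/train": ("depth", "input"),
        "validation/train": ("depth", "input", "target"),
    }

    for split, subdirs in SPLITS.items():
        for sub in subdirs:
            folder = Path("data") / split / sub
            folder.mkdir(parents=True, exist_ok=True)
            print("ready:", folder)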

Quick Demo

  • Create the environment with conda env create -f environment.yml.
  • Download the pretrained model trained on the DPR dataset from the link and place it into the folder 'pretrained'.
  • Run python test.py -opt options/videodemo_opt.yml.
  • Image results will be saved in the folder results.
  • You can then use ffmpeg to generate demo videos, e.g., ffmpeg -f image2 -i [path_to_results] -vcodec libx264 -r 10 demo.mp4 (see the sketch below).
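
If you prefer to script the video assembly, the following sketch wraps the same ffmpeg invocation. The frame-name pattern %04d.png is an assumption about how test.py numbers its outputs; adjust it to the actual filenames in results.

    import subprocess
    from pathlib import Path

    # Wraps the ffmpeg command shown above. The "%04d.png" frame pattern
    # is an assumption -- replace it with the actual naming in ./results.
    frames = Path("results") / "%04d.png"
    subprocess.run(
        ["ffmpeg", "-f", "image2", "-i", str(frames),
         "-vcodec", "libx264", "-r", "10", "demo.mp4"],
        check=True,
    )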

Train

    python train.py -opt [training config]

    Dataset              Guidance              Config
    VIDIT                depth, normal, lpe*   options/train_opt4b.yml
    Multi-Illumination   -                     options/train_adobe_opt.yml
    DPR                  normal, lpe           options/trainany_opt4b.yml
    DPR                  -                     options/trainany_opt4b_woaux.yml

* 'lpe' denotes our proposed linear positional encoding.
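
As a rough sketch of how the table maps onto the training command, the loop below launches one run per config. It is only a convenience wrapper around train.py, not part of the repository.

    import subprocess

    # One training run per row of the table above (train.py and the
    # option files come from this repository; the loop is a wrapper).
    CONFIGS = [
        "options/train_opt4b.yml",           # VIDIT: depth, normal, lpe
        "options/train_adobe_opt.yml",       # Multi-Illumination
        "options/trainany_opt4b.yml",        # DPR: normal, lpe
        "options/trainany_opt4b_woaux.yml",  # DPR: no auxiliary guidance
    ]

    for cfg in CONFIGS:
        subprocess.run(["python", "train.py", "-opt", cfg], check=True)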

Test

    python test.py -opt [testing config]

    Dataset              Guidance              Config                        Pretrained
    VIDIT                depth, normal, lpe    options/valid_opt.yml         pretrained/VIDITOne2One.pth
    Multi-Illumination   -                     options/valid_adobe_opt.yml   pretrained/MutliIllumination.pth
    DPR                  normal, lpe           options/vaild_any_opt.yml     pretrained/PortraitWithNormal.pth
    DPR                  -                     options/vaild_any_opt.yml     pretrained/PortraitWithoutNormal.pth

You can download all pretrained models from Google Drive or BaiduNetDisk (pwd: 5qtp).
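
Before launching test.py, you can sanity-check a downloaded checkpoint with the sketch below. It assumes a BasicSR-style file where weights may be wrapped under a "params" key; that layout is an assumption, not a documented guarantee of this repo.

    import torch

    # Quick sanity check of a downloaded checkpoint. The "params" wrapper
    # is an assumption (common in BasicSR-style checkpoints).
    ckpt = torch.load("pretrained/VIDITOne2One.pth", map_location="cpu")
    state = ckpt.get("params", ckpt) if isinstance(ckpt, dict) else ckpt
    for name, tensor in list(state.items())[:5]:
        print(f"{name}: {tuple(tensor.shape)}")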

Citation

    @article{zhu2022ian,
      author  = {Zuo-Liang Zhu and Zhen Li and Rui-Xun Zhang and Chun-Le Guo and Ming-Ming Cheng},
      title   = {Designing An Illumination-Aware Network for Deep Image Relighting},
      journal = {IEEE Transactions on Image Processing},
      year    = {2022},
      doi     = {10.1109/TIP.2022.3195366}
    }

Acknowledgements

  • This repository is maintained by Zuo-Liang Zhu (nkuzhuzl [AT] gmail.com) and Zhen Li (zhenli1031 [AT] gmail.com).
  • Our code is based on the well-known image restoration toolbox BasicSR.

LICENSE

The code is released under the Creative Commons Attribution-NonCommercial 4.0 International license for non-commercial use only. Any commercial use requires formal permission first.

References

  • AIM 2020: Scene Relighting and Illumination Estimation Challenge [Webpage] [Paper]
  • NTIRE 2021 Depth Guided Image Relighting Challenge [Webpage] [Paper]
  • Deep Single Portrait Image Relighting [Github] [Paper] [Supp]
  • Multi-modal Bifurcated Network for Depth Guided Image Relighting [Github] [Paper]
  • Physically Inspired Dense Fusion Networks for Relighting [Paper]
  • LPIPS [Github] [Paper]

More demos

  • demo_wonormal.mp4
  • demo2_wonormal.mp4
  • demo_wnormal.mp4
  • demo2_wnormal.mp4
