CVC-Color/DeepIntrinsicRelighting

Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture

Yixiong Yang, Hassan Ahmed Sial, Ramon Baldrich, Maria Vanrell

arXiv

Description

This repo contains the official code, data, and video results for the paper (published in IEEE TMM '25).

[Video: save_best_19985_0556_225_35_255_255_255_70_315_1_5_Image_input.mp4]

Setup

The code was tested with PyTorch 1.10.1, but this is not a strict requirement; other versions should also work.

conda create --name DIR python=3.7
conda activate DIR
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install -r requirements.txt

Datasets

ISR: Intrinsic Scene Relighting Dataset

Downloads: Reflectance | Shading | Image

You can download only the reflectance and shading; the image can be computed from them in the code, since the image is the pixel-wise product of reflectance and shading.
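As a minimal sketch of that reconstruction (the file names here are hypothetical, and the repo performs this step internally):

# Minimal sketch (not the repo's code): reconstruct the image as the
# pixel-wise product of reflectance and shading, I = R * S.
# The file names "reflectance.png" / "shading.png" are hypothetical.
import numpy as np
from PIL import Image

reflectance = np.asarray(Image.open("reflectance.png"), dtype=np.float32) / 255.0
shading = np.asarray(Image.open("shading.png"), dtype=np.float32) / 255.0
if shading.ndim == 2:            # a grayscale shading map broadcasts over RGB
    shading = shading[..., None]

image = np.clip(reflectance * shading, 0.0, 1.0)
Image.fromarray((image * 255.0).round().astype(np.uint8)).save("image.png")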

Note: Image filenames follow the pattern {index of scene (part 1)}_{index of scene (part 2)}_{pan}_{tilt}_{light temperature}_{unused}. The first two fields identify the scene; pan, tilt, and light temperature are the lighting parameters.
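For example, a hypothetical helper (not part of the repo) that splits a filename stem into these fields; the example stem is made up:

# Hypothetical helper: parse an ISR filename stem into the fields above.
def parse_isr_name(stem):
    scene1, scene2, pan, tilt, temperature, _unused = stem.split("_")
    return {
        "scene_id": (scene1, scene2),
        "pan": int(pan),
        "tilt": int(tilt),
        "light_temperature": int(temperature),
    }

print(parse_isr_name("0001_0002_225_35_4500_0"))  # made-up example stem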

RSR: Real Scene Relighting Dataset

Download

The RSR dataset was captured in our lab environment. The download above is at 256×256 resolution, as used in our paper. If you need the high-resolution or raw images, please let us know.

Update (Nov 23, 2024): the dataset at its original resolution is now available: Download.

Note:

  1. The name of the picture follows the pattern {index of picture}_{index of group (different scene or different view)}_{pan}_{tilt}_{R}_{G}_{B}_{index of scene}_{unused}_{index of view}_{index of light position}. The fields that matter are pan, tilt, and the color (R, G, B), which are the light parameters; a parsing sketch follows this list.
  2. The lights are arranged in the following 3×3 grid:
5 4 3
6 1 2
7 8 9
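A hypothetical helper (not part of the repo) for splitting an RSR filename stem into the eleven fields described above:

# Hypothetical helper: parse an RSR filename stem into its 11 fields.
def parse_rsr_name(stem):
    (picture, group, pan, tilt, r, g, b,
     scene, _unused, view, light_position) = stem.split("_")
    return {
        "picture": int(picture),
        "group": int(group),
        "pan": int(pan),
        "tilt": int(tilt),
        "light_rgb": (int(r), int(g), int(b)),
        "scene": int(scene),
        "view": int(view),
        "light_position": int(light_position),  # index into the 3x3 grid above
    }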

Train

The dataset paths can be modified via self.server_root in options/base_options.py.
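For illustration only (the actual contents of options/base_options.py may differ), the edit amounts to something like:

# Illustrative sketch -- the real attribute lives in options/base_options.py
# and the surrounding class may look different. The idea is to point
# self.server_root at the directory holding the downloaded datasets.
class BaseOptions:
    def __init__(self):
        self.server_root = "/data/datasets"  # edit to your local dataset root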

Train from scratch on the ISR dataset:

python -m torch.distributed.launch --nproc_per_node=1 --master_port 7777 train.py isr   # For ISR dataset

Continue training on other datasets (place the pre-trained model in checkpoints/{exp} with the name base_{}):

python -m torch.distributed.launch --nproc_per_node=1 --master_port 7777 train.py rsr_ours_f   # For RSR dataset
python -m torch.distributed.launch --nproc_per_node=1 --master_port 7777 train.py vidit_ours_f   # For VIDIT dataset
python -m torch.distributed.launch --nproc_per_node=1 --master_port 7777 train.py multilum_ours_f   # For Multi-illumination dataset

Note:

  1. When using the VIDIT dataset, place all images into a single folder named VIDIT_full to create the complete version.
  2. When using the Multi-illumination dataset, crop and resize the images to 256×256; the code for this step is in ./data/multi_illumination/crop_resize.py (a sketch follows this list).
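A minimal sketch of that preprocessing, assuming a center crop to square followed by a bicubic resize (the official crop_resize.py script may do this differently):

# Minimal sketch of the 256x256 preprocessing; the official script is
# ./data/multi_illumination/crop_resize.py. The center-crop strategy here
# is an assumption, not necessarily what that script does.
from PIL import Image

def center_crop_resize(path_in, path_out, size=256):
    img = Image.open(path_in)
    w, h = img.size
    side = min(w, h)                         # largest centered square
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((size, size), Image.BICUBIC).save(path_out)

center_crop_resize("input.jpg", "input_256.jpg")  # hypothetical file names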

Test

You can download the checkpoints from the Checkpoints link and run the quantitative or qualitative tests:

python test_quantitative.py --name {exp}
python test_qualitative.py --name {exp}

The name can be 'exp_isr', 'exp_rsr_ours_f', 'exp_vidit_ours_f', or 'exp_multilum_ours_f'.

The video results can be generated by:

python test_qualitative_animation.py --name exp_isr

More video results

[Video: save_best_201380_003_71_30_4100_158_00_Image_input_cycle_tilt.mp4]
[Video: save_best_202102_008_221_35_3200_108_00_Image_input_cycle_pan.mp4]

Citation

If you find this repository helpful for your project, please cite it as follows :)

@article{yang2024relighting,
  title={Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture},
  author={Yang, Yixiong and Sial, Hassan Ahmed and Baldrich, Ramon and Vanrell, Maria},
  journal={arXiv preprint arXiv:2409.18770},
  year={2024}
}

Acknowledgments

Some code is borrowed from pix2pix. Thanks for their great work.
