# Look-Ahead Training with Learned Reflectance Loss for Single-Image SVBRDF Estimation

This is the code for "Look-Ahead Training with Learned Reflectance Loss for Single-Image SVBRDF Estimation". Project | Paper

## Set up environment

To set up the environment, run the command below (tested on Linux):

```
conda env create -f env.yml
```

## Inference

Before running inference, please download:

  1. our pretrained model from this link
  2. the centralized MaterialGAN dataset and our dataset with ground truth: link

Save the downloaded model to `./ckpt/` and extract the data to `./dataset/`.
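For example, the expected layout can be prepared as follows (the archive and checkpoint file names below are placeholders, not the actual download names):

```shell
# Create the expected directories (run from the repository root).
mkdir -p ckpt dataset

# Placeholder names -- substitute the files you actually downloaded:
# mv pretrained_model.pth ./ckpt/
# unzip dataset_with_gt.zip -d ./dataset/
```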

### Inference on MaterialGAN's and our dataset with ground truth

Please use this command:

```
python meta_test.py --fea all_N1 --wN_outer 80 --gamma --cuda --test_img $mode --name $name --val_step 7 --wR_outer 5 --loss_after1 TD --Wfea_vgg 5e-2 --Wdren_outer 10 --WTDren_outer 10 --adjust_light
```

where `$mode` is set to `OurReal2` for our test dataset and `MGReal2` for the MaterialGAN dataset, and `$name` is the output directory. Inside the output directory, `RenLPIPS` and `RenRMSE` store the LPIPS and RMSE values.

The saved images for each scene are:

  - `fea`: final SVBRDF maps
  - `fea0`: SVBRDF maps at step 0
  - `render_#`: rendered images under the 8 test lightings
  - `render_o0`: rendered image under the input lighting
  - `render_t0`: the input image
  - `progressive_img`: the optimization process at steps 0, 1, 2, 5, and 7 (rows 1-5)
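The `RenRMSE` value is a standard root-mean-square error over rendered images; a minimal sketch of that metric (not necessarily the repository's exact implementation, which may average in a different color space) looks like this:

```python
import numpy as np

def render_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between a predicted rendering and its ground truth, both in [0, 1]."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy example: two constant images that differ by 0.1 everywhere.
pred = np.full((4, 4, 3), 0.6)
gt = np.full((4, 4, 3), 0.5)
print(round(render_rmse(pred, gt), 4))  # -> 0.1
```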

### Inference on a real captured dataset without ground truth

Please first centralize the specular highlight of the input image (move it to the image center), then run this command:

```
python meta_test.py --val_root $path --fea all_N1 --wN_outer 80 --gamma --cuda --test_img Real --name $name --val_step 7 --wR_outer 5 --loss_after1 TD --Wfea_vgg 5e-2 --Wdren_outer 10 --WTDren_outer 10 --adjust_light
```

where `$path` points to the directory of real test images and `$name` is the output directory. The final feature maps are saved to `$name/fea`, and the optimization process at steps 0, 1, 2, 5, and 7 is saved to `$name/pro`.
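The centralization step is not automated by this script. One simple way to approximate it is to crop a window around the brightest pixel; the helper below is a hypothetical sketch of that idea, not part of this repository:

```python
import numpy as np

def center_highlight(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a size x size window of `img` (H, W, 3) centered on its brightest pixel.

    The crop is clamped to the image bounds, so the highlight ends up as close
    to the crop center as the borders allow.
    """
    lum = img.mean(axis=2)                        # per-pixel luminance
    y, x = np.unravel_index(np.argmax(lum), lum.shape)
    h, w = lum.shape
    top = min(max(y - size // 2, 0), h - size)    # clamp to image bounds
    left = min(max(x - size // 2, 0), w - size)
    return img[top:top + size, left:left + size]

# Toy example: a dark image with one bright spot at (10, 20).
img = np.zeros((32, 32, 3))
img[10, 20] = 1.0
crop = center_highlight(img, 8)
print(crop.shape)     # -> (8, 8, 3)
print(crop[4, 4, 0])  # the bright pixel now sits at the crop center -> 1.0
```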

## Our Dataset

We also provide a higher-resolution version of our uncentralized real scenes: link.

## Citation

If you find this work useful for your research, please cite:

```
@article{zhou2022look,
  title={Look-Ahead Training with Learned Reflectance Loss for Single-Image SVBRDF Estimation},
  author={Zhou, Xilong and Kalantari, Nima Khademi},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={6},
  pages={1--12},
  year={2022},
  publisher={ACM New York, NY, USA}
}
```

## Contact

This code is not yet a cleaned-up version; we will clean it up soon. Feel free to email me if you have any questions: 1992zhouxilong@gmail.com. Thanks for your understanding!
