S2Dnet

Specular-to-Diffuse Translation for Multi-View Reconstruction
Shihao Wu 1, Hui Huang 2, Tiziano Portenier 1, Matan Sela 3, Daniel Cohen-Or 4, Ron Kimmel 3, and Matthias Zwicker 5    
1 University of Bern, 2 Shenzhen University, 3 Technion - Israel Institute of Technology, 4 Tel Aviv University, 5 University of Maryland
European Conference on Computer Vision (ECCV), 2018



Dependencies

Update 10/April/2019: The code has been updated to PyTorch 0.4. A single-view synthetic dataset (75 GB) is provided; one can train pix2pix or CycleGAN on it.
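
A minimal sketch of such a run, assuming the standard train.py entry point of the pix2pix/CycleGAN codebase; the dataset path and job name below are placeholders, and the flags mirror the multi-view command further down:

$ # hypothetical dataset path and experiment name; adjust to your local copy of the single-view data
$ python train.py --dataroot ../single_view_render --name single_view_cyclegan --model cycle_gan --no_dropout --loadSize 512 --fineSize 512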

To-do list:

  • Implement a single-view translation network (with multi-scale discriminator, re-convolution, and pixel normalization) and provide a testing script.

Downloading (Dropbox links)

Training example

$ python train_multi_view.py --dataroot ../huge_uni_render_rnn --logroot ./logs/job101CP --name job_submit_101C_re1_pixel --model cycle_gan --no_dropout --loadSize 512 --fineSize 512 --patchSize 256 --which_model_netG unet_512_Re1 --which_model_netD patch_512_256_multi_new --lambda_A 10 --lambda_B 10 --lambda_vgg 5 --norm pixel
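
For reference: --lambda_A and --lambda_B are the standard CycleGAN cycle-consistency loss weights; based on the to-do list above, --lambda_vgg presumably weights a VGG perceptual loss, --norm pixel selects pixel normalization, and --which_model_netD patch_512_256_multi_new selects the multi-scale patch discriminator. --loadSize and --fineSize set the image resolution, and --patchSize the discriminator patch size.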

Testing

Please refer to "./useful_scripts/evaluation/" for the evaluation scripts.

Scripts for SIFT, SMVS, and rendering are in "./useful_scripts/".

Please contact the author for more information about the code and data.