S2Dnet

Specular-to-Diffuse Translation for Multi-View Reconstruction <br/> Shihao Wu<sup> 1</sup>, Hui Huang<sup> 2</sup>, Tiziano Portenier<sup> 1</sup>, Matan Sela<sup> 3</sup>, Daniel Cohen-Or<sup> 4</sup>, Ron Kimmel<sup> 3</sup>, and Matthias Zwicker<sup> 5</sup>    <br/> <sup>1 </sup>University of Bern, <sup>2 </sup>Shenzhen University, <sup>3 </sup>Technion - Israel Institute of Technology, <sup> 4 </sup>Tel Aviv University, <sup>5 </sup>University of Maryland <br/> European Conference on Computer Vision (ECCV), 2018

<p align="center"><img width="80%" src="git_img/teaser.png" /></p> <p align="center"><img width="100%" src="git_img/network.png" /></p> <br/> <br/>

Dependencies

Update 10/April/2019: The code has been updated to PyTorch 0.4. A single-view synthetic dataset (75 GB) is provided; pix2pix or CycleGAN can be trained on it.

To-do list:

Downloading (Dropbox links)

Training example

```shell
python train_multi_view.py --dataroot ../huge_uni_render_rnn --logroot ./logs/job101CP --name job_submit_101C_re1_pixel --model cycle_gan --no_dropout --loadSize 512 --fineSize 512 --patchSize 256 --which_model_netG unet_512_Re1 --which_model_netD patch_512_256_multi_new --lambda_A 10 --lambda_B 10 --lambda_vgg 5 --norm pixel
```
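For reference, the `--lambda_A`, `--lambda_B`, and `--lambda_vgg` flags above weight the cycle-consistency and perceptual (VGG) terms relative to the adversarial loss in the generator objective. A minimal sketch of that weighting is below; the function name and signature are illustrative, not the repo's actual API:

```python
def total_generator_loss(loss_gan, cycle_A, cycle_B, vgg_loss,
                         lambda_A=10.0, lambda_B=10.0, lambda_vgg=5.0):
    """Combine adversarial, cycle-consistency, and perceptual terms.

    The default weights mirror --lambda_A 10 --lambda_B 10 --lambda_vgg 5
    from the training command above. Hypothetical helper for illustration.
    """
    return (loss_gan
            + lambda_A * cycle_A      # forward cycle: A -> B -> A
            + lambda_B * cycle_B      # backward cycle: B -> A -> B
            + lambda_vgg * vgg_loss)  # perceptual (VGG feature) loss

# Example with scalar stand-ins for the individual loss values:
print(total_generator_loss(1.0, 0.2, 0.3, 0.1))  # 1.0 + 2.0 + 3.0 + 0.5 = 6.5
```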

Testing

Please refer to "./useful_scripts/evaluation/".

Scripts for SIFT, SMVS, and rendering are in "./useful_scripts/".

Please contact the author for more information about the code and data.