PS-FCN

PS-FCN: A Flexible Learning Framework for Photometric Stereo, ECCV 2018, <br> Guanying Chen, Kai Han, Kwan-Yee K. Wong <br>

This paper addresses the problem of learning-based photometric stereo for non-Lambertian surfaces. <br>
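For context, classical photometric stereo assumes a Lambertian reflectance model, I = albedo * max(0, n·l), and solves for the surface normal n from observations under multiple light directions; PS-FCN instead learns this mapping, which lets it cope with non-Lambertian effects. A minimal NumPy sketch of the Lambertian assumption (illustrative only, not repository code):

```python
import numpy as np

def lambertian_intensity(normal, light, albedo=1.0):
    """Lambertian shading: I = albedo * max(0, n . l)."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(n @ l))

# A frontal normal lit head-on gives the full albedo; a grazing
# light direction gives zero.
print(lambertian_intensity([0, 0, 1], [0, 0, 1]))  # 1.0
print(lambertian_intensity([0, 0, 1], [1, 0, 0]))  # 0.0
```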

<p align="center"> <img src='images/ECCV2018_PS-FCN.png' width="800" > </p>
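As the figure shows, PS-FCN extracts a feature map from each image-light input with a shared-weight extractor and fuses the per-image features with an element-wise max-pooling operation, which makes the network agnostic to the number and order of input images. A toy NumPy sketch of this fusion step (shapes and names are illustrative, not from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the shared-weight extractor produced one feature map per input:
# feats has shape (num_images, channels, H, W).
feats = rng.standard_normal((8, 16, 32, 32))

# Element-wise max over the image axis fuses an arbitrary, unordered
# number of inputs into a single fixed-size representation.
fused = feats.max(axis=0)  # shape (16, 32, 32)

# Permuting the input images leaves the fused features unchanged.
fused_perm = feats[rng.permutation(8)].max(axis=0)
assert np.array_equal(fused, fused_perm)
```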

Changelog

Dependencies

PS-FCN is implemented in PyTorch and tested on Ubuntu 14.04. Please install PyTorch first by following the official instructions.

Overview

We provide:

Testing

Download the trained models

sh scripts/download_pretrained_models.sh
# You can find the downloaded model in ./data/models/

If the above command does not work, please manually download the trained models from BaiduYun (PS-FCN and UPS-FCN) and put them in ./data/models/. Note that the checkpoint names end with '.tar', but there is no need to untar them.

Test on the DiLiGenT main dataset

# Download DiLiGenT main dataset
sh scripts/prepare_diligent_dataset.sh

# Test PS-FCN on DiLiGenT main dataset using all of the 96 image-light pairs
CUDA_VISIBLE_DEVICES=0 python eval/run_model.py --retrain data/models/PS-FCN_B_S_32.pth.tar --in_img_num 96
# You can find the results in data/Training/run_model/

# Test UPS-FCN on DiLiGenT main dataset using only images as input (no light directions)
CUDA_VISIBLE_DEVICES=0 python eval/run_model.py --retrain data/models/UPS-FCN_B_S_32.pth.tar --in_img_num 96 --in_light
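Results on DiLiGenT are typically reported as the mean angular error (MAE, in degrees) between the predicted and ground-truth normal maps. A self-contained sketch of this metric (illustrative; see eval/ for the repository's own evaluation code):

```python
import numpy as np

def mean_angular_error(pred, gt):
    """Mean angular error in degrees between two (H, W, 3) normal maps."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

# Identical maps -> 0 degrees; orthogonal normals -> 90 degrees.
a = np.zeros((2, 2, 3)); a[..., 2] = 1.0
b = np.zeros((2, 2, 3)); b[..., 0] = 1.0
print(mean_angular_error(a, a))  # 0.0
print(mean_angular_error(a, b))  # 90.0
```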

Training

To train a new PS-FCN model, please follow these steps:

Download the training data

# The total size of the zipped synthetic datasets is 4.7+19=23.7 GB,
# and it takes some time to download and unzip them.
sh scripts/download_synthetic_datasets.sh

If the above command does not work, please manually download the training datasets from BaiduYun (PS Sculpture Dataset and PS Blobby Dataset) and put them in ./data/datasets/.

Train PS-FCN and UPS-FCN

# Train PS-FCN on both synthetic datasets using 32 image-light pairs
CUDA_VISIBLE_DEVICES=0 python main.py --concat_data --in_img_num 32

# Train UPS-FCN on both synthetic datasets using 32 images
CUDA_VISIBLE_DEVICES=0 python main.py --concat_data --in_img_num 32 --in_light --item uncalib

# Please refer to options/base_opt.py and options/train_opt.py for more options

# You can find checkpoints and results in data/Training/

Data Normalization for Handling SVBRDFs (TPAMI)

Download the trained models

sh scripts/download_pretrained_TPAMI_models.sh
# You can find the downloaded model in ./data/models/

If the above command does not work, please manually download the trained model from BaiduYun (PS-FCN_normalize) and put it in ./data/models/.

Test on the DiLiGenT main dataset

CUDA_VISIBLE_DEVICES=0 python eval/run_model.py --retrain data/models/PS-FCN_B_S_32_normalize.pth.tar --in_img_num 96 --normalize --train_img_num 32
# You can find the results in data/Training/run_model

Training

CUDA_VISIBLE_DEVICES=0 python main.py --concat_data --in_img_num 32 --normalize --item normalize
# You can find checkpoints and results in data/Training/normalize
<br> <p align="center"> <img src='images/PS-FCN_normalize.jpg' width="700" > </p>
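The data normalization strategy of the TPAMI paper rescales each pixel's observations across all input images so that spatially-varying albedo cancels out before the network sees the data. A hedged NumPy sketch of per-pixel L2 normalization across the image dimension (an illustration of the idea; check the repository for the exact operation used):

```python
import numpy as np

def normalize_observations(obs, eps=1e-8):
    """obs: (num_images, H, W) intensities of the same pixel grid under
    different lights. Dividing by the per-pixel L2 norm across images
    cancels a spatially varying (Lambertian) albedo."""
    norm = np.sqrt((obs ** 2).sum(axis=0, keepdims=True))
    return obs / (norm + eps)

# Two image stacks of the same shape that differ only by a per-pixel
# albedo become identical after normalization.
shading = np.random.rand(16, 8, 8)        # n . l terms under 16 lights
albedo = np.random.rand(1, 8, 8) + 0.5    # spatially varying albedo
assert np.allclose(normalize_observations(shading),
                   normalize_observations(shading * albedo))
```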

FAQ

Q1: How can I test PS-FCN on other datasets?

Q2: Which eight sculpture shapes were used in rendering the training datasets? Why?

<p align="center"> <img src='images/sculpture_normal.png' width="600" > </p>

Q3: What should I do if I have problems running your code?

Q4: Where can I download the Gourd&Apple dataset and Light Stage Data Gallery used in the paper?

Citation

If you find this code or the provided data useful in your research, please consider citing:

@inproceedings{chen2018ps,
  title={{PS-FCN}: A Flexible Learning Framework for Photometric Stereo},
  author={Chen, Guanying and Han, Kai and Wong, Kwan-Yee K.},
  booktitle={ECCV},
  year={2018},
}
@article{chen2020deepps,
  title={Deep Photometric Stereo for Non-{Lambertian} Surfaces},
  author={Chen, Guanying and Han, Kai and Shi, Boxin and Matsushita, Yasuyuki and Wong, Kwan-Yee~K.},
  journal={TPAMI},
  year={2020},
}