SCPS-NIR

Self-calibrating Photometric Stereo by Neural Inverse Rendering

Junxuan Li and Hongdong Li. ECCV 2022.

Paper

We propose a self-calibrating method for photometric stereo that recovers surface normals by neural inverse rendering, without requiring calibrated lighting.

Keywords: Uncalibrated photometric stereo, inverse rendering, light estimation.

Results on DiLiGenT main dataset

<p align="center"> <img src='assets/cow_combined.gif' height="150"> <img src='assets/reading_combined.gif' height="150"> <img src='assets/harvest_combined.gif' height="150"> </p>

Results on Apple&Gourd

<p align="center"> <img src='assets/apple_combined.gif' height="150"> <img src='assets/gourd1_combined.gif' height="150"> <img src='assets/gourd2_combined.gif' height="150"> </p>

Results on Light Stage Data Gallery

<p align="center"> <img src='assets/helmet_front_left_combined.gif' height="150"> <img src='assets/knight_standing_combined.gif' height="150"> </p>


Dependencies

First, make sure that all dependencies are in place. We use Anaconda to install the dependencies.

To create an anaconda environment called scps_nir, run

```
conda env create -f environment.yml
conda activate scps_nir
```

Quick Test on DiLiGenT main dataset

Our method is tested on the DiLiGenT main dataset.

To reproduce the results in the paper, we have provided pre-computed models for quick testing. Simply run

```
bash configs/download_precomputed_models.sh
bash configs/test_precomputed_models.sh
```

The above scripts create output folders in runs/diligent/. The estimated normal maps are then available in runs/diligent/*/est_normal.png for visualization.
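Estimated normal maps are saved as RGB images, which typically encode each unit-normal component in a color channel. A minimal sketch for reading one back into a NumPy array, assuming the common mapping n = 2·(c/255) − 1 (check the repository's visualization code for the exact convention used):

```python
import numpy as np
from PIL import Image

def decode_normal_map(path):
    """Load an RGB normal map and map pixel values in [0, 255]
    back to unit-normal components in [-1, 1].
    Assumes the common convention n = 2 * (c / 255) - 1."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    normals = rgb / 255.0 * 2.0 - 1.0
    # Re-normalize to guard against 8-bit quantization error.
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)
```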

Train from Scratch

DiLiGenT Datasets

First, download the DiLiGenT main dataset and unzip it to the folder data/DiLiGenT/.

After you have downloaded the data, run

```
python train.py --config configs/diligent/reading.yml
```

to train and test on a single object. Replace configs/diligent/reading.yml with another .yml file to test a different object.

Alternatively, you can run

```
bash configs/train_from_scratch.sh
```

This script trains and tests all 10 objects in the data/DiLiGenT/pmsData/ folder. The output is stored in runs/diligent/.

Gourd&Apple dataset

The Gourd&Apple dataset can be downloaded here. Then, unzip the data to the folder data/Apple_Dataset/.

After you have downloaded the data, please run

```
python train.py --config configs/apple/apple.yml
```

to train and test on a single object. Replace configs/apple/apple.yml with another .yml file to test a different object.

Using Your Own Dataset

If you want to train a model on a new dataset, follow load_diligent.py as a template for writing your own dataloader.
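As a rough sketch of what such a dataloader typically returns, a photometric-stereo scene bundles the image stack, the object mask, and per-image light directions. The file layout, field names, and shapes below are illustrative assumptions, not the repository's actual format; mirror load_diligent.py for what train.py really consumes:

```python
import glob
import os
import numpy as np
from PIL import Image

def load_custom_dataset(data_dir):
    """Hypothetical loader for a photometric-stereo scene.
    Assumed layout (illustrative only):
      data_dir/000.png, 001.png, ...   image stack, one per light
      data_dir/mask.png                object mask
      data_dir/light_directions.txt    one 'x y z' row per image
    """
    image_paths = sorted(glob.glob(os.path.join(data_dir, "[0-9]*.png")))
    images = np.stack([
        np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
        for p in image_paths
    ])                                      # (num_lights, H, W, 3)
    mask = np.asarray(
        Image.open(os.path.join(data_dir, "mask.png")).convert("L")
    ) > 0                                   # (H, W) boolean
    light_dirs = np.loadtxt(
        os.path.join(data_dir, "light_directions.txt")
    ).reshape(-1, 3)                        # (num_lights, 3)
    assert len(light_dirs) == len(images), "one light per image expected"
    return {"images": images, "mask": mask, "light_dirs": light_dirs}
```

Whatever structure you choose, keep the per-image light ordering consistent between the image stack and the light file, since the inverse-rendering loss pairs each observation with its light.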

Acknowledgement

Part of the code is based on the Neural-Reflectance-PS, nerf-pytorch, UPS-GCNet, and SDPS-Net repositories.

Citation

If you find our code or paper useful, please cite as

@inproceedings{li2022selfps,
  title={Self-calibrating Photometric Stereo by Neural Inverse Rendering},
  author={Li, Junxuan and Li, Hongdong},
  booktitle={European Conference on Computer Vision},
  pages={166--183},
  year={2022},
  organization={Springer}
}