
Fast Light-Weight Near-Field Photometric Stereo, CVPR 2022.

Daniel Lichy, Soumyadip Sengupta, David Jacobs

<p> <img src="./media/teaser_v3.png" alt=""> </p>

Overview

This is the official code release for the paper Fast Light-Weight Near-Field Photometric Stereo.

We provide:

Dependencies

This project uses the following dependencies:

The easiest way to run the code is to create a virtual environment and install the dependencies with pip, e.g.:

# Create a new python3.8 environment named fastnfps
python3 -m venv fastnfps

# Activate the created environment
source fastnfps/bin/activate

# Upgrade pip
pip install --upgrade pip

# Install the dependencies
python -m pip install -r requirements.txt
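As a quick sanity check before running any of the scripts, you can confirm that the interpreter inside the activated environment matches the Python 3.8 noted in the comment above. This small sketch is not part of the repo, just a convenience:

```python
import sys

# The venv above is named for Python 3.8; fail fast if the active
# interpreter inside it is older than that.
version = sys.version.split()[0]
assert sys.version_info >= (3, 8), f"Expected Python >= 3.8, got {version}"
print("Python", version, "OK")
```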


Test on the LUCES dataset

Download the LUCES dataset from https://www.toshiba.eu/pages/eu/Cambridge-Research-Laboratory/rm/Luces_dataset.zip and unzip it.

Then run:


# To test on LUCES using the ground-truth light calibration
python eval_luces.py <output_dir> --gpu --checkpoint pretrained_weights/cvpr2022.pth --luces_dataset_root <path to luces data>/data

# To test on LUCES using the lighting calibration network instead of the
# ground-truth calibration, add --uncalibrated and --calib_net_checkpoint
python eval_luces.py <output_dir> --gpu --checkpoint pretrained_weights/cvpr2022.pth --luces_dataset_root <path to luces data>/data --uncalibrated --calib_net_checkpoint pretrained_weights/cal_cvpr2022.pth

Test on our dataset

Download our dataset from https://drive.google.com/file/d/1_VoPueYtShclhTAu-zxVts18P5R7LaCl/view?usp=sharing and unzip it.

Then run:


python eval_standard.py <output_dir> --gpu --checkpoint pretrained_weights/cvpr2022.pth --uncalibrated --calib_net_checkpoint pretrained_weights/cal_cvpr2022.pth --dataset_root <path to our dataset>

Test on your own uncalibrated dataset

The easiest way to test on your own dataset is to format it similarly to our dataset:

dataset_dir:

For an example of how to format your own dataset, please look at our dataset.

Then run:

python eval_standard.py <output_dir> --gpu --checkpoint pretrained_weights/cvpr2022.pth --uncalibrated --calib_net_checkpoint pretrained_weights/cal_cvpr2022.pth --dataset_root <path to your dataset>

Training

Download our synthetic data from https://drive.google.com/file/d/1ofQrSup0BrZKs456SuMZW84yBbIP1jrq/view?usp=sharing and unzip it. Download the MERL BRDF dataset from https://cdfg.csail.mit.edu/wojciech/brdfdatabase.

To train the main network from scratch run:

python train.py <log_dir>  --gpu --syn_dataset_root <path to our synthetic dataset> --merl_path <path to merl dataset> --batch_size 8 --num_train_lights 10

To train the calibration network from scratch run:

python train_calibration_net.py <log_dir> --gpu --syn_dataset_root <path to our synthetic dataset> --merl_path <path to merl dataset> --batch_size 16 --num_train_lights 10

FAQ

Q1: What should I do if I have problems running your code?

Citation

If you find this code or the provided models useful in your research, please cite it as:

@inproceedings{lichy_2022,
  title={Fast Light-Weight Near-Field Photometric Stereo},
  author={Lichy, Daniel and Sengupta, Soumyadip and Jacobs, David W.},
  booktitle={CVPR},
  year={2022}
}

Acknowledgement