DeepShadow Shadow Extraction Model

This repository is the code implementation of the shadow and light extraction model from the supplementary material of the ECCV 2022 paper "DeepShadow: Neural Shape from Shadows".

An overview of our shadow and light extraction architecture is shown below:

<img src="figures/shadow_transformer.png" style="background-color: white">

Requirements

Our code was tested with Python 3.7/3.8 on Ubuntu 18.04, on both GPU and CPU.

Dataset Used for Training

Skip this section if you don't plan to train the model.

Download the Blobby and Sculptures datasets by Chen et al. (available from https://github.com/guanyingc/SDPS-Net). Our Torch data-loading code is also adapted from that repository.

Download our Photometric Stereo Shadow dataset (coming soon!)
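All three datasets should end up under a single folder, which is later passed to training via `--base_dir`. A possible layout (the subfolder names below are illustrative, not prescribed by the repository; match them to the archives you actually download):

```text
base_dir/
├── blobby/        # Blobby dataset (Chen et al.)
├── sculptures/    # Sculptures dataset (Chen et al.)
└── ps_shadow/     # our photometric stereo shadow data
```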

Run Inference using the model

  1. Clone the repository
git clone https://github.com/asafkar/ps_shadow_extract.git
cd ps_shadow_extract/
  2. Download the model checkpoint
# get the checkpoint from the git lfs
git lfs install
git lfs pull
  3. Install the requirements
pip install -r requirements.txt
  4. Use the pretrained model to estimate shadows and light directions

Refer to run_model_example.ipynb.
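If you prefer a script over the notebook, a minimal sketch of loading the LFS checkpoint into a model could look like the following. The helper name, checkpoint structure, and the `module.` prefix handling are all assumptions, not the repository's actual API; the real model class and checkpoint path are shown in run_model_example.ipynb.

```python
import torch

def load_checkpoint(model, ckpt_path, device="cpu"):
    """Load a checkpoint into `model`, tolerating a 'module.' prefix
    left over from DataParallel/DistributedDataParallel training.
    (Hypothetical helper -- not part of the repository's API.)"""
    state = torch.load(ckpt_path, map_location=device)
    # checkpoints are often saved as {"state_dict": ...}; fall back to a raw dict
    if isinstance(state, dict) and "state_dict" in state:
        state_dict = state["state_dict"]
    else:
        state_dict = state
    cleaned = {}
    for k, v in state_dict.items():
        cleaned[k[len("module."):] if k.startswith("module.") else k] = v
    model.load_state_dict(cleaned)
    model.eval()  # inference mode: disables dropout, uses running BN stats
    return model
```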

Train the model from scratch

  1. Download and unzip the data, placing all three datasets in the same folder. Point training at this folder with the --base_dir argument.

  2. Train the model

CUDA_VISIBLE_DEVICES=<gpus> python -m torch.distributed.run --nproc_per_node=<num_gpus> train.py --base_dir=<dir>
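torch.distributed.run launches one worker process per GPU and exposes each worker's index through the LOCAL_RANK environment variable. A typical device-selection pattern inside a training script looks like this (a sketch of the launcher's convention, not necessarily what train.py does verbatim):

```python
import os
import torch

def setup_device():
    """Select this worker's device under torch.distributed.run.
    The launcher sets LOCAL_RANK per spawned process; default to 0
    so the same code also runs single-process on CPU or one GPU."""
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)
        return torch.device("cuda", local_rank)
    return torch.device("cpu")
```

For example, `CUDA_VISIBLE_DEVICES=0,1` with `--nproc_per_node=2` spawns two workers that see LOCAL_RANK values 0 and 1.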

Citation

If you use the model or dataset in your own research, please cite:

@inproceedings{karnieli2022deepshadow,
	title={DeepShadow: Neural Shape from Shadows},
	author={Karnieli, Asaf and Fried, Ohad and Hel-Or, Yacov},
	year={2022},
	booktitle={ECCV},
}