DEReD (Depth Estimation via Reconstructing Defocus Image)

Project Website arXiv

Official code of the CVPR 2023 paper "Fully Self-Supervised Depth Estimation from Defocus Clue".
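For background: depth-from-defocus methods like this one exploit the fact that, under a thin-lens camera model, the blur (circle of confusion) of a scene point grows with its distance from the focal plane. A minimal, self-contained sketch of that relation follows; it is illustrative only, with hypothetical parameter values, and is not the repository's implementation.

```python
def coc_radius(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion radius on the sensor for a point at
    `depth`, with the lens focused at `focus_dist` (all in the same units).

    CoC = |A * f * (d - d_f)| / (d * (d_f - f))
    """
    return abs(aperture * focal_len * (depth - focus_dist)) / (
        depth * (focus_dist - focal_len)
    )

# A point exactly at the focus distance is sharp (zero blur radius) ...
print(coc_radius(depth=2.0, focus_dist=2.0, focal_len=0.05, aperture=0.01))  # 0.0
# ... while points away from the focal plane are increasingly blurred.
print(coc_radius(depth=4.0, focus_dist=2.0, focal_len=0.05, aperture=0.01) > 0.0)  # True
```

Reconstructing a focal stack from a predicted all-in-focus image and depth map, then comparing it with the observed stack, is what lets training proceed without ground-truth depth.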

Preparation

Environment

Create a new environment and install the dependencies listed in requirements.txt:

conda create -n dered

conda activate dered

conda install --file requirements.txt

python gauss_psf/setup.py install

Data

The generation code for the NYUv2 Focal Stack dataset is provided.

The generation code for the DefocusNet dataset can be found here.

Weights

You can download the model weights trained on the NYUv2 Focal Stack dataset from here.

Usage

Train

python scripts/train.py --data_path [path/to/dataset] --dataset [Dataset] --recon_all \
-N [experiment_name] --use_cuda -E 1000 --BS 32 --save_checkpoint --save_best --save_last \
--sm_loss_beta 2.5 --verbose --recon_loss_lambda 1e3 --aif_blur_loss_lambda 10 \
--blur_loss_lambda 1e1 --sm_loss_lambda 1e1 --log --vis
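The `*_lambda` flags weight the individual loss terms of the self-supervised objective. As an illustration of how such weights plausibly combine into a total loss (the actual loss definitions live in `scripts/train.py` and are not reproduced here; the dictionary keys below are hypothetical):

```python
# Illustrative sketch only: combine per-term losses with the CLI weights.
# Defaults mirror the example command above.
def total_loss(losses, recon_loss_lambda=1e3, aif_blur_loss_lambda=10,
               blur_loss_lambda=1e1, sm_loss_lambda=1e1):
    """Weighted sum of reconstruction, AIF-blur, blur, and smoothness terms."""
    return (recon_loss_lambda * losses["recon"]
            + aif_blur_loss_lambda * losses["aif_blur"]
            + blur_loss_lambda * losses["blur"]
            + sm_loss_lambda * losses["sm"])

# Example: 1e3*0.001 + 10*0.01 + 10*0.02 + 10*0.05 = 1.8
print(total_loss({"recon": 0.001, "aif_blur": 0.01, "blur": 0.02, "sm": 0.05}))  # 1.8
```

Raising a lambda makes the optimizer prioritize that term; the large default on the reconstruction term reflects that reconstructing the defocus images is the primary supervision signal.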

Evaluation

python scripts/train.py --data_path [path/to/dataset] --dataset [Dataset] --recon_all \
-N [experiment_name] --use_cuda --BS 32 --save_best --verbose --eval

Acknowledgement

Parts of the code are adapted from DefocusNet and UnsupervisedDepthFromFocus.

Citation

@article{si2023fully,
  title={Fully Self-Supervised Depth Estimation from Defocus Clue},
  author={Si, Haozhe and Zhao, Bin and Wang, Dong and Gao, Yupeng and Chen, Mulin and Wang, Zhigang and Li, Xuelong},
  journal={arXiv preprint arXiv:2303.10752},
  year={2023}
}

Contact Authors

Haozhe Si, Bin Zhao, Dong Wang, Xuelong Li