# Dyna-DM: Dynamic Object-aware Self-supervised Monocular Depth Maps

<p align="center"> <img src="./misc/arch.png"/> </p>
[PDF]
## Install
The models were trained using CUDA 11.1, Python 3.7.x (conda environment), and PyTorch 1.8.0.
Create a conda environment with the PyTorch library:
```bash
conda create -n my_env python=3.7.4 pytorch=1.8.0 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
conda activate my_env
```
Install the prerequisite packages listed in `requirements.txt`:

```bash
pip3 install -r requirements.txt
```
Also install `torch-scatter` and `torch-sparse`:

```bash
pip3 install torch-scatter==2.0.8 torch-sparse==0.6.12 -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
```
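To confirm the environment is set up correctly, a quick sanity check (an illustrative snippet, not part of the repo) should run without errors and report the versions above:

```python
# sanity_check.py -- illustrative environment check, not part of the repo.
import torch
import torch_scatter  # import succeeds only if the wheel matches torch/CUDA
import torch_sparse

print(torch.__version__)          # expect 1.8.0
print(torch.version.cuda)         # expect 11.1
print(torch.cuda.is_available())  # expect True on a CUDA 11.1 machine
```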
## Datasets
We use the datasets provided by Insta-DM and evaluate the model on the KITTI Eigen split using the raw KITTI dataset.
## Models
Pretrained models for Cityscapes and KITTI+Cityscapes are provided here; the KITTI+Cityscapes model is trained on both datasets and gives the best depth estimates.
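The checkpoints are ordinary PyTorch files, so they can be inspected and loaded as sketched below. The filename `dyna_dm_k_cs.pth` and the `state_dict` key are assumptions for illustration, not the repo's confirmed layout:

```python
# Illustrative only: the filename and checkpoint layout are assumptions.
import torch

ckpt = torch.load('dyna_dm_k_cs.pth', map_location='cpu')  # hypothetical filename
state = ckpt.get('state_dict', ckpt)  # many repos nest weights under 'state_dict'
print(f'{len(state)} tensors, e.g. {next(iter(state))}')
# model.load_state_dict(state); model.eval()  # with the repo's depth network
```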
## Training
The models can be trained on the KITTI dataset by running:
```bash
bash scripts/train_kt.sh
```
The models can also be trained on the Cityscapes dataset by running:

```bash
bash scripts/train_cs.sh
```
The hyperparameters are defined in each script file and set to the defaults stated in the paper.
## Evaluation
We evaluate the models by running:
```bash
bash scripts/run_eigen_test.sh
```
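For reference, the Eigen-split protocol reports the standard depth metrics (Abs Rel, Sq Rel, RMSE, RMSE log, and the δ < 1.25ⁱ accuracies). The sketch below is a generic implementation of these metrics with the usual 80 m depth cap and per-image median scaling common to self-supervised methods; it is not the repo's evaluation code:

```python
import numpy as np

def compute_depth_metrics(gt, pred, max_depth=80.0):
    """Standard KITTI Eigen-split depth metrics (generic reference)."""
    mask = (gt > 0) & (gt < max_depth)       # valid ground-truth points only
    gt, pred = gt[mask], pred[mask]
    pred = pred * (np.median(gt) / np.median(pred))  # median scaling
    pred = np.clip(pred, 1e-3, max_depth)

    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```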
## References
- Insta-DM (AAAI 2021, our baseline framework)
- Struct2Depth (AAAI 2019, object scale loss)
- SC-SfMLearner (NeurIPS 2019)