SC-Depth

Please refer to our new implementation of SC-Depth (V1, V2, and V3) at https://github.com/JiawangBian/sc_depth_pl

This codebase implements SC-DepthV1, described in the paper:

Unsupervised Scale-consistent Depth Learning from Video

Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Zhichao Li, Le Zhang, Chunhua Shen, Ming-Ming Cheng, Ian Reid

IJCV 2021 [PDF]

This is an extended version of our NeurIPS 2019 paper [PDF] [Project webpage]

Point cloud visualization on KITTI (left) and real-world data (right)

<img src="https://jwbian.net/wp-content/uploads/2020/06/77CXZX@H37PIWDBX0R7T.png" height="300"> <img src="https://jwbian.net/wp-content/uploads/2020/06/UFIEB960XK6V82H2UN6P25.png" height="300">

Dense voxel reconstruction (left) using the estimated depth (bottom right)

reconstruction demo

Contributions

  1. A geometry consistency loss, which makes the predicted depths globally scale-consistent (see the sketch after this list).
  2. A self-discovered mask, which detects moving objects and occlusions to boost accuracy.
  3. Scale-consistent predictions, which can be used in a monocular visual SLAM system.
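The first two contributions fit in a few lines of PyTorch. Below is a minimal sketch, assuming `computed_depth` and `sampled_depth` come from a differentiable warping step (not shown): one frame's predicted depth is projected into the other view and compared against the other frame's depth sampled at the projected coordinates.

```python
import torch

def geometry_consistency(computed_depth, sampled_depth):
    # computed_depth: depth of the source view obtained by transforming the
    #   target view's predicted depth with the predicted relative pose.
    # sampled_depth: the source view's predicted depth, bilinearly sampled
    #   at the projected pixel locations. Both are [B, 1, H, W] tensors.
    # Normalized depth inconsistency, bounded in [0, 1].
    diff = (computed_depth - sampled_depth).abs() / (computed_depth + sampled_depth)
    geometry_loss = diff.mean()   # penalizes scale drift between adjacent frames
    weight_mask = 1.0 - diff      # self-discovered mask
    return geometry_loss, weight_mask
```

The mask weights the photometric loss, so pixels that violate geometric consistency (typically moving objects and occlusions) contribute less to training.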

If you find our work useful in your research, please consider citing our paper:

@article{bian2021ijcv, 
  title={Unsupervised Scale-consistent Depth Learning from Video}, 
  author={Bian, Jia-Wang and Zhan, Huangying and Wang, Naiyan and Li, Zhichao and Zhang, Le and Shen, Chunhua and Cheng, Ming-Ming and Reid, Ian}, 
  journal= {International Journal of Computer Vision (IJCV)}, 
  year={2021} 
}

Updates (Compared with NeurIPS version)

Note that this is an improved version; you can find the NeurIPS version in 'Release / NeurIPS Version' for reproducing the results reported in the paper. Compared with the NeurIPS version, we (1) change the networks, using ResNet18 and ResNet50 models pretrained on ImageNet as the depth and pose encoders; (2) add the 'auto_mask' from Monodepth2 to remove stationary points (sketched below); (3) integrate the depth and pose predictions into the ORB-SLAM system; (4) add training and testing on the NYUv2 indoor dataset. See Unsupervised-Indoor-Depth for details.
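For reference, the 'auto_mask' idea from Monodepth2 can be sketched as follows; the variable names are illustrative, not this repo's actual API. A pixel is kept only if warping the source image with the predicted depth and pose lowers the photometric error compared with the unwarped source, which filters out pixels that are stationary relative to the camera.

```python
import torch

def auto_mask(loss_warped, loss_identity):
    # loss_warped: per-pixel photometric error between the target image and
    #   the source image warped with the predicted depth and pose.
    # loss_identity: per-pixel error between the target and the *unwarped*
    #   source image.
    # Keep only pixels where warping actually helps; static scenes and
    # objects moving with the camera are masked out of the loss.
    return (loss_warped < loss_identity).float()
```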

Preamble

This codebase was developed and tested with Python 3.6, PyTorch 1.0.1, and CUDA 10.0 on Ubuntu 16.04. It is based on Clement Pinard's SfMLearner implementation.

Prerequisite

pip3 install -r requirements.txt

Datasets

See "scripts/run_prepare_data.sh".

For the KITTI Raw dataset, download the data using the script provided on the official website (http://www.cvlibs.net/download.php?file=raw_data_downloader.zip).

For KITTI Odometry dataset, download the dataset with color images.

Or you can download our pre-processed datasets from the following links:

kitti_256 (for kitti raw) | kitti_vo_256 (for kitti odom) | kitti_depth_test (eigen split) | kitti_vo_test (seqs 09-10)

Training

The "scripts" folder provides several examples for training and testing.

You can train the depth model on KITTI Raw by running

sh scripts/train_resnet18_depth_256.sh

or train the pose model on KITTI Odometry by running

sh scripts/train_resnet50_pose_256.sh

Then you can start a tensorboard session in this folder by

tensorboard --logdir=checkpoints/

and visualize the training progress by opening http://localhost:6006 in your browser.

Evaluation

You can evaluate depth on Eigen's split by running

sh scripts/test_kitti_depth.sh

evaluate visual odometry by running

sh scripts/test_kitti_vo.sh

and visualize depth by running

sh scripts/run_inference.sh
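If you prefer to call the depth network from your own script, the sketch below shows the general pattern. The class name `DispResNet`, the checkpoint path, and the normalization constants are assumptions based on this codebase; check `run_inference.py` for the exact details.

```python
import torch
from imageio import imread
from models import DispResNet  # assumed location of the depth network in this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical checkpoint path; substitute your own trained weights.
model = DispResNet(18, False).to(device)
weights = torch.load('checkpoints/dispnet_model_best.pth.tar', map_location=device)
model.load_state_dict(weights['state_dict'])
model.eval()

img = imread('example.jpg').astype('float32')
tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0).to(device)
tensor = (tensor / 255 - 0.45) / 0.225  # normalization assumed to match training

with torch.no_grad():
    disp = model(tensor)   # network outputs disparity
    depth = 1.0 / disp     # invert to get depth (up to scale)
```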

Pretrained Models

Latest Models

To evaluate the NeurIPS models, please download the code from 'Release/NeurIPS version'.

Depth Results

KITTI raw dataset (Eigen's splits)

| Models | Abs Rel | Sq Rel | RMSE | RMSE(log) | Acc.1 | Acc.2 | Acc.3 |
|---|---|---|---|---|---|---|---|
| resnet18 | 0.119 | 0.857 | 4.950 | 0.197 | 0.863 | 0.957 | 0.981 |
| resnet50 | 0.114 | 0.813 | 4.706 | 0.191 | 0.873 | 0.960 | 0.982 |

NYUv2 dataset (Original Video)

| Models | Abs Rel | Log10 | RMSE | Acc.1 | Acc.2 | Acc.3 |
|---|---|---|---|---|---|---|
| resnet18 | 0.159 | 0.068 | 0.608 | 0.772 | 0.939 | 0.982 |
| resnet50 | 0.157 | 0.067 | 0.593 | 0.780 | 0.940 | 0.984 |

NYUv2 dataset (Rectified Images by Unsupervised-Indoor-Depth)

| Models | Abs Rel | Log10 | RMSE | Acc.1 | Acc.2 | Acc.3 |
|---|---|---|---|---|---|---|
| resnet18 | 0.143 | 0.060 | 0.538 | 0.812 | 0.951 | 0.986 |
| resnet50 | 0.142 | 0.060 | 0.529 | 0.813 | 0.952 | 0.987 |
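The metrics in the tables above are the standard ones from the depth-estimation literature. Below is a minimal NumPy sketch, assuming `gt` and `pred` are 1-D arrays already masked to valid pixels and that `pred` has been median-scaled to `gt` (predictions are only up-to-scale); the repo's test scripts additionally handle cropping and depth capping.

```python
import numpy as np

def depth_metrics(gt, pred):
    thresh = np.maximum(gt / pred, pred / gt)
    acc1 = (thresh < 1.25).mean()       # Acc.1
    acc2 = (thresh < 1.25 ** 2).mean()  # Acc.2
    acc3 = (thresh < 1.25 ** 3).mean()  # Acc.3

    abs_rel = np.mean(np.abs(gt - pred) / gt)                       # Abs Rel
    sq_rel = np.mean((gt - pred) ** 2 / gt)                         # Sq Rel
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                       # RMSE
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))   # RMSE(log)
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))          # Log10 (NYUv2)

    return abs_rel, sq_rel, rmse, rmse_log, log10, acc1, acc2, acc3
```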

Visual Odometry Results on KITTI odometry dataset

Network prediction (trained on 00-08)

| Metric | Seq. 09 | Seq. 10 |
|---|---|---|
| t_err (%) | 7.31 | 7.79 |
| r_err (degree/100m) | 3.05 | 4.90 |

Pseudo-RGBD SLAM output (Integration of SC-Depth in ORB-SLAM2)

| Metric | Seq. 09 | Seq. 10 |
|---|---|---|
| t_err (%) | 5.08 | 4.32 |
| r_err (degree/100m) | 1.05 | 2.34 |

Related projects