msg_chn_wacv20

This repository contains the network configurations (in PyTorch) of our paper "A Multi-Scale Guided Cascade Hourglass Network for Depth Completion" by Ang Li, Zejian Yuan, Yonggen Ling, Wanchao Chi, Shenghao Zhang, and Chong Zhang.

Introduction

Depth completion, the task of estimating a dense depth map from sparse measurements under the guidance of a high-resolution image, is essential to many computer vision applications. Most previous methods built on fully convolutional networks cannot handle the diverse patterns in the depth map efficiently and effectively. We propose a multi-scale guided cascade hourglass network to tackle this problem. Structures at different levels are captured by specialized hourglasses in the cascade network, which take sparse inputs at various sizes. An encoder extracts multi-scale features from the color image to provide deep guidance for all the hourglasses. A multi-scale training strategy further activates the effect of the cascade stages. With the role of each sub-module divided explicitly, we can implement the components with simple architectures. Extensive experiments show that our lightweight model achieves results competitive with the state of the art on the KITTI depth completion benchmark, with low run-time complexity.

<p align="center"> <img src="./demo/video5.gif" alt="photo not available" height="50%"> </p>

Dependency

The network is implemented in PyTorch.

Network

The implementation of our network is in network.py. It takes the sparse depth map and the RGB image (normalized to [0, 1]) as inputs, and outputs the predictions from the last, the second, and the first sub-network, in that order. The output from the last sub-network (output_d11) is used for the final test.

```
Inputs:  input_d, input_rgb
Outputs: output_d11, output_d12, output_d14
         # outputs from the last, the second, and the first sub-network
```
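
For concreteness, here is a minimal sketch of invoking the model; the class name `Network`, the import path, and the input resolution are assumptions, so check network.py for the actual definition:

```python
import torch
from network import Network  # hypothetical class name; see network.py

model = Network().eval()

# Sparse depth: zeros at unmeasured pixels; RGB normalized to [0, 1].
# 352 x 1216 is the usual KITTI depth completion crop size.
input_d = torch.zeros(1, 1, 352, 1216)
input_rgb = torch.rand(1, 3, 352, 1216)

with torch.no_grad():
    output_d11, output_d12, output_d14 = model(input_d, input_rgb)

pred = output_d11  # prediction of the last sub-network, used for the final test
```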

※NOTE: We recently modified the architecture by adding skip connections between the depth encoders and the depth decoders of the previous stage. This version of the network has 32 channels rather than the 64 channels in our paper. The 32-channel network performs similarly to the 64-channel network on the test set, but has far fewer parameters and a shorter run time. You can find more details in Results.

Training and Evaluation

We supervise all three outputs with an L2 loss and shift the weight to the final output (output_d11) as training progresses:

```python
loss14 = L2Loss(output_d14, label)  # first sub-network (coarsest scale)
loss12 = L2Loss(output_d12, label)  # second sub-network
loss11 = L2Loss(output_d11, label)  # last sub-network (final prediction)

if epoch < 6:
    loss = loss14 + loss12 + loss11              # equal weights early on
elif epoch < 11:
    loss = 0.1 * loss14 + 0.1 * loss12 + loss11  # down-weight auxiliary losses
else:
    loss = loss11                                # final output only
```
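
`L2Loss` is left symbolic above. A minimal sketch of one plausible definition, assuming it is a mean squared error over pixels with valid ground truth (KITTI stores missing depth labels as 0):

```python
import torch

def L2Loss(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    # Assumed definition: MSE restricted to valid ground-truth pixels.
    # KITTI marks pixels without a depth label as 0.
    valid = label > 0
    return ((pred[valid] - label[valid]) ** 2).mean()
```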

The loss drops very little after 20 epochs; we trained for 28 epochs to get the final model. More training configurations are given in params.json.
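
The keys in params.json are not reproduced here; a generic way to load them (the key names in the comment are only illustrative):

```python
import json

with open("params.json") as f:
    params = json.load(f)  # e.g. learning rate, batch size, number of epochs
```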

Results

The performance of our network is given in the table below. We validate our model on both the validation set (val) and the selected validation data (val_selection_cropped) of the KITTI dataset.

| | RMSE (mm) | MAE (mm) | iRMSE (1/km) | iMAE (1/km) | #Params |
| --- | --- | --- | --- | --- | --- |
| validation | 821.94 | 227.94 | 2.47 | 0.98 | 364K |
| selected validation | 817.08 | 224.83 | 2.48 | 0.99 | 364K |
| test | 783.49 | 226.91 | 2.35 | 1.01 | 364K |
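
For reference, a sketch of how these four metrics are conventionally computed for the KITTI benchmark; the function below is ours, not from the repository, and assumes metre-valued depth maps with 0 at invalid ground-truth pixels:

```python
import torch

def kitti_metrics(pred: torch.Tensor, gt: torch.Tensor):
    # Evaluate only where ground truth exists (invalid pixels are 0).
    valid = gt > 0
    p = pred[valid].clamp(min=1e-3)  # avoid division by zero in inverse depth
    g = gt[valid]
    rmse = torch.sqrt(((p - g) ** 2).mean()) * 1000.0           # mm
    mae = (p - g).abs().mean() * 1000.0                         # mm
    irmse = torch.sqrt(((1 / p - 1 / g) ** 2).mean()) * 1000.0  # 1/km
    imae = (1 / p - 1 / g).abs().mean() * 1000.0                # 1/km
    return rmse, mae, irmse, imae
```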

You can find our final model and the test results on the KITTI dataset on Google Drive or Baidu Web Drive (password: mx84).

Citation

If you use our code or method in your work, please cite the following:

```bibtex
@inproceedings{li2020multi,
  title={A Multi-Scale Guided Cascade Hourglass Network for Depth Completion},
  author={Li, Ang and Yuan, Zejian and Ling, Yonggen and Chi, Wanchao and Zhang, Chong and others},
  booktitle={The IEEE Winter Conference on Applications of Computer Vision},
  pages={32--40},
  year={2020}
}
```

Please direct any questions to Ang Li at angli522@foxmail.com.