Image Processing for Basic Depth Completion (IP-Basic)

Depth completion is the task of converting a sparse depth map D<sub>sparse</sub> into a dense depth map D<sub>dense</sub>. This algorithm was originally created to help visualize 3D object detection results for AVOD.

An accurate dense depth map can also benefit 3D object detection or SLAM algorithms that use point cloud input. This method uses an unguided approach (images are ignored, only LIDAR projections are used). Basic depth completion is done with OpenCV and NumPy operations in Python. For more information, please see our paper: In Defense of Classical Image Processing: Fast Depth Completion on the CPU.
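The core idea can be illustrated with a minimal NumPy-only sketch (not the repo's actual pipeline, which uses OpenCV kernels and several more stages): invert the valid depths so that a dilation (max filter) favors closer points, dilate to fill the gaps between projected LIDAR points, then invert back. The function names here are illustrative, not part of the IP-Basic API.

```python
import numpy as np

def dilate_max(depth, k):
    """Grayscale dilation (max filter) with a k x k square kernel, via padding."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="constant", constant_values=0)
    out = np.zeros_like(depth)
    h, w = depth.shape
    for dy in range(k):
        for dx in range(k):
            np.maximum(out, padded[dy:dy + h, dx:dx + w], out)
    return out

def simple_depth_completion(depth_map, max_depth=100.0):
    """Toy unguided completion: invert, dilate, invert back.

    depth_map: float32 array in meters, 0 where there is no measurement.
    """
    depth = depth_map.astype(np.float32).copy()
    valid = depth > 0.1
    # Inversion trick: after inverting, the max filter prefers closer points.
    depth[valid] = max_depth - depth[valid]
    depth = dilate_max(depth, 5)
    valid = depth > 0.1
    depth[valid] = max_depth - depth[valid]
    return depth
```

The full algorithm adds custom dilation kernel shapes, hole filling with larger kernels, extrapolation to the top of the frame, and a final blur, but the inversion-and-dilation step above is the heart of the unguided approach.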

Please visit https://github.com/kujason/scene_vis for 3D point cloud visualization demos on raw KITTI data.

If you use this code, we would appreciate it if you cite our paper: In Defense of Classical Image Processing: Fast Depth Completion on the CPU

@inproceedings{ku2018defense,
  title={In Defense of Classical Image Processing: Fast Depth Completion on the CPU},
  author={Ku, Jason and Harakeh, Ali and Waslander, Steven L},
  booktitle={2018 15th Conference on Computer and Robot Vision (CRV)},
  pages={16--22},
  year={2018},
  organization={IEEE}
}

Videos

Click here for a short demo video with comparison of different versions.

Demo Video

Click here to see point clouds from additional KITTI raw sequences. Note that the structure of smaller or thin objects (e.g. poles, bicycle wheels, pedestrians) is well preserved after depth completion.

Extra Scenes

Also see an earlier version of the algorithm in action here (2 top views).


Setup

Tested on Ubuntu 16.04 with Python 3.5.

  1. Download and unzip the KITTI depth completion benchmark dataset into ~/Kitti/depth (only the val_selection_cropped and test data sets are required to run the demo).
  2. Clone this repo and install the Python requirements:
git clone git@github.com:kujason/ip_basic.git
cd ip_basic
pip3 install -r requirements.txt
  3. Run the script:
python3 demos/depth_completion.py

This will run the algorithm on the cropped validation set and save the outputs to a new folder in demos/outputs. Refer to the readme in the downloaded devkit to evaluate the results.
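The benchmark stores depth maps as 16-bit PNGs, where depth in meters equals the pixel value divided by 256 and a value of 0 marks pixels with no measurement. A sketch of that value conversion (PNG I/O itself, e.g. with cv2.imread using cv2.IMREAD_ANYDEPTH, is omitted; the function names are illustrative):

```python
import numpy as np

def decode_kitti_depth(png_values):
    """Convert raw uint16 PNG values to depth in meters (0 stays 'no data')."""
    return png_values.astype(np.float32) / 256.0

def encode_kitti_depth(depth_m):
    """Convert depth in meters back to the uint16 PNG encoding."""
    return (np.clip(depth_m, 0.0, 255.0) * 256.0).astype(np.uint16)
```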

  4. (Optional) Set options in depth_completion.py
  5. (Optional) To pin the algorithm to a specific CPU core (e.g. core 0):
taskset --cpu-list 0 python3 demos/depth_completion.py

Results

KITTI Test Set Evaluation

| Method | iRMSE | iMAE | RMSE | MAE | Device | Runtime | FPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NadarayaW | 6.34 | 1.84 | 1852.60 | 416.77 | CPU (1 core) | 0.05 s | 20 |
| SparseConvs | 4.94 | 1.78 | 1601.33 | 481.27 | GPU | 0.01 s | 100 |
| NN+CNN | 3.25 | 1.29 | 1419.75 | 416.14 | GPU | 0.02 s | 50 |
| IP-Basic | 3.75 | 1.29 | 1288.46 | 302.60 | CPU (1 core) | 0.011 s | 90 |

Table: Comparison of results with other published unguided methods on the KITTI Depth Completion benchmark.

Versions

Several versions are provided for experimentation.

Timing Comparisons

The table below shows a comparison of timing on an Intel Core i7-7700K for different versions. The Gaussian versions can be run on a single core, while other versions run faster with multiple cores. The bilateral blur version with no extrapolation is recommended for practical applications.

| Version | Runtime | FPS |
| --- | --- | --- |
| Gaussian (Paper Result, Lowest RMSE) | 0.0111 s | 90 |
| Bilateral | 0.0139 s | 71 |
| Gaussian, No Extrapolation | 0.0075 s | 133 |
| Bilateral, No Extrapolation (Lowest MAE) | 0.0115 s | 87 |
| Multi-Scale, Bilateral, Noise Removal, No Extrapolation | 0.0328 s | 30 |

Table: Timing comparison for different versions.
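Runtimes like these can be reproduced in spirit by averaging wall-clock time over a set of frames after a few warmup runs. A hypothetical helper (not part of this repo) for measuring per-frame runtime and FPS of any completion function:

```python
import time

def benchmark(fn, frames, warmup=5):
    """Return (average seconds per frame, frames per second) for fn."""
    # Warmup runs so caches and allocations don't skew the measurement.
    for frame in frames[:warmup]:
        fn(frame)
    start = time.perf_counter()
    for frame in frames:
        fn(frame)
    avg = (time.perf_counter() - start) / len(frames)
    return avg, 1.0 / avg
```

When comparing single-core numbers, combine this with the taskset invocation shown in the setup section so the OS scheduler does not spread the work across cores.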

Examples

Qualitative results from the Multi-Scale, Bilateral, Noise Removal, No Extrapolation version on samples from the KITTI object detection benchmark.

Cars

People