Monodepth2

This repository contains the implementation of the methods described in

Improving Self-Supervised Single View Depth Estimation by Masking Occlusion

We introduce an occlusion mask that, during training, identifies regions that cannot be reconstructed due to occlusion and prevents them from contributing to the supervisory signal.

The implementation is a modified version of Monodepth2.

<p align="center"> <img src="assets/target.jpg" alt="reconstruction target" width="600" /> </p> <p align="center"> <img src="assets/reconstruction.jpg" alt="reconstruction" width="600" /> </p> <p align="center"> <img src="assets/occlusion_mask.jpg" alt="occlusion mask" width="600" /> </p>
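The idea can be illustrated with a small PyTorch sketch. This is an illustration under our own simplified conventions, not the repository's exact loss code: occluded pixels are zeroed out of the per-pixel reprojection error before it is reduced to the training loss, so they receive no gradient from the photometric term.

```python
import torch

def masked_reprojection_loss(reprojection_error, occlusion_mask):
    """Exclude occluded pixels from the supervisory signal.

    reprojection_error: [B, 1, H, W] per-pixel photometric error between the
        target image and its reconstruction from a source view.
    occlusion_mask: [B, 1, H, W], 1 where the pixel can be reconstructed from
        the source view, 0 where it is occluded (hypothetical convention).
    """
    masked = reprojection_error * occlusion_mask
    # Average only over non-occluded pixels, so occluded regions contribute
    # nothing to the loss.
    return masked.sum() / occlusion_mask.sum().clamp(min=1.0)
```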

⚙ Setup

You can refer to the README of the original project to set up the required environment.

🖼️ Prediction for a single image

You can predict depth for a single image with:

```shell
python test_simple.py --image_path assets/test_image.jpg --model_name non_occluded_min_640x192
```

On its first run, this will download the non_occluded_min_640x192 pretrained model (99 MB) into the models/ folder. We provide the following options for --model_name:
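If you prefer to run prediction from Python rather than through test_simple.py, the following sketch shows one way to do it. It assumes this repository keeps Monodepth2's layout: a networks package with ResnetEncoder and DepthDecoder, and per-model checkpoints models/&lt;model_name&gt;/encoder.pth and depth.pth. Verify these names against the code before relying on them.

```python
import torch
from torchvision import transforms
import PIL.Image as pil

import networks  # Monodepth2's networks package, assumed unchanged here

model_path = "models/non_occluded_min_640x192"

# Encoder: in Monodepth2 the checkpoint also stores the training resolution.
encoder = networks.ResnetEncoder(18, False)
enc_ckpt = torch.load(model_path + "/encoder.pth", map_location="cpu")
feed_height, feed_width = enc_ckpt["height"], enc_ckpt["width"]
encoder.load_state_dict({k: v for k, v in enc_ckpt.items()
                         if k in encoder.state_dict()})
encoder.eval()

# Depth decoder.
depth_decoder = networks.DepthDecoder(num_ch_enc=encoder.num_ch_enc,
                                      scales=range(4))
depth_decoder.load_state_dict(torch.load(model_path + "/depth.pth",
                                         map_location="cpu"))
depth_decoder.eval()

# Resize the image to the training resolution and predict a disparity map.
image = pil.open("assets/test_image.jpg").convert("RGB")
image = image.resize((feed_width, feed_height), pil.LANCZOS)
inputs = transforms.ToTensor()(image).unsqueeze(0)

with torch.no_grad():
    disp = depth_decoder(encoder(inputs))[("disp", 0)]

print(disp.shape)  # [1, 1, feed_height, feed_width] sigmoid disparity
```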

| --model_name | Photometric loss | Occlusion mask | Model resolution | KITTI abs. rel. error | delta < 1.25 |
|---|---|---|---|---|---|
|  | Average reprojection | No | 640 x 192 | 0.117 | 0.870 |
| non_occluded_avg_640x192 | Non-occluded avg. reprojection | Yes | 640 x 192 | 0.117 | 0.874 |
| mono_640x192 | Per-pixel min. reprojection | No | 640 x 192 | 0.115 | 0.877 |
| non_occluded_min_640x192 | Non-occluded min. reprojection | Yes | 640 x 192 | 0.113 | 0.878 |
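The rows differ in how the reprojection errors from the source frames are combined (their average vs. the per-pixel minimum) and in whether occluded pixels are allowed to contribute. The sketch below illustrates that distinction under our own simplified conventions; it is not the repository's exact loss code.

```python
import torch

def combine_reprojection_errors(errors, occlusion_masks=None, use_min=True):
    """errors: list of [B, 1, H, W] photometric errors, one per source frame.
    occlusion_masks: optional list of matching masks (1 = non-occluded).
    use_min: per-pixel minimum over source frames vs. their average.
    """
    stacked = torch.cat(errors, dim=1)                        # [B, S, H, W]
    if occlusion_masks is None:
        combined = stacked.min(1)[0] if use_min else stacked.mean(1)
        return combined.mean()

    masks = torch.cat(occlusion_masks, dim=1)                 # [B, S, H, W]
    if use_min:
        # Occluded pixels get an infinite error, so the per-pixel minimum
        # only ever picks a source view in which the pixel is visible.
        inf = torch.full_like(stacked, float("inf"))
        combined = torch.where(masks.bool(), stacked, inf).min(1)[0]
    else:
        # Average each pixel's error over its non-occluded source views only.
        combined = (stacked * masks).sum(1) / masks.sum(1).clamp(min=1.0)

    visible_somewhere = masks.sum(1) > 0
    return combined[visible_somewhere].mean()
```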