
Mirror3D: Depth Refinement for Mirror Surfaces

Jiaqi Tan, Weijie Lin, Angel X. Chang, Manolis Savva

Preparation for all implementations

```shell
mkdir workspace && cd workspace

### Put data under dataset folder
mkdir dataset

### Clone this repo and pull all submodules
git clone --recursive https://github.com/3dlg-hcvc/mirror3d.git
```

Environment Setup

```shell
### Install packages
cd mirror3d && pip install -e .

### Setup Detectron2
python -m pip install git+https://github.com/facebookresearch/detectron2.git
```

Dataset

Please refer to Mirror3D Dataset for instructions on how to prepare the mirror data, and visit our project website for updates and to browse more data.

<table width="80%" border="0" > <tr> <th> Matterport3D </th> <th> ScanNet </th> <th> NYUv2 </th> </tr> <tr> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/mp3d-data.png" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/scannet-data.png" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/nyu-data.png" /> </td> </tr> <tr color="white"> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/mp3d-data.gif" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/scannet-data.gif" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/nyu-data.gif" /> </td> </tr> </table>

Mirror annotation tool

Please refer to User Instruction for instructions on how to annotate mirror data.

Models

Mirror3DNet PyTorch Implementation

The Mirror3DNet architecture can take either an RGB image or an RGBD image as input. For an RGB input, we refine the predicted depth map D<sub>pred</sub> output by a depth estimation module. For an RGBD input, we refine the noisy input depth D<sub>noisy</sub>.

<p align="center"> <img src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/network-arch-cr-new.jpg"> </p>

Please check Mirror3DNet for our network's PyTorch implementation.

Initial Depth Generator Implementation

We test three initial depth estimation methods on our dataset:

- BTS
- VNL
- saic

We updated the dataloader and the main train/test scripts in the original repositories to support our input format.

Network input

Our network inputs are JSON files stored in COCO annotation format. Please download the network input JSON files to train and test our models.
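As a rough sketch of that layout, a COCO-style annotation file has top-level `images`, `annotations`, and `categories` arrays. The values below are purely illustrative, and the extra mirror-specific fields our files carry are not shown:

```shell
### Minimal COCO-style skeleton (values illustrative; Mirror3D's JSON files
### carry additional mirror annotation fields not shown here)
cat > /tmp/mirror_demo.json <<'EOF'
{
  "images": [{"id": 1, "file_name": "color/0001.jpg", "height": 480, "width": 640}],
  "annotations": [{"id": 1, "image_id": 1, "category_id": 1, "bbox": [10, 20, 100, 80]}],
  "categories": [{"id": 1, "name": "mirror"}]
}
EOF

### Sanity-check that the file parses as JSON
python -m json.tool /tmp/mirror_demo.json > /dev/null && echo "valid JSON"
```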

Training

Please remember to prepare the mirror data according to Mirror3D Dataset before training and inference.

To train our models please run:

```shell
cd workspace

### Download network input json
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/mirror3d_input.zip
unzip mirror3d_input.zip

### Get R-50.pkl from detectron2 to train Mirror3DNet and PlaneRCNN
mkdir checkpoint && cd checkpoint
wget https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl

cd ../mirror3d

### Train on NYUv2 mirror data
bash script/nyu_train.sh

### Train on Matterport3D mirror data
bash script/mp3d_train.sh
```

By default, we put the unzipped data and network input packages under `../dataset`. Please change the relevant configuration if you store the data in a different directory. Output checkpoints and TensorBoard log files are saved under `--log_directory`.
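For reference, the workspace produced by the steps above looks roughly like this (directory names taken from this README; adjust paths in the scripts if you relocate anything):

```
workspace
├── dataset           # unzipped mirror data
├── mirror3d_input    # network input json
├── checkpoint        # R-50.pkl and pretrained checkpoints
└── mirror3d          # this repository
```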

Inference

```shell
### Download all pretrained checkpoints
cd workspace
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/checkpoint.zip
unzip checkpoint.zip

### Download network input json
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/mirror3d_input.zip
unzip mirror3d_input.zip
cd mirror3d

### Inference on NYUv2 mirror data
bash script/nyu_infer.sh

### Inference on Matterport3D mirror data
bash script/mp3d_infer.sh
```

Output depth maps are saved under a folder named `pred_depth`. Optional: to view all inference results on an HTML webpage, please run all steps in `mirror3d/visualization/result_visualization.py`.
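As a quick sanity check after inference, you can count the predicted depth maps that were written. The demo directory below stands in for a real output folder; only the `pred_depth` folder name comes from this README, and the flat `.png` layout is an assumption:

```shell
### Demo directory standing in for a real inference output folder
out=/tmp/mirror3d_demo_output
mkdir -p "$out/pred_depth"
touch "$out/pred_depth/0001.png" "$out/pred_depth/0002.png"

### Count predicted depth maps under pred_depth
find "$out" -path '*/pred_depth/*.png' | wc -l
```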

Pretrained checkpoint

Individual checkpoints are included in the checkpoint.zip above. If clicking a link gives no response, please use the wget command to download the .zip file instead.

| Source Dataset | Input | Train | Method | Model Download |
|---|---|---|---|---|
| NYUv2 | RGBD | raw sensor depth | saic | saic_rawD.zip |
| NYUv2 | RGBD | refined sensor depth | saic | saic_refD.zip |
| NYUv2 | RGB | raw sensor depth | BTS | bts_nyu_v2_pytorch_densenet161.zip |
| NYUv2 | RGB | refined sensor depth | BTS | bts_refD.zip |
| NYUv2 | RGB | raw sensor depth | VNL | nyu_rawdata.pth |
| NYUv2 | RGB | refined sensor depth | VNL | vnl_refD.zip |
| Matterport3D | RGBD | raw mesh depth | Mirror3DNet | mirror3dnet_rawD.zip |
| Matterport3D | RGBD | refined mesh depth | Mirror3DNet | mirror3dnet_refD.zip |
| Matterport3D | RGBD | raw mesh depth | PlaneRCNN | planercnn_rawD.zip |
| Matterport3D | RGBD | refined mesh depth | PlaneRCNN | planercnn_refD.zip |
| Matterport3D | RGBD | raw mesh depth | saic | saic_rawD.zip |
| Matterport3D | RGBD | refined mesh depth | saic | saic_refD.zip |
| Matterport3D | RGB | * | Mirror3DNet | mirror3dnet.zip |
| Matterport3D | RGB | raw mesh depth | BTS | bts_rawD.zip |
| Matterport3D | RGB | refined mesh depth | BTS | bts_refD.zip |
| Matterport3D | RGB | raw mesh depth | VNL | vnl_rawD.zip |
| Matterport3D | RGB | refined mesh depth | VNL | vnl_refD.zip |