# Mirror3D: Depth Refinement for Mirror Surfaces
Jiaqi Tan, Weijie Lin, Angel X. Chang, Manolis Savva
## Preparation for all implementations
```shell
mkdir workspace && cd workspace
### Put data under the dataset folder
mkdir dataset
### Clone this repo and pull all submodules
git clone --recursive https://github.com/3dlg-hcvc/mirror3d.git
```
## Environment Setup
- Python 3.7.4
```shell
### Install packages
cd mirror3d && pip install -e .
### Set up Detectron2
python -m pip install git+https://github.com/facebookresearch/detectron2.git
```
## Dataset
Please refer to Mirror3D Dataset for instructions on how to prepare the mirror data, and visit our project website for updates and to browse more data.
<table width="80%" border="0" > <tr> <th> Matterport3D </th> <th> ScanNet </th> <th> NYUv2 </th> </tr> <tr> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/mp3d-data.png" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/scannet-data.png" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/nyu-data.png" /> </td> </tr> <tr color="white"> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/mp3d-data.gif" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/scannet-data.gif" /> </td> <td align="center" valign="center" style="width:30%;height: 250px;"> <img width=auto height="200" src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/nyu-data.gif" /> </td> </tr> </table>

## Mirror annotation tool
Please refer to User Instruction for instructions on how to annotate mirror data.
## Models
### Mirror3DNet PyTorch Implementation
The Mirror3DNet architecture can take either an RGB image or an RGBD image as input. For an RGB input, we refine the predicted depth map D<sub>pred</sub> output by a depth estimation module. For an RGBD input, we refine a noisy input depth D<sub>noisy</sub>.
<p align="center"> <img src="http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/img/readme_img/network-arch-cr-new.jpg"> </p>

Please check Mirror3DNet for our network's PyTorch implementation.
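As background for the refinement step, once a mirror region and its plane are estimated, the depth inside the mask can be recomputed geometrically by intersecting each pixel's camera ray with the mirror plane. The NumPy sketch below illustrates this idea only; it is not the Mirror3DNet implementation, and the function name and arguments are ours.

```python
import numpy as np

def refine_mirror_depth(depth, mask, normal, offset, fx, fy, cx, cy):
    """Replace depth inside a mirror mask with ray-plane intersections.

    Illustrative sketch (not the repo's code). Assumes a pinhole camera at
    the origin looking down +z, depth = z-coordinate, and a mirror plane
    satisfying dot(normal, p) = offset in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Per-pixel ray direction, unnormalised with z = 1 so that the ray
    # parameter t equals the z-depth at the intersection point.
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(depth)], axis=-1)
    denom = rays @ np.asarray(normal, dtype=float)   # dot(n, ray) per pixel
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)  # guard near-parallel rays
    t = offset / denom
    refined = depth.copy()
    refined[mask] = t[mask]
    return refined
```

For a fronto-parallel mirror (normal `(0, 0, 1)`, offset `d`), every masked pixel gets depth `d`, which matches the intuition that the mirror surface itself, not the reflected content, should determine the depth.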
### Initial Depth Generator Implementation
We test three methods on our dataset:
- BTS: From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation
- VNL: Enforcing geometric constraints of virtual normal for depth prediction
- saic: Decoder Modulation for Indoor Depth Completion
We updated the dataloaders and the main train/test scripts in the original repositories to support our input format.
### Network input
Our network inputs are JSON files stored in the COCO annotation format. Please download the network input JSON files to train and test our models.
## Training
Please remember to prepare the mirror data according to Mirror3D Dataset before training and inference.
To train our models, please run:
```shell
cd workspace
### Download network input json
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/mirror3d_input.zip
unzip mirror3d_input.zip
### Get R-50.pkl from detectron2 to train Mirror3DNet and PlaneRCNN
mkdir checkpoint && cd checkpoint
wget https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl
cd ../mirror3d
### Train on NYUv2 mirror data
bash script/nyu_train.sh
### Train on Matterport3D mirror data
bash script/mp3d_train.sh
```
By default, we put the unzipped data and network input packages under `../dataset`. Please change the relevant configuration if you store the data in different directories. Output checkpoints and TensorBoard log files are saved under `--log_directory`.
## Inference
```shell
cd workspace
### Download all pretrained checkpoints
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/checkpoint.zip
unzip checkpoint.zip
### Download network input json
wget http://aspis.cmpt.sfu.ca/projects/mirrors/mirror3d_zip_release/mirror3d_input.zip
unzip mirror3d_input.zip
cd mirror3d
### Inference on NYUv2 mirror data
bash script/nyu_infer.sh
### Inference on Matterport3D mirror data
bash script/mp3d_infer.sh
```
Output depth maps are saved under a folder named `pred_depth`. Optional: if you want to view all inference results on an HTML webpage, please run all steps in `mirror3d/visualization/result_visualization.py`.
## Pretrained checkpoints
Individual checkpoints are included in the `checkpoint.zip` above. Please use the `wget` command to download a .zip file if there is no response when clicking its link.
Source Dataset | Input | Train | Method | Model Download |
---|---|---|---|---|
NYUv2 | RGBD | raw sensor depth | saic | saic_rawD.zip |
NYUv2 | RGBD | refined sensor depth | saic | saic_refD.zip |
NYUv2 | RGB | raw sensor depth | BTS | bts_nyu_v2_pytorch_densenet161.zip |
NYUv2 | RGB | refined sensor depth | BTS | bts_refD.zip |
NYUv2 | RGB | raw sensor depth | VNL | nyu_rawdata.pth |
NYUv2 | RGB | refined sensor depth | VNL | vnl_refD.zip |
Matterport3D | RGBD | raw mesh depth | Mirror3DNet | mirror3dnet_rawD.zip |
Matterport3D | RGBD | refined mesh depth | Mirror3DNet | mirror3dnet_refD.zip |
Matterport3D | RGBD | raw mesh depth | PlaneRCNN | planercnn_rawD.zip |
Matterport3D | RGBD | refined mesh depth | PlaneRCNN | planercnn_refD.zip |
Matterport3D | RGBD | raw mesh depth | saic | saic_rawD.zip |
Matterport3D | RGBD | refined mesh depth | saic | saic_refD.zip |
Matterport3D | RGB | * | Mirror3DNet | mirror3dnet.zip |
Matterport3D | RGB | raw mesh depth | BTS | bts_rawD.zip |
Matterport3D | RGB | refined mesh depth | BTS | bts_refD.zip |
Matterport3D | RGB | raw mesh depth | VNL | vnl_rawD.zip |
Matterport3D | RGB | refined mesh depth | VNL | vnl_refD.zip |