
Disentangled Instance Mesh Reconstruction (ECCV 2022)

This repository contains the official implementation for the paper: Point Scene Understanding via Disentangled Instance Mesh Reconstruction.

Project Page | Arxiv | Data

(teaser figure)

Installation

Clone the repository:

git clone --recursive https://github.com/ashawkey/dimr
cd dimr

pip install -r requirements.txt

Install dependent libraries:

The repository depends on a modified spconv from pointgroup for sparse convolution, which requires CUDA < 11 and PyTorch < 1.5.
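
A quick sanity check of the version constraint above (a minimal sketch; it only verifies the installed PyTorch/CUDA versions and does not build spconv):

# verify the installed PyTorch/CUDA combination satisfies the spconv constraint above
import torch

print("torch", torch.__version__, "cuda", torch.version.cuda)
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) < (1, 5), "PyTorch >= 1.5 is not supported by the bundled spconv"
cuda_major = int(torch.version.cuda.split(".")[0]) if torch.version.cuda else None
assert cuda_major is not None and cuda_major < 11, "CUDA >= 11 is not supported by the bundled spconv"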

Data Preparation

Full folder structure

.
├──datasets
│   ├── scannet
│   │   ├── scans # scannet scans
│   │   │   ├── scene0000_00 # only these 4 files are used.
│   │   │   │   ├── scene0000_00.txt
│   │   │   │   ├── scene0000_00_vh_clean_2.ply
│   │   │   │   ├── scene0000_00.aggregation.json
│   │   │   │   ├── scene0000_00_vh_clean_2.0.010000.segs.json
│   │   │   ├── ......
│   │   │   ├── scene0706_00
│   │   ├── scan2cad # scan2cad; only this one file is used.
│   │   │   ├── full_annotations.json
│   │   ├── scannetv2-labels-combined.tsv # scannet label mappings
│   │   ├── processed_data # preprocessed data
│   │   │   ├── scene0000_00 
│   │   │   │   ├── bbox.pkl
│   │   │   │   ├── data.npz
│   │   │   ├── ......
│   │   │   ├── scene0706_00
│   │   ├── rfs_label_map.csv # generated label mappings
│   ├── ShapeNetCore.v2 # shapenet core v2 dataset
│   │   ├── 02954340
│   │   ├── ......
│   │   ├── 04554684
│   ├── ShapeNetv2_data # preprocessed shapenet dataset
│   │   ├── watertight_scaled_simplified
│   ├── bsp # the pretrained bsp model
│   │   ├── zs
│   │   ├── database_scannet.npz
│   │   ├── model.pth
│   ├── splits # data splits
│   │   ├── train.txt
│   │   ├── val.txt
│   │   ├── test.txt

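For orientation, a minimal sketch that inspects one preprocessed scene from the tree above; it only prints whatever the preprocessing scripts stored, so no field names are assumed:

# peek at one preprocessed scene (file names taken from the folder structure above)
import pickle
import numpy as np

scene = "datasets/scannet/processed_data/scene0000_00"

data = np.load(f"{scene}/data.npz")
print("data.npz arrays:", {k: data[k].shape for k in data.files})

with open(f"{scene}/bbox.pkl", "rb") as f:
    bboxes = pickle.load(f)  # per-instance box annotations; exact contents depend on the preprocessing script
print("bbox.pkl:", type(bboxes).__name__)
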
Prepare the data

Training

# train phase 1 (point-wise)
python train.py --config config/rfs_phase1_scannet.yaml

# train phase 2 (proposal-wise)
python train.py --config config/rfs_phase2_scannet.yaml

Please check the config files for more options.

Testing

Generate completed instance meshes:

# test after training phase 2
python test.py --config config/rfs_phase2_scannet.yaml
# example path for the meshes: ./exp/scannetv2/rfs/rfs_phase2_scannet/result/epoch256_nmst0.3_scoret0.05_npointt100/val/trimeshes/

# test with a specified checkpoint
python test.py --config config/rfs_pretrained_scannet.yaml --pretrain ./checkpoint.pth

We provide the pretrained model here.

To visualize the intermediate point-wise results:

python util/visualize.py --task semantic_gt --room_name all
python util/visualize.py --task instance_gt --room_name all
# after running test.py, you may need to change `--result_root` to the output directory; check the script for more details.
python util/visualize.py --task semantic_pred --room_name all
python util/visualize.py --task instance_pred --room_name all

Evaluation

We provide 4 metrics for evaluating the instance mesh reconstruction quality. For the IoU evaluation, we rely on binvox to voxelize meshes (via trimesh's API), so make sure the binvox binary can be found on the system PATH.
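
A quick way to confirm binvox is actually discoverable before running the IoU script (a minimal check, not part of the evaluation code):

# trimesh shells out to the binvox binary, so it must be on PATH
import shutil
print(shutil.which("binvox") or "binvox not found; install it and add it to PATH")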

## first, prepare GT instance meshes 
python data/scannetv2_inst.py # writes the GT meshes to "./datasets/gt_meshes"

## assume the generated meshes are under "./pred_meshes"
# IoU
python evaluation/iou/eval.py ./datasets/gt_meshes ./pred_meshes

# CD
python evaluation/cd/eval.py ./datasets/gt_meshes ./pred_meshes

# LFD
python evaluation/lfd/eval.py ./datasets/gt_meshes ./pred_meshes

# PCR
python evaluation/pcr/eval.py ./pred_meshes
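
For intuition only, here is a rough Monte-Carlo sketch of what the volumetric IoU metric measures, using trimesh's point-containment test on a pair of watertight meshes. This is not evaluation/iou/eval.py (which voxelizes with binvox), and the mesh file names below are hypothetical:

# rough Monte-Carlo volumetric IoU between two watertight meshes (illustration only)
import numpy as np
import trimesh

def approx_iou(mesh_a, mesh_b, n=100_000, seed=0):
    # sample points in the joint bounding box and test containment in both meshes
    lo = np.minimum(mesh_a.bounds[0], mesh_b.bounds[0])
    hi = np.maximum(mesh_a.bounds[1], mesh_b.bounds[1])
    pts = np.random.default_rng(seed).uniform(lo, hi, size=(n, 3))
    in_a, in_b = mesh_a.contains(pts), mesh_b.contains(pts)
    union = np.logical_or(in_a, in_b).sum()
    return np.logical_and(in_a, in_b).sum() / max(union, 1)

gt = trimesh.load("datasets/gt_meshes/scene0011_00/0.ply", force="mesh")  # hypothetical file name
pred = trimesh.load("pred_meshes/scene0011_00/0.ply", force="mesh")       # hypothetical file name
print("approx IoU:", approx_iou(gt, pred))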

We provide the meshes used in our evaluation for reproduction here; they include the output meshes from RfD-Net, Ours, Ours-projection, and Ours-retrieval.

The GT meshes can also be found here.

Citation

If you find our work useful, please use the following BibTeX entry:

@article{tang2022point,
  title={Point Scene Understanding via Disentangled Instance Mesh Reconstruction},
  author={Tang, Jiaxiang and Chen, Xiaokang and Wang, Jingbo and Zeng, Gang},
  journal={arXiv preprint arXiv:2203.16832},
  year={2022}
}

Acknowledgement

We would like to thank the RfD-Net and pointgroup authors for open-sourcing their great work!