3DMV

3DMV jointly combines RGB color and geometric information to perform 3D semantic segmentation of RGB-D scans. This work is based on our ECCV'18 paper, 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation.

<img src="images/teaser.jpg">

Code

Installation:

Training is implemented in PyTorch. The code was originally developed under PyTorch 0.2 and has since been upgraded to PyTorch 0.4.
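As a quick sanity check of the environment (this snippet is not part of the repository), you can verify the installed PyTorch version and that a GPU is visible before training:

```python
# Environment sanity check (illustrative only, not part of the repo).
import torch

print(torch.__version__)          # expect a 0.4.x build
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```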

Training:

python train.py --gpu 0 --train_data_list [path to list of train files] --data_path_2d [path to 2d image data] --class_weight_file [path to txt file of train histogram] --num_nearest_images 5 --model2d_path [path to pretrained 2d model]
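The --class_weight_file flag points to a plain-text histogram of per-class counts over the training set. The exact weighting scheme is defined in the repository code; the sketch below is only an illustration, assuming one count per class per line and using inverse-log-frequency weighting as a common heuristic:

```python
# Illustrative only: derive per-class weights from a text histogram
# (one count per class per line). The repository's own weighting
# may differ; inverse-log-frequency is just a common choice.
import numpy as np

def load_class_weights(histogram_path):
    counts = np.loadtxt(histogram_path)   # shape: (num_classes,)
    freqs = counts / counts.sum()         # normalized class frequencies
    weights = 1.0 / np.log(1.2 + freqs)   # rarer classes get larger weights
    return weights

# Example usage with a hypothetical path:
# weights = load_class_weights('scannet_train_histogram.txt')
```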

Testing:

python test.py --gpu 0 --scene_list test_scenes.txt --model_path models/scannetv2/scannet5_model.pth --data_path_2d [path to 2d image data] --data_path_3d [path to test scene data] --num_nearest_images 5 --model2d_orig_path models/scannetv2/scannet5_model2d.pth
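The --scene_list argument is a text file listing one scene per line. If you need to regenerate it, a minimal sketch is shown below; the directory layout and file naming it assumes are placeholders for your local setup, not something prescribed by the repository:

```python
# Minimal sketch: write one scene id per line for --scene_list.
# Paths and naming are assumptions about a local setup.
import os

data_path_3d = 'data/scannetv2_test'  # hypothetical path to test scene data
scene_ids = sorted(os.path.splitext(f)[0] for f in os.listdir(data_path_3d))

with open('test_scenes.txt', 'w') as f:
    f.write('\n'.join(scene_ids) + '\n')
```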

Data:

The data used in this project has been precomputed from the ScanNet (v2) dataset.

Citation:

If you find our work useful in your research, please consider citing:

@inproceedings{dai20183dmv,
 author = {Dai, Angela and Nie{\ss}ner, Matthias},
 booktitle = {Proceedings of the European Conference on Computer Vision ({ECCV})},
 title = {3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation},
 year = {2018}
}

Contact:

If you have any questions, please email Angela Dai at adai@cs.stanford.edu.