VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation

(Framework figure: overview of the VMNet voxel-mesh architecture)

Created by Zeyu HU

Introduction

This work is based on our paper VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation, which was presented at the IEEE/CVF International Conference on Computer Vision (ICCV) 2021. Update: the TPAMI (ICCV 2021 SI) version has been released.

In recent years, sparse voxel-based methods have become the state of the art for 3D semantic segmentation of indoor scenes, thanks to powerful 3D CNNs. Nevertheless, being oblivious to the underlying geometry, voxel-based methods suffer from ambiguous features on spatially close objects and struggle to handle complex and irregular geometries due to the lack of geodesic information. In view of this, we present Voxel-Mesh Network (VMNet), a novel 3D deep architecture that operates on both voxel and mesh representations, leveraging Euclidean and geodesic information. Intuitively, the Euclidean information extracted from voxels can offer contextual cues representing interactions between nearby objects, while the geodesic information extracted from meshes can help separate objects that are spatially close but have disconnected surfaces. To incorporate such information from the two domains, we design an intra-domain attentive module for effective feature aggregation and an inter-domain attentive module for adaptive feature fusion. Experimental results validate the effectiveness of VMNet: specifically, on the challenging ScanNet dataset for large-scale segmentation of indoor scenes, it outperforms the state-of-the-art SparseConvNet and MinkowskiNet (74.6% vs. 72.5% and 73.6% mIoU) with a simpler network structure (17M vs. 30M and 38M parameters).
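
To make the inter-domain fusion concrete, the sketch below shows one way such an attentive fusion can be written: per-vertex voxel (Euclidean) features are gated into the mesh (geodesic) branch by an attention weight computed from both. This is an illustrative simplification, not the authors' exact module; the names InterDomainFusion, dim, mesh_feat, and voxel_feat are ours.

```python
import torch
import torch.nn as nn

class InterDomainFusion(nn.Module):
    """Illustrative per-vertex attentive fusion of Euclidean (voxel) and
    geodesic (mesh) features. A hypothetical simplification of VMNet's
    inter-domain attentive module, not the paper's implementation."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from mesh (geodesic) features
        self.k = nn.Linear(dim, dim)  # keys from voxel (Euclidean) features
        self.v = nn.Linear(dim, dim)  # values from voxel features
        self.scale = dim ** -0.5

    def forward(self, mesh_feat, voxel_feat):
        # Both inputs: (N, dim), already aligned per mesh vertex
        # (voxel features interpolated / devoxelized onto the vertices).
        attn = (self.q(mesh_feat) * self.k(voxel_feat)).sum(-1, keepdim=True)
        gate = torch.sigmoid(attn * self.scale)       # per-vertex fusion weight
        return mesh_feat + gate * self.v(voxel_feat)  # residual fusion

# Toy usage: 1000 vertices with 64-dim features from each branch.
fuse = InterDomainFusion(64)
out = fuse(torch.randn(1000, 64), torch.randn(1000, 64))
print(out.shape)  # torch.Size([1000, 64])
```

The per-vertex gate lets the network decide, point by point, how much Euclidean context to blend into the geodesic branch.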

Citation

If you find our work useful in your research, please consider citing:

```bibtex
@InProceedings{hu2021vmnet,
  title     = {VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation},
  author    = {Hu, Zeyu and Bai, Xuyang and Shang, Jiaxiang and Zhang, Runze and Dong, Jiayu and Wang, Xin and Sun, Guangyuan and Fu, Hongbo and Tai, Chiew-Lan},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021}
}
```

Installation
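
The concrete installation steps are missing from this copy of the README. As a hedged sketch only, the dependencies listed under Acknowledgements suggest an environment along these lines; exact versions, CUDA wheels, and any additional requirements should be taken from the repository itself:

```bash
# Hypothetical setup sketch; consult the torch-geometric and torchsparse
# docs for the wheels matching your CUDA/PyTorch versions.
conda create -n vmnet python=3.8
conda activate vmnet
pip install torch            # pick the build matching your CUDA version
pip install torch-geometric
pip install git+https://github.com/mit-han-lab/torchsparse.git
```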

Data Preparation
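
The data-preparation instructions are likewise missing here. Conceptually, VMNet consumes two aligned representations per scene: a sparse voxel grid (Euclidean domain) and a triangle mesh (geodesic domain). The sketch below illustrates that pairing; trimesh and numpy are our choices for the example rather than the repository's actual pipeline, and scene_to_voxels_and_mesh and voxel_size are hypothetical names:

```python
import numpy as np
import trimesh

def scene_to_voxels_and_mesh(ply_path, voxel_size=0.02):
    """Load a ScanNet-style mesh and derive the two inputs VMNet operates
    on: unique voxel coordinates (Euclidean domain) and the mesh itself
    (geodesic domain). Illustrative only."""
    mesh = trimesh.load(ply_path, process=False)
    verts = np.asarray(mesh.vertices, dtype=np.float32)
    # Quantize vertex positions to a sparse voxel grid (2 cm cells here).
    coords = np.floor(verts / voxel_size).astype(np.int32)
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    # `inverse` maps every mesh vertex to its voxel, which is what lets
    # features move between the two domains later.
    return voxels, inverse, mesh
```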

Train
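
No training commands survived in this copy either. For orientation, a generic per-point segmentation training step is sketched below; model, loader, and the ignore_index=255 convention for unannotated ScanNet points are assumptions for illustration, not the repository's actual script:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, optimizer, device="cuda"):
    """Generic per-point semantic-segmentation loop (not VMNet's actual
    train script). Labels of 255 mark unannotated points, a common
    ScanNet convention assumed here."""
    criterion = nn.CrossEntropyLoss(ignore_index=255)
    model.train()
    for batch in loader:
        voxels, mesh, labels = (x.to(device) for x in batch)
        optimizer.zero_grad()
        logits = model(voxels, mesh)      # (N_points, num_classes)
        loss = criterion(logits, labels)  # per-point cross-entropy
        loss.backward()
        optimizer.step()
```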

Inference
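
The inference instructions are also absent. Below is a hedged sketch of evaluation with mean IoU, the metric quoted above (e.g. 74.6% on the 20-class ScanNet benchmark); again, model and loader are placeholders rather than the repository's API:

```python
import torch

@torch.no_grad()
def evaluate_miou(model, loader, num_classes=20, ignore=255, device="cuda"):
    """Accumulate a confusion matrix over the dataset and report mIoU."""
    model.eval()
    conf = torch.zeros(num_classes, num_classes, dtype=torch.long)
    for batch in loader:
        voxels, mesh, labels = (x.to(device) for x in batch)
        pred = model(voxels, mesh).argmax(dim=1)
        mask = labels != ignore            # drop unannotated points
        idx = labels[mask] * num_classes + pred[mask]
        conf += torch.bincount(idx, minlength=num_classes ** 2).cpu().view(
            num_classes, num_classes)
    inter = conf.diag().float()
    union = conf.sum(0) + conf.sum(1) - conf.diag()
    return (inter / union.clamp(min=1)).mean().item()
```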

Acknowledgements

Our code is built upon <a href="https://github.com/rusty1s/pytorch_geometric">torch-geometric</a>, <a href="https://github.com/mit-han-lab/torchsparse">torchsparse</a> and <a href="https://github.com/VisualComputingInstitute/dcm-net">dcm-net</a>.

License

Our code is released under the MIT License (see the LICENSE file for details).