3D-STMN

NEWS:πŸ”₯3D-STMN is accepted at AAAI 2024!πŸ”₯

πŸ”₯This branch is for end-to-end training (about 31G of GPU RAM is needed). To save the GPU RAM by preprocessing features before training, please switch to the feat branch (only 7G of GPU RAM is needed for training).πŸ”₯

3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Referring Expression Segmentation

Changli Wu, Yiwei Ma, Qi Chen, Haowei Wang, Gen Luo, Jiayi Ji*, Xiaoshuai Sun

<img src="docs/3D-STMN.png"/>

Introduction

In 3D Referring Expression Segmentation (3D-RES), the earlier approach adopts a two-stage paradigm, extracting segmentation proposals and then matching them with referring expressions. However, this conventional paradigm encounters significant challenges, most notably in terms of the generation of lackluster initial proposals and a pronounced deceleration in inference speed. Recognizing these limitations, we introduce an innovative end-to-end Superpoint-Text Matching Network (3D-STMN) that is enriched by dependency-driven insights. One of the keystones of our model is the Superpoint-Text Matching (STM) mechanism. Unlike traditional methods that navigate through instance proposals, STM directly correlates linguistic indications with their respective superpoints, clusters of semantically related points. This architectural decision empowers our model to efficiently harness cross-modal semantic relationships, primarily leveraging densely annotated superpoint-text pairs, as opposed to the more sparse instance-text pairs. In pursuit of enhancing the role of text in guiding the segmentation process, we further incorporate the Dependency-Driven Interaction (DDI) module to deepen the network's semantic comprehension of referring expressions. Using the dependency trees as a beacon, this module discerns the intricate relationships between primary terms and their associated descriptors in expressions, thereby elevating both the localization and segmentation capacities of our model. Comprehensive experiments on the ScanRefer benchmark reveal that our model not only sets new performance standards, registering an mIoU gain of 11.7 points, but also achieves a staggering enhancement in inference speed, surpassing traditional methods by 95.7 times.
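The STM idea can be pictured as a direct similarity between superpoint features and text features, from which a per-superpoint mask is read off without any instance proposals. The sketch below is only an illustration of that matching step with hypothetical tensor names and dimensions; it is not the actual 3D-STMN code.

import torch

# Hypothetical sizes: S superpoints, T text tokens, C shared embedding dim.
S, T, C = 2048, 20, 256
superpoint_feats = torch.randn(S, C)   # pooled features of semantically grouped points
text_feats = torch.randn(T, C)         # word-level features of the referring expression

# Directly correlate language with superpoints (no instance proposals).
# Here each superpoint is scored against a pooled sentence embedding.
sentence_feat = text_feats.mean(dim=0)                # (C,)
scores = superpoint_feats @ sentence_feat / C ** 0.5  # (S,)
superpoint_mask = torch.sigmoid(scores) > 0.5         # binary mask over superpoints

# A point-level mask then follows by broadcasting each superpoint's decision
# to the points it contains (the superpoint-to-point mapping is omitted here).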

Installation

Requirements

The following installation assumes Python 3.8, PyTorch 1.12.1, and CUDA 11.3.
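A quick way to confirm that your environment matches these versions (a convenience check, not part of the repo):

import torch

print(torch.__version__)          # expected to start with 1.12.1
print(torch.version.cuda)         # expected 11.3
print(torch.cuda.is_available())  # training requires a CUDA-capable GPU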

Data Preparation

ScanNet v2 dataset

Download the ScanNet v2 dataset.

Put the downloaded scans folder as follows.

3D-STMN
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ scannetv2
β”‚   β”‚   β”œβ”€β”€ scans

Split and preprocess point cloud data

cd data/scannetv2
bash prepare_data.sh

The script splits the data into train/val folders and preprocesses it. After running the script, the ScanNet dataset structure should look like below.

3D-STMN
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ scannetv2
β”‚   β”‚   β”œβ”€β”€ scans
β”‚   β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ val

ScanRefer dataset

Download ScanRefer annotations following the instructions.

Put the downloaded ScanRefer folder as follows.

3D-STMN
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ ScanRefer
β”‚   β”‚   β”œβ”€β”€ ScanRefer_filtered_train.json
β”‚   β”‚   β”œβ”€β”€ ScanRefer_filtered_val.json
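The ScanRefer annotations are plain JSON lists, so a quick inspection like the one below can verify the download. The field names follow the public ScanRefer release (e.g. scene_id, object_id, description) and are shown here only for illustration.

import json

with open("data/ScanRefer/ScanRefer_filtered_train.json") as f:
    anns = json.load(f)

print(len(anns), "referring expressions")
sample = anns[0]
print(sample["scene_id"], sample["object_id"], sample["description"][:60])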

Preprocess textual data

python data/features/save_graph.py --split train --data_root data/ --max_len 78
python data/features/save_graph.py --split val --data_root data/ --max_len 78
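save_graph.py builds dependency graphs of the referring expressions (capped at 78 tokens) that later drive the DDI module. The snippet below only illustrates what such a dependency structure looks like, using spaCy as a stand-in parser; the actual script may rely on a different parser and output format.

import spacy  # illustrative parser; the repo's save_graph.py may use a different toolkit

nlp = spacy.load("en_core_web_sm")
expr = "the brown chair next to the round table"
doc = nlp(expr)

# Edges of the dependency tree: (head token, relation, dependent token).
for tok in doc:
    print(f"{tok.head.text:>8} --{tok.dep_}--> {tok.text}")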

Pretrained Backbone

Download the SPFormer pretrained model (we only use the Sparse 3D U-Net backbone for training).

Move the pretrained model to the backbones folder.

mkdir backbones
mv ${Download_PATH}/sp_unet_backbone.pth backbones/
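A minimal sketch of how such a checkpoint is typically inspected before being loaded into a model (the actual wrapper keys and model class used by 3D-STMN may differ):

import torch

# Load the pretrained Sparse 3D U-Net weights on CPU for inspection.
state = torch.load("backbones/sp_unet_backbone.pth", map_location="cpu")

# Checkpoints are often wrapped, e.g. {"net": state_dict}; unwrap if needed
# before calling model.load_state_dict(...).
state_dict = state.get("net", state) if isinstance(state, dict) else state
print(f"{len(state_dict)} parameter tensors in the backbone checkpoint")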

Training

For single GPU (32G):

bash scripts/train.sh

For multi-GPU (11G * 4 or 24G * 2):

bash scripts/train_multi_gpu.sh
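Before launching the multi-GPU script, it can help to confirm how many GPUs (and how much memory) PyTorch can see; this is only a convenience check, unrelated to the script itself.

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")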

Inference

Download the 3D-STMN pretrained model and move it to the checkpoints folder.

bash scripts/test.sh
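Results for 3D-RES are reported in mIoU over predicted and ground-truth point masks. The snippet below is a generic per-sample IoU computation for binary masks, shown only to clarify the metric; the repo's own evaluation code may differ in details.

import torch

def mask_iou(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """IoU between two binary point masks of shape (num_points,)."""
    pred, gt = pred.bool(), gt.bool()
    inter = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    return inter / union if union > 0 else 1.0  # empty-vs-empty counts as a perfect match

# mIoU is the mean of mask_iou over all referring expressions in the split.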

Citation

If you find this work useful in your research, please cite:

@misc{2308.16632,
  author = {Changli Wu and Yiwei Ma and Qi Chen and Haowei Wang and Gen Luo and Jiayi Ji and Xiaoshuai Sun},
  title  = {3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Referring Expression Segmentation},
  year   = {2023},
  eprint = {arXiv:2308.16632},
}

Acknowledgement

Sincere thanks to the SoftGroup, SSTNet, and SPFormer repos. This repo is built upon them.