Memory-based Adapters for Online 3D Scene Perception

Introduction

This repo contains the PyTorch implementation of the paper Memory-based Adapters for Online 3D Scene Perception, built on MMDetection3D. See here for a Chinese introduction (中文解读).

Memory-based Adapters for Online 3D Scene Perception
Xiuwei Xu*, Chong Xia*, Ziwei Wang, Linqing Zhao, Yueqi Duan, Jie Zhou, Jiwen Lu

teaser

We propose a model- and task-agnostic plug-and-play module that converts offline 3D scene perception models (which take reconstructed point clouds as input) into online perception models (which take streaming RGB-D videos as input).
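The core idea is that an online model keeps a persistent memory of scene features which is updated frame by frame, so each prediction can draw on all previously observed views without re-processing the whole sequence. A minimal NumPy sketch of this idea (the class name and the moving-average fusion rule here are purely illustrative, not the paper's actual adapter):

```python
import numpy as np

class StreamingFeatureMemory:
    """Toy memory that accumulates per-frame features online.

    Each call fuses the incoming frame's features with the stored
    memory, mimicking how an online model reuses past observations
    instead of re-reconstructing the scene from scratch.
    """

    def __init__(self, momentum: float = 0.5):
        self.momentum = momentum  # weight given to the stored memory
        self.state = None         # lazily initialized on the first frame

    def update(self, frame_features: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = frame_features.astype(float).copy()
        else:
            # Exponential moving average: old memory decays, new frame mixes in.
            self.state = (self.momentum * self.state
                          + (1 - self.momentum) * frame_features)
        return self.state

# Stream three frames of 4-dim features; each output depends on all frames so far.
memory = StreamingFeatureMemory(momentum=0.5)
for frame in [np.zeros(4), np.ones(4), np.full(4, 2.0)]:
    fused = memory.update(frame)
print(fused)  # → [1.25 1.25 1.25 1.25]
```

The real adapters operate on multi-level image and point cloud features with learned aggregation, but the streaming update pattern is the same.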

News

Method

Overall pipeline of our work:

overview

Memory-based adapters can be easily inserted into existing architecture by a few lines in config:

model = dict(
    type='SingleViewModel',
    img_memory=dict(type='MultilevelImgMemory', ...),
    memory=dict(type='MultilevelMemory', ...),
    ...)
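Such config dicts work because MMDetection3D-style frameworks build modules from a registry: the 'type' key names a registered class and the remaining keys become constructor arguments. A simplified sketch of that pattern (this is not MMDetection3D's actual code, and MultilevelMemory here is a stand-in stub):

```python
# Simplified registry pattern behind MMDetection3D-style configs:
# cfg['type'] names a registered class; other keys are kwargs.
REGISTRY = {}

def register(cls):
    """Record a class under its own name so configs can refer to it."""
    REGISTRY[cls.__name__] = cls
    return cls

def build_from_cfg(cfg: dict):
    cfg = dict(cfg)                  # avoid mutating the caller's config
    cls = REGISTRY[cfg.pop('type')]  # look up the class named by 'type'
    return cls(**cfg)                # remaining keys are constructor args

@register
class MultilevelMemory:              # stand-in for the real adapter module
    def __init__(self, vmp_layer=(0, 1, 2, 3)):
        self.vmp_layer = vmp_layer

adapter = build_from_cfg(dict(type='MultilevelMemory', vmp_layer=(2, 3)))
print(adapter.vmp_layer)  # → (2, 3)
```

This is why adding the adapters only requires a few config lines: no model code changes, just new entries resolved through the registry.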

Getting Started

For data preparation and environment setup:

For training, evaluation and visualization:

Main Results

We provide the checkpoints for quick reproduction of the results reported in the paper.

3D semantic segmentation on ScanNet and SceneNN:

| Method | Type | Dataset | mIoU | mAcc | Downloads |
| :----: | :----: | :----: | :----: | :----: | :----: |
| MkNet | Offline | ScanNet | 71.6 | 80.4 | - |
| MkNet-SV | Online | ScanNet | 68.8 | 77.7 | model |
| MkNet-SV + Ours | Online | ScanNet | 72.7 | 84.1 | model |
| MkNet-SV | Online | SceneNN | 48.4 | 61.2 | model |
| MkNet-SV + Ours | Online | SceneNN | 56.7 | 70.1 | model |

3D object detection on ScanNet:

| Method | Type | mAP@25 | mAP@50 | Downloads |
| :----: | :----: | :----: | :----: | :----: |
| FCAF3D | Offline | 70.7 | 56.0 | - |
| FCAF3D-SV | Online | 41.9 | 20.6 | model |
| FCAF3D-SV + Ours | Online | 70.5 | 49.9 | model |

3D instance segmentation on ScanNet:

| Method | Type | mAP@25 | mAP@50 | Downloads |
| :----: | :----: | :----: | :----: | :----: |
| TD3D | Offline | 81.3 | 71.1 | - |
| TD3D-SV | Online | 53.7 | 36.8 | model |
| TD3D-SV + Ours | Online | 71.3 | 60.5 | model |
<!--
Performance of different 3D scene perception methods on the ScanNet online benchmark. We report mIoU / mAcc for semantic segmentation and mAP@25 / mAP@50 for object detection and instance segmentation. NS is the number of sequences and LS is the length of each sequence.

| Task | Method | Type | NS 1 | NS 5 | NS 10 | LS 5 | LS 10 | LS 15 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Semseg | MkNet | Offline | 63.7/73.5 | 62.7/72.8 | 58.9/69.4 | 59.3/69.8 | 63.0/73.0 | 63.5/73.7 |
| Semseg | MkNet-SV | Online | 63.3/74.3 | 63.3/74.3 | 63.3/74.3 | 63.3/74.3 | 63.3/74.3 | 63.3/74.3 |
| Semseg | MkNet-SV + Ours | Online | 69.1/82.2 | 66.8/80.0 | 65.9/79.2 | 65.9/79.3 | 66.8/80.1 | 67.1/80.4 |
| Detection | FCAF3D | Offline | 57.0/40.6 | 41.1/25.2 | 34.6/19.3 | 28.4/15.2 | 33.9/19.4 | 37.7/22.8 |
| Detection | FCAF3D-SV | Online | 41.9/20.6 | 29.8/13.3 | 27.0/11.5 | 24.4/10.1 | 26.2/11.0 | 27.6/12.1 |
| Detection | FCAF3D-SV + Ours | Online | 70.5/49.9 | 58.7/37.7 | 56.2/34.3 | 53.1/31.2 | 54.9/33.8 | 56.1/35.6 |
| Insseg | TD3D | Offline | 64.0/50.8 | 61.6/49.7 | 59.4/48.4 | 59.0/47.9 | 61.4/49.8 | 61.7/49.8 |
| Insseg | TD3D-SV | Online | 53.7/36.8 | 54.2/41.6 | 57.0/46.3 | 56.4/45.5 | 53.9/40.9 | 52.6/39.5 |
| Insseg | TD3D-SV + Ours | Online | 71.3/60.5 | 64.7/55.2 | 64.2/55.0 | 64.0/54.7 | 64.6/55.1 | 63.9/54.3 |
-->

Visualization results:

vis

Tips

If your GPU resources are limited, you can insert adapters at fewer feature levels. For example, change:

img_memory=dict(type='MultilevelImgMemory', ada_layer=(0,1,2,3)),
memory=dict(type='MultilevelMemory', vmp_layer=(0,1,2,3)),

to:

img_memory=dict(type='MultilevelImgMemory', ada_layer=(2,3)),
memory=dict(type='MultilevelMemory', vmp_layer=(2,3)),

Then the image and point cloud adapters will only be inserted after the two highest feature levels (for a four-level backbone), which reduces GPU memory consumption.
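The effect of ada_layer / vmp_layer can be pictured as a simple filter over backbone levels: only the listed levels get an adapter, and the rest pass through untouched. An illustrative sketch (DummyAdapter and build_adapters are hypothetical names, not the repo's API):

```python
class DummyAdapter:
    """Placeholder for a per-level memory adapter (illustrative only)."""
    def __init__(self, level: int):
        self.level = level

def build_adapters(num_levels: int = 4, vmp_layer=(2, 3)):
    # One adapter per selected level; unlisted levels have no adapter,
    # so their features flow through the backbone unchanged.
    return {lvl: DummyAdapter(lvl) for lvl in range(num_levels)
            if lvl in vmp_layer}

adapters = build_adapters(num_levels=4, vmp_layer=(2, 3))
print(sorted(adapters))  # → [2, 3]: only the two highest levels get adapters
```

Fewer adapted levels means fewer memory modules to store and backpropagate through, which is where the GPU savings come from.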

Acknowledgement

We thank FCAF3D for its flexible codebase and ScanNet and SceneNN for the valuable datasets.

Bibtex

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{xu2024online, 
      title={Memory-based Adapters for Online 3D Scene Perception}, 
      author={Xiuwei Xu and Chong Xia and Ziwei Wang and Linqing Zhao and Yueqi Duan and Jie Zhou and Jiwen Lu},
      journal={arXiv preprint arXiv:2403.06974},
      year={2024}
}