<div align="center">

Symphonies (Scene-from-Insts) 🎻

Symphonize 3D Semantic Scene Completion with Contextual Instance Queries

Haoyi Jiang <sup>1,✢</sup>, Tianheng Cheng <sup>1,✢</sup>, Naiyu Gao <sup>2</sup>, Haoyang Zhang <sup>2</sup>, Tianwei Lin <sup>2</sup>, Wenyu Liu <sup>1</sup>, Xinggang Wang <sup>1,✉️</sup> <br> <sup>1</sup> School of EIC, HUST, <sup>2</sup> Horizon Robotics

CVPR 2024

</div>


TL;DR: Our paper delves into enhancing semantic scene completion (SSC) with instance-centric representations. We propose a novel paradigm that integrates instance queries to capture instance semantics and global context, achieving state-of-the-art results of 15.04 and 18.58 mIoU on SemanticKITTI and SSCBench-KITTI-360, respectively.

This project is built upon TmPL, a template for rapid and flexible deep-learning experimentation based on Lightning and Hydra.


News

Preliminary

Installation

  1. Install PyTorch and Torchvision referring to https://pytorch.org/get-started/locally/.

  2. Install MMDetection referring to https://mmdetection.readthedocs.io/en/latest/get_started.html#installation.

  3. Install the rest of the requirements with pip.

    pip install -r requirements.txt
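Assuming a CUDA-enabled environment, the three steps above can be sketched as follows. The exact PyTorch command depends on your CUDA version (pick it from the PyTorch "get started" page), and installing MMDetection via OpenMMLab's `mim` follows the MMDetection installation guide:

```shell
# 1. PyTorch and Torchvision -- choose the command matching your CUDA
#    version at https://pytorch.org/get-started/locally/
pip install torch torchvision

# 2. MMDetection, installed via OpenMMLab's mim as its docs recommend
pip install -U openmim
mim install mmengine "mmcv>=2.0.0"
mim install mmdet

# 3. Remaining project requirements
pip install -r requirements.txt
```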
    

Dataset Preparation

1. Download the Data

SemanticKITTI: Download the RGB images, calibration files, and preprocess the labels, referring to the documentation of VoxFormer or MonoScene.

SSCBench-KITTI-360: Refer to https://github.com/ai4ce/SSCBench/tree/main/dataset/KITTI-360.

2. Generate Depth Predictions

SemanticKITTI: Generate depth predictions with the pre-trained MobileStereoNet, following VoxFormer: https://github.com/NVlabs/VoxFormer/tree/main/preprocess#3-image-to-depth.

SSCBench-KITTI-360: Follow the same procedure as for SemanticKITTI, but make sure to adapt the disparity values to KITTI-360's camera calibration.
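Adapting the disparity values comes down to the standard stereo relation depth = f·B / disparity, where f is the focal length in pixels and B the stereo baseline in meters; both differ between SemanticKITTI and KITTI-360. A minimal sketch (the focal length and baseline below are illustrative placeholders, not either dataset's actual calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (in pixels) to metric depth.

    depth = focal_px * baseline_m / disparity; zero-disparity pixels
    (no stereo match) are kept at depth 0 as an "invalid" marker.
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Placeholder calibration values -- substitute the target dataset's own.
disp = np.array([[0.0, 10.0], [20.0, 40.0]])
depth = disparity_to_depth(disp, focal_px=552.55, baseline_m=0.6)
```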

Pretrained Weights

The pretrained weights of MaskDINO can be downloaded here.

Usage

  1. Setup

    export PYTHONPATH=`pwd`:$PYTHONPATH
    
  2. Training

    python tools/train.py [--config-name config[.yaml]] [trainer.devices=4] \
        [+data_root=$DATA_ROOT] [+label_root=$LABEL_ROOT] [+depth_root=$DEPTH_ROOT]
    
  3. Testing

    Generate the outputs for submission on the evaluation server:

    python tools/test.py [+ckpt_path=...]
    
  4. Visualization

    1. Generating outputs

      python tools/generate_outputs.py [+ckpt_path=...]
      
    2. Visualization

      python tools/visualize.py [+path=...]
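Putting the steps above together, a typical end-to-end session might look like the following. The dataset paths, checkpoint path, and device count are placeholders; the `+key=value` overrides use Hydra's command-line syntax:

```shell
export PYTHONPATH=`pwd`:$PYTHONPATH

# Train on 4 GPUs, pointing Hydra at the prepared data (paths are placeholders)
python tools/train.py trainer.devices=4 \
    +data_root=/path/to/semantic_kitti \
    +label_root=/path/to/labels \
    +depth_root=/path/to/depth

# Evaluate a checkpoint and generate outputs for the submission server
python tools/test.py +ckpt_path=/path/to/checkpoint.ckpt

# Dump predictions, then render them
python tools/generate_outputs.py +ckpt_path=/path/to/checkpoint.ckpt
python tools/visualize.py +path=/path/to/outputs
```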
      

Results

  1. SemanticKITTI

    | Method | Split | IoU | mIoU | Download |
    | :--- | :--- | :--- | :--- | :--- |
    | Symphonies | val | 41.92 | 14.89 | log / model |
    | Symphonies | test | 42.19 | 15.04 | output |
  2. KITTI-360

    | Method | Split | IoU | mIoU | Download |
    | :--- | :--- | :--- | :--- | :--- |
    | Symphonies | test | 44.12 | 18.58 | log / model |
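In the tables above, IoU is the class-agnostic scene-completion (occupancy) score and mIoU averages the per-class IoU over the semantic classes. A minimal sketch of how per-class IoU and mIoU are derived from a voxel-level confusion matrix (illustrative only, not the repository's evaluation code):

```python
import numpy as np

def miou_from_confusion(conf):
    """Per-class IoU and their mean from a confusion matrix.

    conf[i, j] counts voxels of ground-truth class i predicted as class j.
    IoU_c = TP_c / (TP_c + FP_c + FN_c); classes absent from both
    prediction and ground truth score 0 here.
    """
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp  # predicted as c but labeled otherwise
    fn = conf.sum(axis=1) - tp  # labeled c but predicted otherwise
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)
    return iou, iou.mean()

# Tiny two-class example
conf = np.array([[2, 1],
                 [1, 2]])
per_class_iou, miou = miou_from_confusion(conf)
```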

Citation

If you find our paper and code useful for your research, please consider giving this repo a star :star: or citing :pencil::

@inproceedings{jiang2023symphonies,
      title={Symphonize 3D Semantic Scene Completion with Contextual Instance Queries},
      author={Haoyi Jiang and Tianheng Cheng and Naiyu Gao and Haoyang Zhang and Tianwei Lin and Wenyu Liu and Xinggang Wang},
      booktitle={CVPR},
      year={2024}
}

Acknowledgements

The development of this project was inspired and informed by MonoScene, MaskDINO, and VoxFormer. We are grateful to build upon the pioneering work of these projects.

License

Released under the MIT License.