<div align="center">

# Symphonies (Scene-from-Insts) 🎻

## Symphonize 3D Semantic Scene Completion with Contextual Instance Queries

Haoyi Jiang <sup>1,✢</sup>, Tianheng Cheng <sup>1,✢</sup>, Naiyu Gao <sup>2</sup>, Haoyang Zhang <sup>2</sup>, Tianwei Lin <sup>2</sup>, Wenyu Liu <sup>1</sup>, Xinggang Wang <sup>1,✉️</sup> <br> <sup>1</sup> School of EIC, HUST, <sup>2</sup> Horizon Robotics

</div>

**TL;DR**: Our paper delves into enhancing SSC through instance-centric representations. We propose a novel paradigm that integrates instance queries to facilitate instance semantics and capture global context. Our approach achieves SOTA results of 15.04 and 18.58 mIoU on SemanticKITTI and SSCBench-KITTI-360, respectively.
This project is built upon TmPL, a template for rapid and flexible DL experimentation based on Lightning and Hydra.
## News
- Feb 27 '24: Our paper has been accepted at CVPR 2024. 🎉
- Nov 22 '23: We have updated our paper on arXiv with the latest results.
- Sep 18 '23: We have achieved state-of-the-art results on the recently published SSCBench-KITTI-360 benchmark.
- Jun 28 '23: We have released the arXiv paper of Symphonies.
## Preliminary

### Installation

- Install PyTorch and Torchvision referring to https://pytorch.org/get-started/locally/.
- Install MMDetection referring to https://mmdetection.readthedocs.io/en/latest/get_started.html#installation.
- Install the rest of the requirements with pip:

  ```shell
  pip install -r requirements.txt
  ```
### Dataset Preparation

1. Download the data

   - **SemanticKITTI**: Download the RGB images and calibration files, and preprocess the labels, referring to the documentation of VoxFormer or MonoScene.
   - **SSCBench-KITTI-360**: Refer to https://github.com/ai4ce/SSCBench/tree/main/dataset/KITTI-360.

2. Generate depth predictions

   - **SemanticKITTI**: Generate depth predictions with the pre-trained MobileStereoNet, referring to VoxFormer: https://github.com/NVlabs/VoxFormer/tree/main/preprocess#3-image-to-depth.
   - **SSCBench-KITTI-360**: Follow the same procedure as for SemanticKITTI, but make sure to adapt the disparity values.
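Stereo networks such as MobileStereoNet predict disparity, and depth follows from the standard pinhole relation `depth = focal * baseline / disparity` — which is why the disparity values must be adapted when the stereo rig (e.g. the KITTI-360 baseline) differs. A minimal sketch of that conversion; the calibration numbers below are illustrative placeholders, so read the real focal length and baseline from each sequence's calibration files:

```python
def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Convert a stereo disparity (pixels) to metric depth via depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Placeholder calibration values for illustration only; the actual focal
# length and stereo baseline come from the dataset's calibration files.
FOCAL_PX = 721.5
BASELINE_M = 0.54

depth = disparity_to_depth(10.0, FOCAL_PX, BASELINE_M)
```

A larger baseline yields a proportionally larger depth for the same disparity, which is the adaptation the KITTI-360 preprocessing has to account for.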
### Pretrained Weights

The pretrained weights of MaskDINO can be downloaded here.
## Usage

- Setup

  ```shell
  export PYTHONPATH=`pwd`:$PYTHONPATH
  ```
- Training

  ```shell
  python tools/train.py [--config-name config[.yaml]] [trainer.devices=4] \
      [+data_root=$DATA_ROOT] [+label_root=$LABEL_ROOT] [+depth_root=$DEPTH_ROOT]
  ```

  - Override the default config file with `--config-name`.
  - You can also override any value in the loaded config from the command line; refer to the Hydra documentation for more information.
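The `key=value` tokens above are Hydra-style dotted overrides: `trainer.devices=4` rewrites an entry the config already defines, while a leading `+` (as in `+data_root=...`) appends a key the config does not yet have. A rough stdlib-only sketch of that behaviour — not Hydra's actual implementation, and the value parsing here is deliberately naive:

```python
def apply_overrides(cfg: dict, overrides: list) -> dict:
    """Apply Hydra-style dotted 'key=value' overrides to a nested dict (illustrative only)."""
    for token in overrides:
        key, _, raw = token.partition("=")
        additive = key.startswith("+")  # '+' introduces a brand-new key
        key = key.lstrip("+")
        node = cfg
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        if not additive and leaf not in node:
            raise KeyError(f"unknown key {key!r}; prefix it with '+' to add it")
        try:
            value = int(raw)  # Hydra does proper typed conversion; ints suffice here
        except ValueError:
            value = raw
        node[leaf] = value
    return cfg

cfg = {"trainer": {"devices": 1}}
apply_overrides(cfg, ["trainer.devices=4", "+data_root=/data/kitti"])
# cfg now holds {"trainer": {"devices": 4}, "data_root": "/data/kitti"}
```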
- Testing

  Generate the outputs for submission to the evaluation server:

  ```shell
  python tools/test.py [+ckpt_path=...]
  ```
- Visualization

  - Generating outputs

    ```shell
    python tools/generate_outputs.py [+ckpt_path=...]
    ```

  - Visualization

    ```shell
    python tools/visualize.py [+path=...]
    ```
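The generated outputs are per-scene voxel label grids; SemanticKITTI's completion volume is 256×256×32 voxels, each carrying a semantic label (0 = empty). As a rough illustration of addressing such a flattened grid — the row-major layout and uint16 labels here are assumptions for the sketch, so check `tools/generate_outputs.py` for the project's actual format:

```python
import array

DIMS = (256, 256, 32)  # SemanticKITTI completion volume (X, Y, Z)

def voxel_index(x: int, y: int, z: int, dims=DIMS) -> int:
    """Flatten an (x, y, z) voxel coordinate into a row-major linear index."""
    _, dy, dz = dims
    return (x * dy + y) * dz + z

# A toy flattened grid of uint16 labels: 0 = empty, nonzero = a semantic class.
grid = array.array("H", bytes(2 * DIMS[0] * DIMS[1] * DIMS[2]))
grid[voxel_index(128, 128, 16)] = 9  # mark a single voxel with class id 9
```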
## Results

- SemanticKITTI

  | Method | Split | IoU | mIoU | Download |
  | :--: | :--: | :--: | :--: | :--: |
  | Symphonies | val | 41.92 | 14.89 | log / model |
  | Symphonies | test | 42.19 | 15.04 | output |

- KITTI-360

  | Method | Split | IoU | mIoU | Download |
  | :--: | :--: | :--: | :--: | :--: |
  | Symphonies | test | 44.12 | 18.58 | log / model |
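For context on the two metrics reported above: in SSC benchmarks, IoU measures scene completion (occupied vs. empty voxels, ignoring class), while mIoU averages per-class IoU over the semantic classes. A schematic of the metric definitions only — the official benchmark scripts additionally mask out invalid/unknown voxels:

```python
def class_iou(pred, gt, cls) -> float:
    """Intersection-over-union for one label id over paired voxel label lists."""
    inter = sum(p == cls and g == cls for p, g in zip(pred, gt))
    union = sum(p == cls or g == cls for p, g in zip(pred, gt))
    return inter / union if union else 0.0

def scene_metrics(pred, gt, num_classes: int):
    """Return (completion IoU, mIoU); label 0 denotes an empty voxel."""
    # Scene-completion IoU: binarise labels to occupied (!= 0) vs. empty.
    occ_pred = [int(p != 0) for p in pred]
    occ_gt = [int(g != 0) for g in gt]
    completion_iou = class_iou(occ_pred, occ_gt, 1)
    # mIoU: mean IoU over the semantic (non-empty) classes 1..num_classes-1.
    miou = sum(class_iou(pred, gt, c) for c in range(1, num_classes)) / (num_classes - 1)
    return completion_iou, miou
```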
## Citation
If you find our paper and code useful for your research, please consider giving this repo a star :star: or citing :pencil::
```BibTeX
@inproceedings{jiang2023symphonies,
  title={Symphonize 3D Semantic Scene Completion with Contextual Instance Queries},
  author={Haoyi Jiang and Tianheng Cheng and Naiyu Gao and Haoyang Zhang and Tianwei Lin and Wenyu Liu and Xinggang Wang},
  booktitle={CVPR},
  year={2024}
}
```
## Acknowledgements

The development of this project is inspired and informed by MonoScene, MaskDINO, and VoxFormer. We are grateful to build upon the pioneering work of these projects.
## License
Released under the MIT License.