PointSAM-for-MixSup

MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection (ICLR 2024)

Yuxue Yang, Lue Fan†, Zhaoxiang Zhang† (†: Corresponding Authors)

[ :bookmark_tabs: Paper ] [ :octocat: GitHub Repo ] [ :paperclip: BibTeX ]

Teaser Figure

A good LiDAR-based detector needs massive semantic labels for difficult semantic learning but only a few accurate labels for geometry estimation.

<p float="left"> <img src="assets/PointSAM.png" width="53%"> <img src="assets/SAR.png" width="46%"> </p>

:raising_hand: Talk is cheap, show me the samples!

| nuScenes Sample Token | 1ac0914c98b8488cb3521efeba354496 | fd8420396768425eabec9bdddf7e64b6 |
| :---: | :---: | :---: |
| PointSAM | Qualitative Results | Qualitative Results |
| Ground Truth | Qualitative Results | Qualitative Results |

:star2: Panoptic segmentation performance for thing classes on nuScenes validation split

| Methods | PQ<sup>Th</sup> | SQ<sup>Th</sup> | RQ<sup>Th</sup> |
| :--- | :---: | :---: | :---: |
| GP-S3Net | 56.0 | 85.3 | 65.2 |
| SMAC-Seg | 65.2 | 87.1 | 74.2 |
| Panoptic-PolarNet | 59.2 | 84.1 | 70.3 |
| SCAN | 60.6 | 85.7 | 70.2 |
| PointSAM (Ours) | 63.7 | 82.6 | 76.9 |

Installation

PointSAM

Step 1. Create a conda environment and activate it.

conda create --name MixSup python=3.8 -y
conda activate MixSup

Step 2. Install PyTorch following the official instructions. The code is tested with PyTorch 1.9.1 and CUDA 11.1.

pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html

Step 3. Install Segment Anything and torch_scatter.

pip install git+https://github.com/facebookresearch/segment-anything.git
pip install https://data.pyg.org/whl/torch-1.9.0%2Bcu111/torch_scatter-2.0.9-cp38-cp38-linux_x86_64.whl

Step 4. Install other dependencies.

pip install -r requirements.txt
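
As a quick sanity check of the environment (a minimal sketch; it only verifies that the packages installed above import and that CUDA is visible to PyTorch):

# sanity_check.py: verify the packages installed above (minimal sketch)
import torch
import torch_scatter
import segment_anything

print("PyTorch:", torch.__version__)               # expected: 1.9.1+cu111
print("CUDA available:", torch.cuda.is_available())
print("torch_scatter:", torch_scatter.__version__)
print("segment-anything loaded from:", segment_anything.__file__)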

Dataset Preparation

nuScenes

Download the nuScenes Full dataset and nuScenes-panoptic (for evaluation) from the official website, then extract and organize the data into the following structure:

PointSAM-for-MixSup
└── data
    └── nuscenes
        ├── maps
        ├── panoptic
        ├── samples
        ├── sweeps
        └── v1.0-trainval

Note: v1.0-trainval/category.json and v1.0-trainval/panoptic.json in nuScenes-panoptic will replace the original v1.0-trainval/category.json and v1.0-trainval/panoptic.json of the Full dataset.
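
To confirm that the layout above (including the nuScenes-panoptic files) is in place, a small check along these lines can be used; it is only a sketch of the expected paths:

# check_nuscenes_layout.py: verify the expected directory structure (sketch)
from pathlib import Path

root = Path("data/nuscenes")
expected = [
    "maps", "panoptic", "samples", "sweeps",
    "v1.0-trainval/category.json",   # replaced by the nuScenes-panoptic version
    "v1.0-trainval/panoptic.json",   # shipped with nuScenes-panoptic
]
for rel in expected:
    path = root / rel
    print("ok     " if path.exists() else "MISSING", path)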

Getting Started

First download the model checkpoints, then run the following commands to reproduce the results in the paper:

# single-gpu
bash run.sh

# multi-gpu
bash run_dist.sh

Note:

  1. The default setting for run_dist.sh is to use 8 GPUs. If you want to use fewer GPUs, modify the NUM_GPUS argument in run_dist.sh.
  2. You can set SAMPLE_INDICES to either scripts/indices_train.npy or scripts/indices_val.npy to run PointSAM on the train or val split of nuScenes. The default setting segments the val split and evaluates the results on the panoptic segmentation task.
  3. Before running the scripts, make sure that you have at least 850 MB of free space in the OUT_DIR folder for the val split and 4 GB for the train split.
  4. segment3D.py is the main script for PointSAM. The --for_eval argument generates labels in the same format as nuScenes-panoptic for evaluation; it is not required for MixSup. If you only want to use PointSAM for MixSup, remove --for_eval from run.sh or run_dist.sh. We also provide a script to convert the labels generated by PointSAM between the .npz format used for nuScenes-panoptic evaluation and the .bin format used for MixSup (a sketch for inspecting a .npz label follows this list).
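
For reference, a label written with --for_eval can be inspected as sketched below. This assumes the standard nuScenes-panoptic encoding (a uint16 array stored under the data key of the .npz file, with label // 1000 giving the category index and label % 1000 the instance id); the file path is illustrative.

# inspect_label.py: decode one PointSAM label in nuScenes-panoptic format (sketch)
import numpy as np

label_path = "OUT_DIR/example_lidar_token_panoptic.npz"   # illustrative path
panoptic = np.load(label_path)["data"]     # one uint16 value per LiDAR point

semantic = panoptic // 1000                # nuScenes category index
instance = panoptic % 1000                 # per-category instance id
print("points:", panoptic.shape[0])
print("categories present:", np.unique(semantic))
print("instances (non-zero ids):", len(np.unique(instance[instance > 0])))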

Model Checkpoints

We adopt ViT-H SAM as the segmentation model for PointSAM and use a nuImages pre-trained HTC to assign semantics to the instance masks.
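
For reference, the ViT-H SAM model is loaded through the official segment-anything API roughly as follows. This is a simplified sketch of constructing an automatic mask generator, not the exact PointSAM pipeline, and the checkpoint path assumes the ckpt/ layout described below.

# sam_sketch.py: build a ViT-H SAM mask generator (simplified, not the full pipeline)
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="ckpt/sam_vit_h_4b8939.pth")
sam.to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

image = np.zeros((900, 1600, 3), dtype=np.uint8)   # placeholder for a nuScenes camera frame (RGB, uint8)
masks = mask_generator.generate(image)             # list of dicts with "segmentation", "area", "bbox", ...
print("instance masks:", len(masks))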

Click the following links to download the model checkpoints and put them in the ckpt/ folder so that the paths are consistent with the configuration in configs/cfg_PointSAM.py.
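
As a sketch, the ViT-H SAM weights can be fetched from the official Segment Anything release as shown below; the nuImages pre-trained HTC checkpoint must be downloaded separately and placed in ckpt/ as well, with the filename expected by configs/cfg_PointSAM.py.

# fetch_sam_ckpt.py: download the ViT-H SAM checkpoint into ckpt/ (sketch)
import urllib.request
from pathlib import Path

SAM_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"

ckpt_dir = Path("ckpt")
ckpt_dir.mkdir(exist_ok=True)
target = ckpt_dir / "sam_vit_h_4b8939.pth"
if not target.exists():
    urllib.request.urlretrieve(SAM_URL, str(target))   # roughly a 2.4 GB download
print("saved to", target)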

TODO

Citation

If you find our work helpful, please consider citing it as follows.

@inproceedings{yang2024mixsup,
    title={MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection},
    author={Yang, Yuxue and Fan, Lue and Zhang, Zhaoxiang},
    booktitle={ICLR},
    year={2024},
}

Acknowledgement

This project is based on the following repositories.