
This repo is the official implementation of OVO: Open-Vocabulary Occupancy.

OVO: Open-Vocabulary Occupancy

Zhiyu Tan*, Zichao Dong*, Cheng Zhang, Weikun Zhang, Hang Ji, Hao Li†

Introduction

Semantic occupancy prediction aims to infer dense geometry and semantics of the surroundings so that an autonomous agent can operate safely in the 3D environment. Existing occupancy prediction methods are almost entirely trained on human-annotated volumetric data. Although such 3D annotations are of high quality, generating them is laborious and costly, which restricts existing methods to the few object categories present in the training dataset.

We propose Open Vocabulary Occupancy (OVO), a novel approach that enables semantic occupancy prediction for arbitrary classes without the need for 3D annotations during training. The keys to our approach are (1) knowledge distillation from a pre-trained 2D open-vocabulary segmentation model to the 3D occupancy network, and (2) pixel-voxel filtering for high-quality training data generation. The resulting framework is simple, compact, and compatible with most state-of-the-art semantic occupancy prediction models.
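
For intuition, the distillation objective can be sketched as below. This is a minimal illustration under assumed names and shapes (voxel_feats, pixel_embeds, and pairs are not the repository's actual interface); the real pipeline additionally applies the pixel-voxel filtering described above before computing the loss.

# distill_sketch.py (illustrative only; not the repository's actual code)
import torch
import torch.nn.functional as F

def distillation_loss(voxel_feats, pixel_embeds, pairs):
    """Align 3D voxel features with 2D open-vocabulary pixel embeddings.

    voxel_feats:  (N, C) features from the 3D occupancy network.
    pixel_embeds: (H, W, C) per-pixel embeddings from a frozen 2D
                  open-vocabulary segmentation model (e.g. LSeg).
    pairs:        (M, 3) long tensor of (voxel_idx, row, col)
                  correspondences kept after pixel-voxel filtering.
    """
    v3d = voxel_feats[pairs[:, 0]]                # (M, C) 3D features
    v2d = pixel_embeds[pairs[:, 1], pairs[:, 2]]  # (M, C) 2D targets
    # Cosine distance pulls each voxel feature toward its paired pixel
    # embedding, transferring the 2D open-vocabulary space into 3D.
    return (1.0 - F.cosine_similarity(v3d, v2d, dim=-1)).mean()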

Preparing OVO

Installation

  1. Create conda environment:

    $ conda create -y -n ovo python=3.7
    $ conda activate ovo
    
  2. Install PyTorch:

    $ conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
    
  3. Install the additional dependencies:

    $ pip install -r requirements.txt
    
  4. Install tbb:

    $ conda install -c bioconda tbb=2020.2
    
  5. Downgrade torchmetrics to 0.6.0:

    $ pip install torchmetrics==0.6.0
    
  6. Finally, install OVO:

    $ pip install -e .
    

Data Preprocessing

  1. Generate LSeg embeddings.

    Refer to LSeg.

  2. Generate prompt embeddings.

    Refer to CLIP and get_prompt_embedding.py, or directly use the prompt_embeddings offered in this repository. (A rough sketch of this step is shown after this list.)

  3. Label preprocessing.

    NYUv2 open-vocabulary (ov) labels (used for training):

    Change seg_class_map in ovo/data/NYU/preprocess_ov.py. In this repository we offer an example that merges 'bed', 'table', and 'other' into a single 'other' class (see the remapping sketch after this list).

    python ovo/data/NYU/preprocess_ov.py NYU_root=/path/to/NYU_dataset/depthbin/ NYU_preprocess_root=/path/to/nyu_preprocess_ov
    

    SemanticKITTI open-vocabulary (ov) labels (used for training):

    Change learning_map_inv in ovo/data/semantic_kitti/semantic-kitti.yaml. In this repository we offer an example that merges 'car', 'road', and 'building' into a single 'road' class.

    python ovo/data/semantic_kitti/preprocess_ov.py kitti_root=/path/to/kitti_dataset/ kitti_preprocess_root=/path/to/kitti_preprocess_ov
    

    NYUv2 original (ori) labels (used for inference):

    python ovo/data/NYU/preprocess_ori.py NYU_root=/path/to/NYU_dataset/depthbin/ NYU_preprocess_root=/path/to/nyu_preprocess_ori
    

    SemanticKITTI original (ori) labels (used for inference):

    python ovo/data/semantic_kitti/preprocess_ov.py kitti_root=/path/to/kitti_dataset/ kitti_preprocess_root=/path/to/kitti_preprocess_ori
    
  4. Occlusion preprocessing.

    python ovo/occlusion_preprocess/find_occ_pairs_kitti.py /path/to/kitti_preprocess_ov
    
    python ovo/occlusion_preprocess/find_occ_pairs_nyu.py /path/to/nyu_preprocess_ov/base/NYUtrain/
    
  5. Voxel selection.

    Fill in the path parameters in ovo/data/NYU/nyu_valid_pairs.py, then run:

    python ovo/data/NYU/nyu_valid_pairs.py
    

    Fill in the path parameters in ovo/data/semantic_kitti/kitti_valid_pairs.py, then run:

    python ovo/data/semantic_kitti/kitti_valid_pairs.py
    
  6. Integrate all pre-processed data.

    Fill in the path parameters in ovo/data/NYU/prepare_total.py, then run:

    python ovo/data/NYU/prepare_total.py
    

    Fill in the path parameters in ovo/data/semantic_kitti/prepare_total.py, then run:

    python ovo/data/semantic_kitti/prepare_total.py
    
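For reference, generating the prompt embeddings in step 2 with OpenAI's CLIP package can look roughly like the sketch below. The class list, prompt template, and JSON layout are assumptions made for illustration; get_prompt_embedding.py is the authoritative version.

# prompt_embedding_sketch.py (illustrative; see get_prompt_embedding.py)
import json
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

class_names = ["bed", "table", "other"]  # hypothetical class list
tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():
    embeds = model.encode_text(tokens)
    embeds = embeds / embeds.norm(dim=-1, keepdim=True)  # L2-normalize

with open("prompt_embedding.json", "w") as f:
    json.dump({c: e.tolist() for c, e in zip(class_names, embeds.cpu())}, f)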
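
The label merging in step 3 boils down to remapping ground-truth class IDs so that the held-out (novel) classes collapse into a catch-all label and are never supervised directly. A minimal numpy sketch, with example class IDs (the real mappings live in seg_class_map and learning_map_inv):

# label_remap_sketch.py (illustrative; class IDs are examples only)
import numpy as np

# Example: send 'bed' (6) and 'table' (8) to 'other' (11) so they are
# treated as novel classes during training.
merge_map = {6: 11, 8: 11}

def remap_labels(labels, merge_map):
    lut = np.arange(labels.max() + 1)   # identity lookup table
    for src, dst in merge_map.items():
        lut[src] = dst                  # redirect merged classes
    return lut[labels]

labels = np.array([0, 6, 8, 11, 3])
print(remap_labels(labels, merge_map))  # -> [ 0 11 11 11  3]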

Training OVO

NYUv2

# train_nyu.sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python ./ovo/scripts/train_ovo.py \
dataset=NYU \
NYU_root=/path/to/NYU_dataset/depthbin/ \
NYU_preprocess_root=/path/to/nyu_preprocess_ov \
NYU_prepare_total=/path/to/nyu_preprocess_total \
logdir=./outputs \
n_gpus=8 batch_size=8

SemanticKITTI

# train_kitti.sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python ./ovo/scripts/train_ovo.py \
dataset=kitti \
kitti_root=/path/to/kitti_dataset/ \
kitti_preprocess_root=/path/to/kitti_preprocess_ov/ \
kitti_prepare_total=/path/to/kitti_preprocess_total \
logdir=./outputs \
n_gpus=8 batch_size=8

Inference

NYUv2

# infer_nyu.sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python ovo/scripts/infer_ovo.py \
dataset=NYU \
NYU_root=/path/to/NYU_dataset/depthbin/ \
NYU_preprocess_root=/path/to/nyu_preprocess_ori \
+word_path=ovo/prompt_embedding/nyu_prompt_embedding.json \
+model_path=/path/to/model_file/last.ckpt \
+output_path=/path/to/visualization_file/ \
+novel_class_lbl=[6,8,11] \
+target_lbl=11 \
n_gpus=1 batch_size=1 \
vis=True miou=True

SemanticKITTI

# infer_kitti.sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python ovo/scripts/infer_ovo.py \
dataset=kitti \
kitti_root=/path/to/kitti_dataset/ \
kitti_preprocess_root=/path/to/kitti_preprocess_ori/ \
+word_path=ovo/prompt_embedding/kitti_prompt_embedding.json \
+model_path=/path/to/model_file/last.ckpt \
+output_path=/path/to/visualization_file/ \
+novel_class_lbl=[1,9,13] \
+target_lbl=9 \
n_gpus=1 batch_size=1 \
vis=True miou=True
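
At inference time, open-vocabulary classification reduces to scoring each voxel's distilled feature against the prompt embeddings loaded via +word_path. A minimal sketch, assuming L2-normalized features and embeddings (not the repository's actual inference code):

# classify_sketch.py (illustrative only)
import torch

def classify_voxels(voxel_feats, prompt_embeds):
    """voxel_feats: (N, C); prompt_embeds: (K, C), one row per class prompt.
    Returns the index of the best-matching class prompt for each voxel."""
    logits = voxel_feats @ prompt_embeds.T  # cosine similarities, (N, K)
    return logits.argmax(dim=-1)            # predicted class per voxel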

Visualization

Refer to MonoScene's visualization.

Fill in the path parameters in ovo/scripts/visualization/nyu_vis.py, then run:

python ovo/scripts/visualization/nyu_vis.py

Fill in the path parameters in ovo/scripts/visualization/kitti_vis.py, then run:

python ovo/scripts/visualization/kitti_vis.py
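
As a rough illustration of what these scripts do, occupied voxels can be drawn as cubes with mayavi, in the spirit of MonoScene's visualization; the file name, input format, and rendering parameters below are assumptions:

# vis_sketch.py (illustrative; see nyu_vis.py / kitti_vis.py)
import numpy as np
from mayavi import mlab

voxels = np.load("pred_occupancy.npy")   # hypothetical (X, Y, Z) label grid
x, y, z = np.nonzero(voxels)             # coordinates of occupied voxels
mlab.points3d(x, y, z, voxels[x, y, z],  # color each cube by class label
              mode="cube", scale_factor=1.0, scale_mode="none")
mlab.show()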

Main results

NYUv2

Per-class IoU (%). 'bed', 'table', and 'other' are the novel classes; the first mean column averages the novel classes and the second averages the base classes.

| Method | Input | bed | table | other | mean | ceiling | floor | wall | window | chair | sofa | tvs | furniture | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Fully-supervised | | | | | | | | | | | | | | |
| AICNet | C, D | 35.87 | 11.11 | 6.45 | 17.81 | 7.58 | 82.97 | 9.15 | 0.05 | 6.93 | 22.92 | 0.71 | 15.90 | 18.28 |
| SSCNet | C, D | 32.10 | 13.00 | 10.10 | 18.40 | 15.10 | 94.70 | 24.40 | 0.00 | 12.60 | 35.0 | 7.80 | 27.10 | 27.10 |
| 3DSketch | C | 42.29 | 13.88 | 8.19 | 21.45 | 8.53 | 90.45 | 9.94 | 5.67 | 10.64 | 29.21 | 9.38 | 23.83 | 23.46 |
| MonoScene | C | 48.19 | 15.13 | 12.94 | 25.42 | 8.89 | 93.50 | 12.06 | 12.57 | 13.72 | 36.11 | 15.22 | 27.96 | 27.50 |
| Zero-shot | | | | | | | | | | | | | | |
| MonoScene* | C | - | - | - | - | 8.10 | 93.49 | 9.94 | 10.32 | 13.24 | 34.47 | 11.75 | 26.41 | 25.96 |
| ours | C | 41.61 | 10.45 | 8.39 | 20.15 | 7.77 | 93.16 | 7.77 | 6.95 | 10.01 | 33.83 | 8.22 | 25.64 | 24.17 |

SemanticKITTI

Per-class IoU (%). 'car', 'road', and 'building' are the novel classes; the first mean column averages the novel classes and the second averages the base classes.

| Method | Input | car | road | building | mean | sidewalk | parking | other ground | truck | bicycle | motorcycle | other vehicle | vegetation | trunk | terrain | person | bicyclist | motorcyclist | fence | pole | traffic sign | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Fully-supervised | | | | | | | | | | | | | | | | | | | | | | |
| AICNet | C, D | 15.3 | 39.3 | 9.6 | 21.4 | 18.3 | 19.8 | 1.6 | 0.7 | 0.0 | 0.0 | 0.0 | 9.6 | 1.9 | 13.5 | 0.0 | 0.0 | 0.0 | 5.0 | 0.1 | 0.0 | 4.4 |
| 3DSketch | C† | 17.1 | 37.7 | 12.1 | 22.3 | 19.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 12.1 | 0.0 | 16.1 | 0.0 | 0.0 | 0.0 | 3.4 | 0.0 | 0.0 | 3.2 |
| MonoScene | C | 18.8 | 54.7 | 14.4 | 29.3 | 27.1 | 24.8 | 5.7 | 3.3 | 0.5 | 0.7 | 4.4 | 14.9 | 2.4 | 19.5 | 1.0 | 1.4 | 0.4 | 11.1 | 3.3 | 2.1 | 7.7 |
| TPVFormer | C×6 | 23.8 | 56.5 | 13.9 | 31.4 | 25.9 | 20.6 | 0.9 | 8.1 | 0.4 | 0.1 | 4.4 | 16.9 | 2.3 | 30.4 | 0.5 | 0.9 | 0.0 | 5.9 | 3.1 | 1.5 | 7.6 |
| Zero-shot | | | | | | | | | | | | | | | | | | | | | | |
| ours | C | 13.3 | 53.9 | 9.7 | 25.7 | 26.5 | 14.4 | 0.1 | 0.7 | 0.4 | 0.3 | 2.5 | 17.2 | 2.3 | 29.0 | 0.6 | 0.7 | 0.0 | 5.4 | 3.0 | 1.7 | 6.6 |

Related projects

Our code is based on MonoScene. Many thanks to the authors for their great work.

Citation

If you find this project helpful, please consider citing the following paper:

@misc{tan2023ovo,
      title={OVO: Open-Vocabulary Occupancy}, 
      author={Zhiyu Tan and Zichao Dong and Cheng Zhang and Weikun Zhang and Hang Ji and Hao Li},
      year={2023},
      eprint={2305.16133},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}