OneBEV: Using One Panoramic Image for Bird's-Eye-View Semantic Mapping (ACCV 2024 Oral)

<p> <a href="https://arxiv.org/pdf/2409.13912"> <img src="https://img.shields.io/badge/PDF-arXiv-brightgreen" /></a> <a href="https://pytorch.org/"> <img src="https://img.shields.io/badge/Framework-PyTorch-orange" /></a> <a href="https://github.com/open-mmlab/mmsegmentation"> <img src="https://img.shields.io/badge/Framework-mmsegmentation%201.x-yellowgreen" /></a> <a href="https://github.com/JialeWei/OneBEV/blob/main/LICENSE"> <img src="https://img.shields.io/badge/License-MIT-yellow.svg" /></a> </p>

OneBEV

Updates

Prerequisites

Please make sure your CUDA is >= 12.1.
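You can confirm the requirement by inspecting the release number printed by `nvcc --version`. The small stdlib sketch below parses that line and compares it against the 12.1 minimum; the sample string and the helper name are ours, for illustration only:

```python
import re

def cuda_version_ok(nvcc_output: str, minimum=(12, 1)) -> bool:
    """Parse the release number from `nvcc --version` output and
    compare it against the required minimum (12.1 for OneBEV)."""
    match = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if match is None:
        raise ValueError("could not find a CUDA release number")
    version = (int(match.group(1)), int(match.group(2)))
    return version >= minimum

# Example against a typical `nvcc --version` line:
sample = "Cuda compilation tools, release 12.1, V12.1.105"
print(cuda_version_ok(sample))  # True
```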

Environments

conda create -n onebev python=3.10.13
conda activate onebev

# install mmcv
pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html

# install torch
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu121

# install other requirements
cd /path/to/onebev
pip install -r requirements.txt

# install the VMamba selective-scan kernel
cd onebev/models/backbones/kernels/selective_scan && pip install .

Datasets

Prepare datasets:

Our extended datasets:

Data statistics of OneBEV datasets:

| Dataset | Split | Scenes | Frames | Categories |
|---|---|---|---|---|
| NuScenes-360 | train | 700 | 28,130 | 6 |
| NuScenes-360 | val | 150 | 6,019 | 6 |
| NuScenes-360 | total | 850 | 34,149 | 6 |
| DeepAccident-360 | train | 483 | 40,619 | 17 |
| DeepAccident-360 | val | 104 | 8,193 | 17 |
| DeepAccident-360 | total | 587 | 48,812 | 17 |
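Each dataset's totals are simply the sums of its train and val splits; a few lines of Python confirm the statistics above:

```python
# Per-split statistics from the table above: (scenes, frames).
nuscenes_360 = {"train": (700, 28_130), "val": (150, 6_019)}
deepaccident_360 = {"train": (483, 40_619), "val": (104, 8_193)}

def totals(splits):
    """Sum scene and frame counts across the given splits."""
    scenes = sum(s for s, _ in splits.values())
    frames = sum(f for _, f in splits.values())
    return scenes, frames

print(totals(nuscenes_360))      # (850, 34149)
print(totals(deepaccident_360))  # (587, 48812)
```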

The dataset folder structure is as follows:

OneBEV
├── onebev
├── configs
├── pretrained
│   ├── vssmtiny_dp01_ckpt_epoch_292.pth
├── data
│   ├── NuScenes-360
│   │   ├── train
│   │   │   ├── *.jpg
│   │   ├── val
│   │   │   ├── *.jpg
│   │   ├── bev
│   │   │   ├── *.h5
│   │   ├── nusc_infos_train_mmengine.pkl
│   │   ├── nusc_infos_val_mmengine.pkl
│   ├── DeepAccident-360
│   │   ├── train
│   │   │   ├── *.jpg
│   │   ├── val
│   │   │   ├── *.jpg
│   │   ├── bev
│   │   │   ├── *.h5
│   │   ├── deep_infos_train_mmengine.pkl
│   │   ├── deep_infos_val_mmengine.pkl
├── tools
├── runs
├── README.md
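Before training, it is worth verifying that each dataset root under `data/` matches this layout. A minimal stdlib sketch for such a check (the helper name is ours, not part of the repo) might look like:

```python
import os
import tempfile

# Expected entries in each dataset folder, per the tree above.
SUBDIRS = ["train", "val", "bev"]

def check_dataset_root(root, info_prefix):
    """Return the entries missing from one dataset folder.

    `info_prefix` is 'nusc' or 'deep', matching the pkl file names above.
    """
    required = SUBDIRS + [
        f"{info_prefix}_infos_train_mmengine.pkl",
        f"{info_prefix}_infos_val_mmengine.pkl",
    ]
    return [name for name in required
            if not os.path.exists(os.path.join(root, name))]

# Demo on a throwaway directory that mimics data/NuScenes-360:
with tempfile.TemporaryDirectory() as root:
    for sub in SUBDIRS:
        os.mkdir(os.path.join(root, sub))
    for pkl in ("nusc_infos_train_mmengine.pkl", "nusc_infos_val_mmengine.pkl"):
        open(os.path.join(root, pkl), "w").close()
    print(check_dataset_root(root, "nusc"))  # [] -> nothing missing
```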

Checkpoints

Download from Google Drive

Usage

Prepare

Please use the following commands to generate the pkl files and download the pretrained backbone weights:

cd /path/to/onebev
python tools/create_pkl/create_data.py <DATASET> --root-path <DATASET_PATH> --version trainval
bash pretrained/download.sh

<DATASET> should be 'nusc' or 'deep', and <DATASET_PATH> should be the path to the NuScenes-360 or DeepAccident-360 root.

Train

Please use the following command to train the model:

bash tools/dist_train.sh configs/onebev/model_onebev_nusc_50epochs.py <GPU_NUM>
bash tools/dist_train.sh configs/onebev/model_onebev_deep_50epochs.py <GPU_NUM>

Test

Please use the following command to test the model:

bash tools/dist_test.sh configs/onebev/model_onebev_nusc_50epochs.py <CHECKPOINT_PATH> <GPU_NUM>
bash tools/dist_test.sh configs/onebev/model_onebev_deep_50epochs.py <CHECKPOINT_PATH> <GPU_NUM>

<CHECKPOINT_PATH> should be the path of the checkpoint file.

References

We appreciate the previous open-source works that this project builds on.

Citation

If you are interested in this work, please cite it as follows:

@inproceedings{wei2024onebev,
  title={OneBEV: Using One Panoramic Image for Bird's-Eye-View Semantic Mapping},
  author={Wei, Jiale and Zheng, Junwei and Liu, Ruiping and Hu, Jie and Zhang, Jiaming and Stiefelhagen, Rainer},
  booktitle={ACCV},
  year={2024}
}