PillarTrack: Redesigning Pillar-based Transformer Network for Single Object Tracking on Point Clouds

Introduction

This is the official code release of the paper PillarTrack: Redesigning Pillar-based Transformer Network for Single Object Tracking on Point Clouds.

Abstract

LiDAR-based 3D single object tracking (3D SOT) is a critical problem in robotics and autonomous driving. It aims to obtain an accurate 3D bounding box from the search area based on similarity or motion. However, existing 3D SOT methods usually follow the point-based pipeline, where the sampling operation inevitably leads to redundant or lost information, resulting in degraded performance. To address these issues, we propose PillarTrack, a pillar-based 3D single object tracking framework. Firstly, we transform sparse point clouds into dense pillars to preserve local and global geometry. Secondly, we introduce a Pyramid-type Encoding Pillar Feature Encoder (PE-PFE) design to improve the feature representation of each pillar. Thirdly, we present an efficient Transformer-based backbone designed from the perspective of modality differences. Finally, we construct our PillarTrack tracker based on the above designs. Extensive experiments on the KITTI and nuScenes datasets demonstrate the superiority of our proposed method. Notably, our method achieves state-of-the-art performance on both datasets while running at real-time tracking speed.
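
The sketch below is a minimal, generic illustration of the pillarization idea mentioned in the abstract (grouping points into a 2D grid instead of sampling them). It is not the code of this repository or the PE-PFE design; the grid range, pillar size, and per-pillar point budget are assumed values chosen only for the example.

```python
import numpy as np

# Illustrative pillarization sketch (not PillarTrack's actual implementation).
# Assumed values: x/y grid over [0, 40) m, 0.4 m pillars, at most 32 points per pillar.
POINT_RANGE = np.array([0.0, 0.0, 40.0, 40.0])  # x_min, y_min, x_max, y_max
PILLAR_SIZE = 0.4
MAX_POINTS_PER_PILLAR = 32

def pillarize(points: np.ndarray):
    """Group an (N, 4) point cloud (x, y, z, intensity) into ground-plane pillars.

    Returns a dict mapping (ix, iy) grid coordinates to an array of at most
    MAX_POINTS_PER_PILLAR points falling into that pillar.
    """
    x_min, y_min, x_max, y_max = POINT_RANGE
    # Keep only points inside the grid range.
    mask = (
        (points[:, 0] >= x_min) & (points[:, 0] < x_max)
        & (points[:, 1] >= y_min) & (points[:, 1] < y_max)
    )
    points = points[mask]
    # Integer pillar indices on the ground plane (z stays inside each pillar).
    ix = ((points[:, 0] - x_min) // PILLAR_SIZE).astype(np.int32)
    iy = ((points[:, 1] - y_min) // PILLAR_SIZE).astype(np.int32)
    pillars = {}
    for p, i, j in zip(points, ix, iy):
        bucket = pillars.setdefault((int(i), int(j)), [])
        if len(bucket) < MAX_POINTS_PER_PILLAR:
            bucket.append(p)
    return {k: np.stack(v) for k, v in pillars.items()}

if __name__ == "__main__":
    pts = np.random.rand(1000, 4).astype(np.float32) * [40, 40, 3, 1]
    print(f"{len(pillarize(pts))} non-empty pillars")
```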

<img src="doc/pipeline.jpg" />

Performance

KITTI Dataset

|           | Car  | Ped  | Van  | Cyclist | Mean |
|-----------|------|------|------|---------|------|
| Success   | 74.2 | 59.7 | 43.4 | 77.4    | 65.3 |
| Precision | 85.1 | 84.7 | 51.7 | 94.2    | 82.2 |

nuScenes Dataset

|           | Car   | Ped   | Truck | Trailer | Bus   | Mean  |
|-----------|-------|-------|-------|---------|-------|-------|
| Success   | 47.12 | 34.18 | 54.82 | 57.70   | 44.68 | 44.59 |
| Precision | 57.72 | 64.93 | 54.41 | 54.63   | 40.73 | 58.86 |
<img src="doc/visual.jpg" /> ## Setup

Installation

conda create -n pillartrack python=3.8 -y
conda activate pillartrack

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

# please refer to https://github.com/traveller59/spconv
pip install spconv-cu111

git clone https://github.com/StiphyJay/PillarTrack.git
cd pillartrack
pip install -r requirements.txt

python setup.py develop
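
As an optional sanity check (assuming the pinned versions above), the short script below verifies that PyTorch sees the GPU and that spconv imports; it is not part of the official setup steps.

```python
# Optional environment sanity check for the installation above.
import torch
import spconv.pytorch as spconv  # spconv 2.x exposes its PyTorch ops under spconv.pytorch

print("torch:", torch.__version__)           # expected 1.8.1+cu111 with the pins above
print("CUDA available:", torch.cuda.is_available())
print("spconv module:", spconv.__name__)
```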

Dataset preparation

Download the datasets from KITTI Tracking and the nuScenes Full Dataset, and organize the downloaded files as follows (a small layout check is sketched after the tree):

pillartrack
├── data
│   ├── kitti
│   │   ├── calib
│   │   ├── label_02
│   │   └── velodyne
│   └── nuscenes
│       ├── v1.0-trainval
│       ├── samples
│       ├── sweeps
│       └── maps
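
As referenced above, the optional sketch below simply checks that the expected dataset folders exist before training; the paths mirror the tree and nothing else is assumed.

```python
# Optional check that the dataset folders match the layout above (illustrative only).
from pathlib import Path

EXPECTED = [
    "data/kitti/calib",
    "data/kitti/label_02",
    "data/kitti/velodyne",
    "data/nuscenes/v1.0-trainval",
    "data/nuscenes/samples",
    "data/nuscenes/sweeps",
    "data/nuscenes/maps",
]

root = Path(".")  # run from the pillartrack repository root
missing = [p for p in EXPECTED if not (root / p).is_dir()]
print("all dataset folders found" if not missing else f"missing: {missing}")
```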

QuickStart

Train

For training, you can customize a run by modifying the parameters in the yaml file of the corresponding model, such as 'CLASS_NAMES'.

After configuring the yaml file, run the following command, passing the path of the config file and a training tag:

cd pillartrack/tools
python train_truck.py --cfg_file $model_config_path --extra_tag $cate
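
If you want to double-check a config before launching a run, the optional sketch below loads a yaml file with PyYAML and prints its 'CLASS_NAMES' entry; the config path is a placeholder, since actual file names depend on the model and category.

```python
# Optional: inspect a model config before training (the path below is a placeholder).
import yaml

cfg_path = "path/to/your_model_config.yaml"  # replace with your actual config file
with open(cfg_path, "r") as f:
    cfg = yaml.safe_load(f)

print("CLASS_NAMES:", cfg.get("CLASS_NAMES"))
```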

For training with DDP, you can execute the following command (make sure you start from the repository root directory):

cd pillartrack/tools
bash dist_train.sh $NUM_GPUs --cfg_file $model_config_path

Eval

cd pillartrack/tools
# for single model
python test_truck.py --cfg_file $model_config_path --ckpt $your_saved_ckpt
# for all saved models
python test_truck.py --cfg_file $model_config_path --ckpt $your_saved_ckpt --eval_all

For now, the code does not support evaluation with DDP.

Acknowledgment

Citation

If you find this project useful for your research, please cite our article:

@article{xu2024pillartrack,
  title={PillarTrack: Redesigning Pillar-based Transformer Network for Single Object Tracking on Point Clouds},
  author={Xu, Weisheng and Zhou, Sifan and Yuan, Zhihang},
  journal={arXiv preprint arXiv:2404.07495},
  year={2024}
}