Open3DSOT

A general python framework for single object tracking in LiDAR point clouds, based on PyTorch Lightning.

The official code release of BAT and M2-Track.

Features

:mega: The extension of M2-Track is accepted by TPAMI! :point_down:

:mega: One tracking paper is accepted by CVPR2022 (Oral)! :point_down:

:mega: The code for M2-Track is now available.

Trackers

This repository includes the implementation of the following models:

M2-Track (CVPR2022 Oral)

[Paper] [Project Page]

M2-Track is the first motion-centric tracker in LiDAR SOT, robustly handling distractors and drastic appearance changes in complex driving scenes. Unlike previous methods, M2-Track is a matching-free two-stage tracker that localizes the target by explicitly modeling the "relative target motion" between frames.

<p align="center"> <img src="figures/mmtrack.png" width="800"/> </p> <p align="center"> <img src="figures/results_mmtrack.gif" width="800"/> </p>

BAT (ICCV2021)

[Paper] [Results]

The official implementation of BAT. BAT uses bounding-box information to compensate for the information loss of incomplete scans, augmenting the target template with box-aware features that efficiently and effectively improve appearance matching.

<p align="center"> <img src="figures/bat.png" width="800"/> </p> <p align="center"> <img src="figures/results.gif" width="800"/> </p>

P2B (CVPR2020)

[Paper] [Official implementation]

A third-party implementation of P2B. Our implementation achieves better results than the official code release. P2B adapts SiamRPN to 3D point clouds by integrating a point-wise correlation operator with a point-based RPN (VoteNet).

<p align="center"> <img src="figures/p2b.png" width="800"/> </p>

Setup

Installation

KITTI dataset

NuScenes dataset

Note: We train our model with the train_track split and test it with the val split, both of which are officially provided by NuScenes. During testing, we ignore sequences whose first given bbox contains no points.
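The filtering rule can be sketched as follows (illustrative only; it assumes an axis-aligned box given as center and size, whereas the actual loader works with oriented boxes):

```python
import numpy as np

def first_bbox_has_points(points, center, size):
    """points: (N, 3); center, size: (3,). True if any point lies inside.

    Sequences where this returns False for the first frame are skipped.
    """
    offset = np.abs(np.asarray(points) - np.asarray(center))
    inside = np.all(offset <= np.asarray(size) / 2.0, axis=1)
    return bool(np.any(inside))
```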

Waymo dataset

```bash
python datasets/generate_waymo_sot.py
```

Quick Start

Training

To train a model, specify a .yaml config file via the --cfg argument. The .yaml file contains all configurations for the dataset and the model. We provide .yaml files under the cfgs directory. Note: Before running the code, edit the .yaml file and set the path argument to the correct dataset root.

```bash
CUDA_VISIBLE_DEVICES=0,1 python main.py --cfg cfgs/M2_track_kitti.yaml --batch_size 64 --epoch 60 --preloading
```

For M2-Track, we use the same configuration for all categories. By default, the provided .yaml trains a Car tracker; change the category_name in the .yaml file to train on another category.
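For illustration, the relevant fields might look like this (a hypothetical excerpt; only path and category_name are mentioned in this README, so consult the files under cfgs for the real layout):

```yaml
# Hypothetical excerpt; see the provided files under cfgs/ for the real layout.
path: /path/to/kitti/training   # dataset root (must be edited before training)
category_name: Pedestrian       # e.g. Car or Pedestrian
```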

In this version, we have removed the --gpus flag; all available GPUs are used by default. Use CUDA_VISIBLE_DEVICES to select specific GPUs.

After you start training, you can start Tensorboard to monitor the training process:

```bash
tensorboard --logdir=./ --port=6006
```

By default, the trainer runs a full evaluation on the entire test split after every training epoch. Set --check_val_every_n_epoch to a larger number to speed up training. The --preloading flag preloads the training samples into memory to save training time; remove it if you don't have enough memory.
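For example, to run the full validation only every five epochs while keeping the other settings from the training command above:

```bash
CUDA_VISIBLE_DEVICES=0,1 python main.py --cfg cfgs/M2_track_kitti.yaml --batch_size 64 --epoch 60 --check_val_every_n_epoch 5 --preloading
```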

Testing

To test a trained model, specify the checkpoint location with the --checkpoint argument and add the --test flag to the command.

```bash
python main.py --cfg cfgs/M2_track_kitti.yaml --checkpoint /path/to/checkpoint/xxx.ckpt --test
```

Reproduction

| Model | Category | Success | Precision | Checkpoint |
|---|---|---|---|---|
| BAT-KITTI | Car | 65.37 | 78.88 | pretrained_models/bat_kitti_car.ckpt |
| BAT-NuScenes | Car | 40.73 | 43.29 | pretrained_models/bat_nuscenes_car.ckpt |
| BAT-KITTI | Pedestrian | 45.74 | 74.53 | pretrained_models/bat_kitti_pedestrian.ckpt |
| M2Track-KITTI | Car | 67.43 | 81.04 | pretrained_models/mmtrack_kitti_car.ckpt |
| M2Track-KITTI | Pedestrian | 60.61 | 89.39 | pretrained_models/mmtrack_kitti_pedestrian.ckpt |
| M2Track-NuScenes | Car | 57.22 | 65.72 | pretrained_models/mmtrack_nuscenes_car.ckpt |

Trained models are provided in the pretrained_models directory. To reproduce the results, simply run the code with the corresponding .yaml file and checkpoint. For example, to reproduce M2-Track's results on KITTI Car, run:

```bash
python main.py --cfg cfgs/M2_track_kitti.yaml --checkpoint ./pretrained_models/mmtrack_kitti_car.ckpt --test
```

The reported results of the M2-Track checkpoints were produced on 3090/3080Ti GPUs. Due to floating-point precision differences, there may be minor deviations if you test them on other GPUs.

Acknowledgment

License

This repository is released under the MIT License (see the LICENSE file for details).