BLSTM-MTP

This repository contains the official TensorFlow implementation of Discriminative Appearance Modeling with Multi-track Pooling for Real-time Multi-object Tracking (CVPR 2021).
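
As a rough illustration of the idea behind multi-track pooling, the sketch below pools appearance memories across the other tracks when scoring a track-detection match. This is a conceptual sketch only, not the code in this repository; the function names, shapes, and the choice of max pooling are illustrative assumptions.

```python
# Conceptual sketch (NumPy) of multi-track pooling: when scoring a detection
# against a target track, the appearance memories of all *other* tracks are
# pooled into a single "competing tracks" representation so that the matching
# decision can be made discriminatively. Shapes and names are illustrative
# assumptions, not the repository's actual tensors.
import numpy as np

def pool_other_tracks(track_memories, target_idx):
    """Max-pool the memory vectors of every track except the target one."""
    others = np.delete(track_memories, target_idx, axis=0)   # (N-1, D)
    if others.shape[0] == 0:
        return np.zeros(track_memories.shape[1])             # no competing tracks
    return others.max(axis=0)                                 # (D,)

# Example: 4 tracks with 8-dimensional appearance memories.
memories = np.random.rand(4, 8)
target = 1
pooled_context = pool_other_tracks(memories, target)
# A matching head would then score a detection feature against the
# concatenation of the target track's memory and the pooled context.
match_input = np.concatenate([memories[target], pooled_context])
print(match_input.shape)  # (16,)
```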

Dependencies

The code has been tested with:

Download data

  1. Download the MOT17 Challenge dataset from this link. The zip file includes MOT Challenge public detections processed by Tracktor. We use this version of the public detections in our tracking demo below.
  2. If you have already downloaded the dataset from the official MOT Challenge website, please download the data from this link instead; this version doesn't include the image files.

Demo

  1. Set DATASET_DIR in config_tracker.py to the directory where the dataset you downloaded above is located (see the configuration sketch after this list).
  2. If you want to write the tracking output as images as well, set IS_VIS in config_tracker.py to True. Otherwise, leave it as it is.
  3. Download the model file from here and unzip it. Use the directory that contains the checkpoint file as model_path in the command below.
  4. Run the following command, using your own paths for model_path and output_path. For detector, you can use one of DPM, FRCNN, and SDP.
```
python run_tracker.py --model_path=YOUR_MODEL_FOLDER/model.ckpt --output_path=YOUR_OUTPUT_FOLDER --detector=FRCNN --threshold=0.5 --network_type=appearance_motion_network
```
  5. This command will generate the tracking result that is shown in Table 6 of our paper. You can use these files to verify your output files.
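
For reference, here is a minimal sketch of the relevant settings in config_tracker.py. Only the names DATASET_DIR and IS_VIS come from the instructions above; the example path and the exact file contents are assumptions.

```python
# config_tracker.py (sketch): only DATASET_DIR and IS_VIS are named in the
# instructions above; the example path is a placeholder, not a required layout.
DATASET_DIR = "/path/to/MOT17"  # directory where the downloaded dataset is located
IS_VIS = False                  # set to True to also write the tracking output as images
```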

Performance

When paired with Tracktor or CenterTrack, our method greatly improves the tracking performance in terms of IDF1 and IDS.

| Method | IDF1 | MOTA | IDS | MT | ML | Frag | FP | FN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tracktor++v2 | 55.1 | 56.3 | 1,987 | 21.1 | 35.3 | 3,763 | 8,866 | 235,449 |
| Ours + Tracktor++v2 | 60.5 | 55.9 | 1,188 | 20.5 | 36.7 | 4,185 | 8,663 | 238,863 |

The data file that you download in the instructions above also includes MOT Challenge detections processed by CenterTrack (centertrack_prepr_det.txt). To use it as input to the tracker, simply modify run_tracker.py so that it reads detections from centertrack_prepr_det.txt instead of tracktor_prepr_det.txt (see the sketch below). The following is the result obtained by using the public detections processed by CenterTrack.
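
Both detection files follow the standard MOT Challenge text format (one detection per line: frame, id, left, top, width, height, confidence, ...), so the change amounts to pointing the loader at a different filename. The snippet below is only a minimal sketch of such a loader; the helper function, the det/ subfolder, and the column handling are assumptions, and the actual reading code in run_tracker.py may be organized differently.

```python
import csv
import os

# Minimal sketch of reading MOT-format public detections. The column layout
# (frame, id, left, top, width, height, confidence, ...) is the standard MOT
# Challenge format; the helper and the det/ subfolder are illustrative
# assumptions, not the repository's code.
def load_detections(sequence_dir, filename="tracktor_prepr_det.txt"):
    detections = {}
    with open(os.path.join(sequence_dir, "det", filename)) as f:
        for row in csv.reader(f):
            frame = int(float(row[0]))
            box = [float(v) for v in row[2:6]]  # left, top, width, height
            score = float(row[6])
            detections.setdefault(frame, []).append((box, score))
    return detections

# Switching to the CenterTrack-processed detections would then be a one-line change:
# detections = load_detections(seq_dir, filename="centertrack_prepr_det.txt")
```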

| Method | IDF1 | MOTA | IDS | MT | ML | Frag | FP | FN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CTTrackPub | 59.6 | 61.5 | 2,583 | 26.4 | 31.9 | 4,965 | 14,076 | 200,672 |
| Ours + CTTrackPub | 62.9 | 62.0 | 1,750 | 27.9 | 31.0 | 7,433 | 17,621 | 194,946 |

With an NVIDIA TITAN Xp, the inference code runs at around 24 fps on the MOT17 Challenge test set (excluding time spent on I/O operations).
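
If you want to reproduce a comparable timing measurement, the sketch below times only the per-frame tracking step while excluding file reading and writing. The loader and tracker here are stand-in stubs, not the repository's API.

```python
import time

# Sketch of measuring fps while excluding I/O. The frame loading and tracker
# update below are placeholder stubs; the repository's actual functions differ.
def load_frame_inputs(frame_id):
    # placeholder for reading images/detections from disk (I/O, not timed)
    return {"frame": frame_id}

class DummyTracker:
    def update(self, inputs):
        # placeholder for the per-frame tracking computation (timed)
        pass

tracker = DummyTracker()
total_time, num_frames = 0.0, 0
for frame_id in range(1, 101):
    inputs = load_frame_inputs(frame_id)           # I/O: excluded from timing
    start = time.perf_counter()
    tracker.update(inputs)                         # tracking step: timed
    total_time += time.perf_counter() - start
    num_frames += 1
print(f"{num_frames / max(total_time, 1e-9):.1f} fps (excluding I/O)")
```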

Training

The training code will be released in a future update. Stay tuned.

License

The code is released under the MIT License.

Contact

If you have any questions, please contact me at chkim@gatech.edu.

Citation

@InProceedings{Kim_2021_CVPR,
    author    = {Kim, Chanho and Fuxin, Li and Alotaibi, Mazen and Rehg, James M.},
    title     = {Discriminative Appearance Modeling With Multi-Track Pooling for Real-Time Multi-Object Tracking},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {9553-9562}
}