TrackLab

TrackLab is an easy-to-use modular framework for multi-object pose/segmentation/bbox tracking that supports many tracking datasets and evaluation metrics.

<p align="center"> <img src="docs/assets/gifs/PoseTrack21_008827.gif" width="400" /> <img src="docs/assets/gifs/PoseTrack21_016236.gif" width="400" /> </p>

News

Upcoming

🤝 How You Can Help

The TrackLab library is in its early stages, and we're eager to evolve it into a robust, mature tracking framework that can benefit the wider community. If you're interested in contributing, feel free to open a pull request or reach out to us!

Introduction

Welcome to the official repository of TrackLab, a modular framework for multi-object tracking. TrackLab is designed for research purposes and supports many types of detectors (bounding boxes, pose, segmentation), datasets, and evaluation metrics. Every component of TrackLab, such as the detector, tracker, re-identifier, etc., is configurable via standard YAML files (Hydra config framework). TrackLab is designed to be easily extended to support new methods.

TrackLab is composed of multiple modules:

  1. A detector (YOLOv8, ...)
  2. A re-identification model (BPBReID, ...)
  3. A tracker (DeepSORT, StrongSORT, OC-SORT, ...)
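
These choices come together in the Hydra configuration. The snippet below is a minimal sketch: the pipeline entries (bbox_detector, reid, track) are taken from the command-line example later on this page, while the defaults group and option names (e.g. modules/bbox_detector: yolov8) are illustrative assumptions and may not match the actual configs/config.yaml.

# Hypothetical excerpt of configs/config.yaml (group/option names are assumptions)
defaults:
  - modules/bbox_detector: yolov8    # 1. detector
  - modules/reid: bpbreid            # 2. re-identification model
  - modules/track: strongsort        # 3. tracker

# The pipeline lists which modules are run, in order, for each video
pipeline:
  - bbox_detector
  - reid
  - track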

Here's what makes TrackLab different from other existing tracking frameworks:

Documentation

You can find the documentation at https://trackinglaboratory.github.io/tracklab/ or in the docs/ folder. After installing, you can run make html inside the docs/ folder to build an HTML version of the documentation.

Installation guide¹

Clone the repository

git clone https://github.com/TrackingLaboratory/tracklab.git
cd tracklab

Manage the environment

Create and activate a new environment

conda create -n tracklab pip python=3.10 pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia -y
conda activate tracklab

You might need to change your torch installation depending on your hardware. Please check the PyTorch website to find the right version for your setup.

Install the dependencies

Move into the repository and install the requirements with:

pip install -e .
mim install mmcv==2.0.1

You might need to redo this after updating the repository if some dependencies have changed.

External dependencies

Setup

You will need to set up some variables before running the code:

  1. In configs/config.yaml:
    • data_dir: the directory where you will store the different datasets (must be an absolute path!)
    • All the parameters under the "Machine configuration" header
  2. In the corresponding module configs (tracklab/configs/modules/.../....yaml):
    • The batch_size
    • Optionally, the model hyperparameters
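
As a rough illustration, the edits above might look as follows; the paths are placeholders and the exact layout of the yaml files may differ:

# configs/config.yaml
data_dir: /absolute/path/to/data       # must be an absolute path
model_dir: /absolute/path/to/models    # other parameters sit under the "Machine configuration" header

# tracklab/configs/modules/.../....yaml (one file per module)
batch_size: 8                          # lower this if you run out of GPU memory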

To launch TrackLab with the default configuration defined in configs/config.yaml, simply run:

tracklab

This command will create a directory called outputs with a ${experiment_name}/yyyy-mm-dd/hh-mm-ss/ structure. All the output files (logs, models, visualizations, ...) from a run are stored inside this directory.

If you want to override some configuration parameters, e.g. to use another detection module or dataset, you can do so by modifying the corresponding parameters directly in the .yaml files under configs/.
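
For example, switching to another re-identification module or dataset amounts to changing the corresponding entry in the defaults list of configs/config.yaml. In the sketch below, only modules/reid: bpbreid is taken from the command-line example that follows; the dataset group and option names are assumptions for illustration:

# Hypothetical excerpt of configs/config.yaml
defaults:
  - dataset: posetrack21      # hypothetical dataset option name; replace to use another dataset
  - modules/reid: bpbreid     # swap this line to use another re-identification model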

All parameters are also configurable from the command line (see Hydra's override grammar for more details), e.g.:

tracklab 'data_dir=${project_dir}/data' 'model_dir=${project_dir}/models' modules/reid=bpbreid pipeline=[bbox_detector,reid,track]

${project_dir} is a variable that is configured to be the root of the project you're running the code in. When using it in a command, make sure to wrap it in single quotes (') so that your shell does not try to expand it as an environment variable.

To find all the (many) configuration options you have, use:

tracklab --help

The first section contains the configuration groups, while the second section shows all the possible options you can modify.

Framework overview

Hydra Configuration

TODO Describe TrackLab + Hydra configuration system

Architecture Overview

Here is an overview of the important TrackLab classes:

Execution Flow Overview

Here is an overview of what happens when you run TrackLab. tracklab/main.py is the main entry point and receives the complete Hydra configuration as input. It is usually called through the root main.py file, i.e. via python main.py.

Within tracklab/main.py, all modules are first instantiated. Training any tracking module (e.g. the re-identification model) on the tracking training set is supported by calling the "train" method of the corresponding module.

Tracking is then performed on the validation or test set (depending on the configuration) via the TrackingEngine.run() function. For each video in the evaluated set, the TrackingEngine calls the "run" method of each module (e.g. detector, re-identifier, tracker, ...) sequentially. The TrackingEngine is responsible for batching the input data (e.g. images, detections, ...) before calling the "run" method of each module with the correct input data. After a module has been called with a batch of input data, the TrackingEngine updates the TrackerState object with the module outputs. At the end of the tracking process, the TrackerState object contains the tracking results of each video.

Visualizations (e.g. .mp4 result videos) are generated during the TrackingEngine.run() call, after a video has been tracked and before the next video is processed. Finally, evaluation is performed via the evaluator.run() function once the TrackingEngine.run() call is completed, i.e. after all videos have been processed.

Tutorials

Dump and load the tracker state to save computation time

When developing a new module, it is often useful to dump the tracker state to disk to save computation time and avoid running the other modules several times. Here is how to do it:

  1. First, save the tracker state by using the corresponding configuration in the config.yaml file:
defaults:
    - state: save
# ...
state:
  save_file: "states/${experiment_name}.pklz"  # 'null' to disable saving. This is the save path for the tracker_state object that contains all modules outputs (bboxes, reid embeddings, jersey numbers, roles, teams, etc)
  load_file: null
  2. Run TrackLab. The tracker state will be saved in the experiment folder as a .pklz file.
  3. Then modify the load_file key in "config.yaml" to specify the path to the tracker state file that has just been created (load_file: "..." config).
  4. In config.yaml, remove from the pipeline all modules that should not be executed again. For instance, if you want to use the detections and reid embeddings from the saved tracker state, remove the "bbox_detector" and "reid" modules from the pipeline. Use pipeline: [] if no module should be run again.
defaults:
    - state: save
# ...
pipeline:
  - track
# ...
state:
  save_file: null  # 'null' to disable saving. This is the save path for the tracker_state object that contains all modules outputs (bboxes, reid embeddings, jersey numbers, roles, teams, etc)
  load_file: "path/to/tracker_state.pklz"
  5. Run TrackLab again.

Citation

If you use this repository for your research or wish to refer to our contributions, please use the following BibTeX entries:

TrackLab:

@misc{Joos2024Tracklab,
	title = {{TrackLab}},
	author = {Joos, Victor and Somers, Vladimir and Standaert, Baptiste},
	journal = {GitHub repository},
	year = {2024},
	howpublished = {\url{https://github.com/TrackingLaboratory/tracklab}}
}

SoccerNet Game State Reconstruction:

@inproceedings{Somers2024SoccerNetGameState,
        title = {{SoccerNet} Game State Reconstruction: End-to-End Athlete Tracking and Identification on a Minimap},
        author = {Somers, Vladimir and Joos, Victor and Giancola, Silvio and Cioppa, Anthony and Ghasemzadeh, Seyed Abolfazl and Magera, Floriane and Standaert, Baptiste and Mansourian, Amir Mohammad and Zhou, Xin and Kasaei, Shohreh and Ghanem, Bernard and Alahi, Alexandre and Van Droogenbroeck, Marc and De Vleeschouwer, Christophe},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVSports)},
        month = jun,
        year = {2024},
        address = {Seattle, WA, USA},
}

BPBreID:

@article{bpbreid,
    archivePrefix = {arXiv},
    arxivId = {2211.03679},
    author = {Somers, Vladimir and {De Vleeschouwer}, Christophe and Alahi, Alexandre},
    doi = {10.48550/arxiv.2211.03679},
    eprint = {2211.03679},
    journal = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV23)},
    month = {nov},
    title = {{Body Part-Based Representation Learning for Occluded Person Re-Identification}},
    url = {https://arxiv.org/abs/2211.03679},
    year = {2023}
}

Footnotes

  1. Tested on conda 22.11.1, Python 3.10.8, pip 22.3.1, g++ 11.3.0 and gcc 11.3.0