GHOST Tracker

This is the official repository of our paper Simple Cues Lead to a Strong Multi-Object Tracker.

Git Repo

To set up this repository, follow these steps:

git clone https://github.com/dvl-tum/GHOST.git
cd GHOST
git clone https://github.com/dvl-tum/TrackEvalForGHOST.git

Environment

Download Anaconda and create the conda environment from the env_from_history.yml file by running:

conda env create -f env_from_history.yml

Then activate the environment using:

conda activate GHOST

Dataset Setup

Download the MOT17, MOT20, and DanceTrack tracking datasets. For BDD100k, download the MOT 2020 Labels and MOT 2020 images. Unzip all of them into the datasets directory.

Finally, download the detections we used and extract them into datasets as well. For MOT17, we also provide the bounding boxes from various trackers on the validation set, i.e., the first half of all training sequences.
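
As a rough sketch (the archive names below are placeholders for the files you actually downloaded), the extraction could look like:

mkdir -p datasets
# archive names are placeholders; adjust them to your downloads
unzip MOT17.zip -d datasets/
unzip MOT20.zip -d datasets/
unzip DanceTrack.zip -d datasets/
unzip detections_GHOST.zip -d datasets/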

The final data structure should look like the following:

datasets/
    - bdd100k
        - images
            - track
                - train
                - val
                - test
        - labels
            - box_track_20
                - train
                - val
    - DanceTrack
        - train
        - val
        - test
    - MOT17
        - train
        - test
    - MOT20
        - train
        - test
    - detections_GHOST
        - bdd100k
            - train
            - val
            - test
        - DanceTrack
            - val
            - test
        - MOT17
            - train
            - test
        - MOT20
            - train
            - test

ReID Setup

Download our pretrained ReID weights and extract them into ReID/trained_models so that in the end the data structure looks like the following:

ReID/
    - trained_models
        - market_models
            - resnet50_Market.pth
        - MOT_models
            - split1_resnet50_MOT17.pth
            - split2_resnet50_MOT17.pth
            - split3_resnet50_MOT17.pth
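
A minimal extraction sketch (the archive name is a placeholder for the file you downloaded):

unzip trained_models.zip -d ReID/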

The final model that we use is market_models/resnet50_Market.pth.

To re-train the ReID model on Market-1501 or MOT17, we provide the dataset structures we used for Market and MOT17. Please unzip them into the following structure:

ReID/
    - datasets
        - MOT17_ReID
        - Market-1501-v15.09.15

Tracking

To run our tracker on MOT17 private detections, run:

bash scripts/main_17.sh

and to run it with the public CenterTrack preprocessed detections, run:

bash scripts/main_17_pub.sh

Similarly, you can find main_20.sh, main_20_pub.sh (using Tracktor preprocessed detections), main_dance.sh, and main_bdd.sh in the scripts directory.

You can define the following parameters directly in the bash file (an example invocation follows the table):

Parameter               Description
--config_path           Path to the config file
--det_conf              Minimum detection confidence
--act                   Matching threshold for active tracks
--inact                 Matching threshold for inactive tracks
--det_file              Detections to be used (see datasets/detections_GHOST for the file names)
--only_pedestrian       Whether only the pedestrian class should be used for evaluation
--inact_patience        Patience for inactive tracks to be used during tracking
--combi                 How to combine motion and appearance distance (sum_0.3 means a weighted sum with motion weight 0.3)
--store_feats           Store features for analysis
--on_the_fly            Whether to use on-the-fly domain adaptation
--do_inact              Whether to use proxy distance / proxy feature computation for inactive tracks
--splits                Which split to use (see data/splits.py for the different splits)
--len_thresh            Minimum length of tracks (default 0)
--new_track_conf        Confidence threshold for a detection to start a new track
--remove_unconfirmed    Whether to remove unconfirmed tracks, i.e., tracks that are initialized but get no detection in the next frame (default 0)
--last_n_frames         Number of last frames used to compute the velocity for the motion model
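
As an illustrative sketch of how such flags might be set inside one of the scripts (the entry-point script name, config file name, and values below are placeholders; check the actual scripts in scripts/ for the real command):

# hypothetical invocation; script, config name, and values are placeholders
python tools/main_track.py \
    --config_path config/config_tracker.yaml \
    --det_conf 0.6 \
    --act 0.7 \
    --inact 0.8 \
    --combi sum_0.3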

For others, like data paths, please refer directly to the config files in config/.

Test submission

For the test submissions, please adapt the split in the configuration parameters to the corresponding test splits (see data/splits.py).

MOT17, MOT20, DanceTrack

For submission, please zip the files in the corresponding output directories and submit them to the test servers of MOT17, MOT20, or DanceTrack.
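
A minimal sketch (the output directory name is a placeholder for your experiment's result directory):

# directory name is a placeholder; zip the per-sequence result files
cd <output_directory>
zip MOT17_submission.zip *.txt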

BDD

If you want to submit to the BDD100k server, use the corresponding experiment directory inside the automatically generated bdd_for_submission directory, zip the files directly (not the directory), and upload the archive under Submit on the server.
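
For example (the experiment directory name is a placeholder), zipping from inside the directory ensures that the files themselves, not their parent directory, end up at the top level of the archive:

cd bdd_for_submission/<your_experiment>
zip -r ../submission.zip ./*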

Using different distance computations

If you want to use a different distance computation than the current proxy distance computation, change the avg_act and avg_inact sections in the config files as follows:

do   num   proxy           Description
1    1     'each_sample'   Min of the distances between the features of the new detection and the features of all prior detections in the track
1    2     'each_sample'   Mean of the distances between the features of the new detection and the features of all prior detections in the track
1    3     'each_sample'   Max of the distances between the features of the new detection and the features of all prior detections in the track
1    4     'each_sample'   (Min + Max)/2 of the distances between the features of the new detection and the features of all prior detections in the track
1    5     'each_sample'   Median of the distances between the features of the new detection and the features of all prior detections in the track
1    x     'first'         Uses the features of the first detection in the track for the distance computation with the new detection; x does not matter
1    x     'last'          Uses the features of the last detection in the track for the distance computation with the new detection; x does not matter
1    x     'mv_avg'        Uses a moving average of the features in the track for the distance computation with the new detection; x is the update weight
1    x     'mean'          Uses the mean of the features in the track for the distance computation with the new detection; x is the number of last detections to be used
1    x     'median'        Uses the median of the features in the track for the distance computation with the new detection; x is the number of last detections to be used
1    x     'mode'          Uses the mode of the features in the track for the distance computation with the new detection; x is the number of last detections to be used

If you set do to 0, the tracker falls back to using the features of the last detection in a track.
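
As a hypothetical sketch (assuming the config files are YAML; the exact surrounding structure may differ in the actual files in config/), selecting the median proxy distance for active tracks could look like:

avg_act:
  do: 1                  # enable the proxy distance computation
  num: 5                 # with 'each_sample', 5 selects the median (see table)
  proxy: 'each_sample'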

Histogram and Quantile analysis

If you want to run the histogram or quantile analysis from the paper, first run an experiment on the detection set you want to use (the features will be stored in the features directory). Then run:

python tools/investigate_features.py

The corresponding figures will be stored in the histograms_features directory.
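
Putting both steps together, a possible workflow sketch (assuming --store_feats from the parameter table above enables the feature storing):

# run tracking with feature storing enabled (set --store_feats in the script)
bash scripts/main_17.sh
# create the histogram / quantile figures
python tools/investigate_features.py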