Track Every Thing Accuracy (TETA)

Track Every Thing in the Wild [ECCV 2022].

This is the official implementation of the TETA metric described in the paper. This repo is an updated version of the TETA metric code from the original TET repo.

<img src="figures/figure_1.png" width="600">

Introduction

TETA is a new metric for tracking evaluation that breaks tracking measurement into three sub-factors: localization, association, and classification, allowing comprehensive benchmarking of tracking performance even under inaccurate classification. TETA also deals with the challenging incomplete annotation problem in large-scale tracking datasets. Instead of using the predicted class labels to group per-class tracking results, we use location with the help of local cluster evaluation. We treat each ground truth bounding box of the target class as the anchor of each cluster and group prediction results inside each cluster to evaluate the localization and association performance. Our local clusters enable us to evaluate tracks even when the class prediction is wrong.

<img src="figures/teta-teaser.png" width="400">

Why you should use the TETA metric

TETA is designed to evaluate multiple object tracking (MOT) and segmentation (MOTS) in large-scale, multi-class, and open-vocabulary scenarios. It has been widely used to evaluate tracker performance on the BDD100K and TAO datasets. Some key features of TETA are:

- It breaks tracking measurement into three sub-factors: localization, association, and classification, so tracking performance can be benchmarked comprehensively even under inaccurate classification.
- It handles the incomplete annotation problem in large-scale tracking datasets through local cluster evaluation, grouping predictions by location around each ground-truth box instead of by predicted class label.
- It can evaluate tracks even when the class prediction is wrong.

Install

Install the TETA environment using pip.

pip install -r requirements.txt

Go to the root of the teta folder and install the package with

pip install -e .
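
As a quick sanity check that the installation succeeded (assuming the package is importable as teta, matching the folder name above), you can run:

python -c "import teta; print(teta.__file__)"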

Supported data formats

COCO-VID format

The result format follows the COCO-VID format. We describe the format in detail here.
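
As a rough, hypothetical illustration of what a COCO-VID style result file looks like, the snippet below writes a JSON list of per-frame detections that each carry a track id; consult the format description above for the authoritative field names.

import json

# Hypothetical example entry; field names and values are assumptions made
# for illustration, not a definitive schema.
results = [
    {
        "image_id": 12345,                    # frame the detection belongs to
        "video_id": 17,                       # video containing that frame
        "category_id": 3,                     # predicted (LVIS) category id
        "track_id": 1,                        # identity kept across frames
        "bbox": [100.0, 150.0, 80.0, 60.0],   # [x, y, width, height]
        "score": 0.87,                        # detection confidence
    },
]

with open("my_tracker_results.json", "w") as f:
    json.dump(results, f)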

Scalabel format

To evaluate MOT and MOTS on BDD100K, we support the Scalabel format. We describe the format in detail here.
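
As a rough, hypothetical illustration of a Scalabel-style prediction for BDD100K MOT (see the format description above for the authoritative schema), a result file is a list of frames, each holding labeled boxes with track ids:

import json

# Hypothetical frame entry; names and values are illustrative assumptions.
frame = {
    "name": "b1c66a42-6f7d68ca-0000001.jpg",    # image file name
    "videoName": "b1c66a42-6f7d68ca",           # video the frame belongs to
    "frameIndex": 0,                            # 0-based index within the video
    "labels": [
        {
            "id": "1",                          # track identity
            "category": "car",                  # predicted class name
            "score": 0.92,                      # detection confidence
            "box2d": {"x1": 100.0, "y1": 150.0, "x2": 180.0, "y2": 210.0},
        }
    ],
}

with open("my_bdd_mot_results.json", "w") as f:
    json.dump([frame], f)

For MOTS, each label additionally carries a segmentation mask (for example an RLE), which is why the BDD100K MOTS command below uses the --with_mask flag.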

Evaluate on TAO TETA benchmark

Overall, you can run the following command to evaluate your tracker on the TAO TETA benchmark, given the ground-truth JSON file and the prediction JSON file in COCO-VID format.

python scripts/run_tao.py --METRICS TETA --TRACKERS_TO_EVAL $NAME_OF_YOUR_MODEL$ --GT_FOLDER ${GT_JSON_PATH}.json --TRACKER_SUB_FOLDER ${RESULT_JSON_PATH}.json   

TAO TETA v0.5

Please note that the TAO benchmark initially aligned its class names with LVIS v0.5, which has 1230 classes. For example, the initial TETA benchmark in the TET paper uses the v0.5 class names.

Example Run:

python scripts/run_tao.py --METRICS TETA --TRACKERS_TO_EVAL my_tracker --GT_FOLDER ./jsons/tao_val_lvis_v05_classes.json --TRACKER_SUB_FOLDER ./jsons/teter-swinL-tao-val.json

TAO TETA v1.0

Since LVIS updated its class names to v1.0, we also provide the TAO val ground-truth JSON file in v1.0 format: tao_val_lvis_v1_classes.json. A conversion script is provided in the scripts folder if you want to convert the v0.5 class names to v1.0 class names yourself.
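
If you prefer to do the conversion yourself, the provided script in the scripts folder is the reference; the rough idea, sketched hypothetically below (the file names and the name-based matching are assumptions, and classes that were merged or renamed between LVIS versions need extra handling), is to remap the ground-truth category ids by matching class names between the v0.5 and v1.0 vocabularies.

import json

# Hypothetical sketch of a v0.5 -> v1.0 class-name remapping; the script in
# scripts/ is the reference implementation.
with open("tao_val_lvis_v05_classes.json") as f:
    gt = json.load(f)
with open("lvis_v1_categories.json") as f:       # hypothetical file holding the v1.0 category list
    v1_cats = json.load(f)

name_to_v1_id = {c["name"]: c["id"] for c in v1_cats}
old_to_new = {c["id"]: name_to_v1_id.get(c["name"]) for c in gt["categories"]}

gt["categories"] = v1_cats
gt["annotations"] = [
    dict(ann, category_id=old_to_new[ann["category_id"]])
    for ann in gt["annotations"]
    if old_to_new.get(ann["category_id"]) is not None   # drop classes with no name match
]

with open("tao_val_lvis_v1_classes.json", "w") as f:
    json.dump(gt, f)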

Example Run:

python scripts/run_tao.py --METRICS TETA --TRACKERS_TO_EVAL my_tracker --GT_FOLDER ./jsons/tao_val_lvis_v1_classes.json --TRACKER_SUB_FOLDER ./jsons/masa-gdino-detic-dets-tao-val-preds.json

Evaluate on Open-Vocabulary MOT benchmark

The Open-Vocabulary MOT benchmark was first introduced by OVTrack. Here we provide the evaluation script for the Open-Vocabulary MOT benchmark, which uses the TAO dataset for evaluation and the LVIS v1.0 class names.

Overall, you can use the following command to evaluate your tracker on the Open-Vocabulary MOT benchmark.

python scripts/run_ovmot.py --METRICS TETA --TRACKERS_TO_EVAL $NAME_OF_YOUR_MODEL$ --GT_FOLDER ${GT_JSON_PATH}.json --TRACKER_SUB_FOLDER ${RESULT_JSON_PATH}.json   

Run on Open-Vocabulary MOT validation set

python scripts/run_ovmot.py --METRICS TETA --TRACKERS_TO_EVAL my_tracker --GT_FOLDER ./jsons/tao_val_lvis_v1_classes.json --TRACKER_SUB_FOLDER ./jsons/masa-gdino-detic-dets-tao-val-preds.json  

Run on Open-Vocabulary MOT test set

Evaluate on BDD100K MOT TETA benchmark

Run on the BDD100K MOT val set.

python scripts/run_bdd.py --scalabel_gt data/bdd/annotations/scalabel_gt/box_track_20/val/ --resfile_path ./jsons/masa_sam_vitb_bdd_mot_val.json --metrics TETA HOTA CLEAR 

Evaluate on BDD100K MOTS TETA benchmark

Run on the BDD100K MOTS val set.

python scripts/run_bdd.py --scalabel_gt data/bdd/annotations/scalabel_gt/seg_track_20/val/ --resfile_path ./jsons/masa_sam_vitb_bdd_mots_val.json --metrics TETA HOTA CLEAR --with_mask

Citation

@InProceedings{trackeverything,
  title = {Tracking Every Thing in the Wild},
  author = {Li, Siyuan and Danelljan, Martin and Ding, Henghui and Huang, Thomas E. and Yu, Fisher},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month = {Oct},
  year = {2022}
}