TATrack

Target-Aware Tracking with Long-term Context Attention has been accepted by AAAI 2023.

Raw Results and Weights: https://drive.google.com/drive/folders/1PqiciVkwmtD9VCRkHVhZLsLA6cuz5oF1?usp=share_link

Setup

conda create -n TATrack python=3.9 -y
conda activate TATrack
conda install pytorch torchvision cudatoolkit -c pytorch

pip install -r requirements.txt
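After installation, a quick sanity check can confirm that the environment is usable. This is a sketch that only assumes the packages above were installed into the active environment; it prints the installed versions and whether a CUDA device is visible:

```shell
# Verify that the core dependencies import, and report CUDA visibility.
# Falls through with a message instead of a traceback if something is missing.
python - <<'EOF'
try:
    import torch, torchvision
    print("torch", torch.__version__, "| torchvision", torchvision.__version__)
    print("cuda available:", torch.cuda.is_available())
except ImportError as e:
    print("missing dependency:", e)
EOF
```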

Test

├── TATrack
|   ├── ...
|   ├── ...
|   ├── datasets
|   |   ├── COCO -> /opt/data/COCO
|   |   ├── GOT-10k -> /opt/data/GOT-10k
|   |   ├── LaSOT -> /opt/data/LaSOT/LaSOTBenchmark
|   |   ├── OTB
|   |   |   └── OTB2015 -> /opt/data/OTB2015
|   |   ├── TrackingNet -> /opt/data/TrackingNet
|   |   ├── UAV123 -> /opt/data/UAV123/UAV123
|   |   ├── VOT
|   |   |   ├── vot2018
|   |   |   |   ├── VOT2018 -> /opt/data/VOT2018
|   |   |   |   └── VOT2018.json

i. Star notation (*): used only for training. You can ignore these datasets if you only want to test the tracker.

ii. In this setup, we create soft links for every dataset. All datasets are actually stored under /opt/data/; change the link targets to match your own storage layout.
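The soft links above can be recreated with `ln`; the following is a sketch that mirrors the directory tree shown earlier (the `DATA_ROOT` variable is a convenience we introduce here, defaulting to /opt/data, so you can point it at your own storage):

```shell
# Recreate the datasets/ layout as soft links into the real storage location.
# DATA_ROOT defaults to /opt/data, matching the directory tree above.
DATA_ROOT=${DATA_ROOT:-/opt/data}
mkdir -p datasets/OTB datasets/VOT/vot2018
ln -sfn "$DATA_ROOT/COCO"                 datasets/COCO
ln -sfn "$DATA_ROOT/GOT-10k"              datasets/GOT-10k
ln -sfn "$DATA_ROOT/LaSOT/LaSOTBenchmark" datasets/LaSOT
ln -sfn "$DATA_ROOT/OTB2015"              datasets/OTB/OTB2015
ln -sfn "$DATA_ROOT/TrackingNet"          datasets/TrackingNet
ln -sfn "$DATA_ROOT/UAV123/UAV123"        datasets/UAV123
ln -sfn "$DATA_ROOT/VOT2018"              datasets/VOT/vot2018/VOT2018
```

The `-sfn` flags make the block safe to re-run: an existing link is replaced in place, so moving the data only requires setting `DATA_ROOT` and running it again. (VOT2018.json is a metadata file, not a link, and is not covered here.)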

Download the models we trained, set the `pretrain_model_path` item in the configuration file to the path of the trained model, then run the shell command below.

General command format

python main/test.py --config testing_dataset_config_file_path

Take GOT-10k as an example:

python main/test.py --config experiments/tatrack/test/base/got.yaml

Training

Training based on the GOT-10k benchmark:

python main/train.py --config experiments/tatrack/train/base-got.yaml

Training with full data:

python main/train.py --config experiments/tatrack/train/base.yaml

BibTeX

@article{he2023target,
  title={Target-Aware Tracking with Long-term Context Attention},
  author={He, Kaijie and Zhang, Canlong and Xie, Sheng and Li, Zhixin and Wang, Zhiwen},
  journal={arXiv preprint arXiv:2302.13840},
  year={2023}
}