PiVOT :unicorn:

This is a Generic Object Tracking Project.

:fire: PiVOT has been accepted to IEEE Transactions on Multimedia (TMM) 2024!

Getting started

This is the official repository for "Improving Visual Object Tracking through Visual Prompting."

PiVOT proposes a prompt-generation network built on the pre-trained foundation model CLIP to automatically generate and refine visual prompts, transferring foundation-model knowledge to the tracker.
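As a rough illustration of this idea (a minimal sketch, not PiVOT's actual prompt-generation network), candidate regions can be scored by their CLIP feature similarity to the target template, and the scores used to emphasize likely target regions in a visual prompt. The snippet assumes the openai/CLIP package and PIL image crops; all names in it are illustrative:

```python
# Minimal sketch: CLIP-similarity scoring of candidate regions against the
# target template. Illustrative only -- PiVOT's actual prompt network differs.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def score_candidates(template_crop: Image.Image, candidate_crops):
    """Return a cosine-similarity score per candidate crop.

    High-scoring candidates resemble the target template in CLIP feature
    space and can be emphasized in the visual prompt fed to the tracker.
    """
    with torch.no_grad():
        ref = model.encode_image(preprocess(template_crop).unsqueeze(0).to(device))
        batch = torch.stack([preprocess(c) for c in candidate_crops]).to(device)
        feats = model.encode_image(batch)
        ref = ref / ref.norm(dim=-1, keepdim=True)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return (feats @ ref.T).squeeze(1)  # shape: (num_candidates,)
```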

Raw Results

The raw results can be downloaded from here.

| Dataset | Model | NPr | Suc | Pr | OP50 | OP75 |
|---------|-------|-----|-----|----|------|------|
| NfS-30 | ToMP-50 | 84.00 | 66.86 | 80.58 | 84.36 | 53.50 |
| NfS-30 | SeqTrack-L | 84.35 | 65.46 | 81.93 | 82.37 | 48.69 |
| NfS-30 | PiVOT-L | 86.66 | 68.22 | 84.53 | 86.05 | 55.45 |
| LaSOT | ToMP-50 | 77.98 | 67.57 | 72.24 | 79.79 | 65.06 |
| LaSOT | SeqTrack-L | 81.53 | 72.51 | 79.25 | 82.98 | 72.68 |
| LaSOT | PiVOT-L | 84.68 | 73.37 | 82.09 | 85.64 | 75.18 |
| AVisT | ToMP-50 | 66.66 | 51.61 | 47.74 | 59.47 | 38.88 |
| AVisT | PiVOT-L | 81.20 | 62.18 | 65.55 | 73.25 | 55.46 |
| UAV123 | ToMP-50 | 84.79 | 68.97 | 89.70 | 83.84 | 64.63 |
| UAV123 | SeqTrack-L | 85.83 | 69.67 | 91.35 | 84.98 | 63.31 |
| UAV123 | PiVOT-L | 86.74 | 70.66 | 91.80 | 85.69 | 67.06 |
| OTB-100 | ToMP-50 | 85.98 | 70.07 | 90.83 | 87.83 | 57.79 |
| OTB-100 | PiVOT-L | 88.46 | 71.20 | 94.58 | 89.35 | 55.73 |

Suc: Success Rate (AUC of the success plot)
Pr: Precision (center error within 20 pixels)
NPr: Normalized Precision
OP50 / OP75: Overlap Precision at IoU thresholds of 0.50 / 0.75
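For reference, these metrics are commonly computed from per-frame IoU and center error as in the sketch below. This is an illustrative approximation; the exact per-dataset protocol follows PyTracking's analysis code:

```python
import numpy as np

def box_iou(pred, gt):
    """IoU between [x, y, w, h] boxes; both arrays have shape (N, 4)."""
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / union

def tracking_metrics(pred, gt):
    ov = box_iou(pred, gt)
    err = np.linalg.norm((pred[:, :2] + pred[:, 2:] / 2) -
                         (gt[:, :2] + gt[:, 2:] / 2), axis=1)
    return {
        # AUC of the success plot: mean success rate over IoU thresholds 0..1
        "Suc": 100 * np.mean([np.mean(ov > t) for t in np.linspace(0, 1, 21)]),
        "Pr": 100 * np.mean(err <= 20),   # center error within 20 pixels
        "OP50": 100 * np.mean(ov > 0.50),
        "OP75": 100 * np.mean(ov > 0.75),
    }
```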

Prerequisites

The codebase is built on PyTracking.

Familiarity with the PyTracking codebase will help in understanding the structure of this project.

Installation

Clone the git repository.

git clone https://github.com/chenshihfang/GOT.git

Ensure that CUDA 11.7 is installed.

Install dependencies

sudo apt-get install libturbojpeg

Run the installation script to install all the dependencies. You need to provide the conda install path and a name for the new conda environment:

bash install_PiVOT.sh /your_anaconda3_path/ got_pivot
conda activate got_pivot
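Optionally, a quick sanity check that the new environment sees the GPU and a CUDA 11.7 build of PyTorch:

```python
import torch
print(torch.__version__)          # version installed by install_PiVOT.sh
print(torch.version.cuda)         # expect 11.7
print(torch.cuda.is_available())  # expect True
```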

Set Up the Dataset Environment

You can follow the setup instructions from PyTracking.

There are two different local.py files, located in:

pytracking/evaluation/local.py (evaluation dataset and result paths)
ltr/admin/local.py (training workspace and dataset paths)
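For illustration, the evaluation-side local.py follows PyTracking's EnvSettings convention; the attribute names below are the standard PyTracking ones, and every path is a placeholder for your machine:

```python
# Illustrative excerpt of pytracking/evaluation/local.py; the generated file
# in your checkout lists the full set of attributes. All paths are placeholders.
from pytracking.evaluation.environment import EnvSettings

def local_env_settings():
    settings = EnvSettings()
    settings.network_path = '/path/to/pretrained/networks/'
    settings.results_path = '/path/to/tracking/results/'
    settings.lasot_path = '/path/to/LaSOT/'
    settings.nfs_path = '/path/to/NfS/'
    settings.otb_path = '/path/to/OTB100/'
    settings.uav_path = '/path/to/UAV123/'
    return settings
```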

Evaluate Tracking Performance on the Datasets

python evaluate_PiVOT_results.py  

Pretrained Model

The pretrained model can be downloaded from here.

Evaluate the Tracker

  1. First, set the parameter self.infer to True in ltr/models/tracking/tompnet.py.

  2. Second, set the pretrained model path in pytracking/pytracking/parameter/tomp/pivotL27.py.

  3. Then execute the following command (an illustrative experiment definition follows these steps):

    CUDA_VISIBLE_DEVICES=0 python pytracking/run_experiment.py myexperiments_pivot pivot --debug 0 --threads 1
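For context, a PyTracking experiment module defines a function that returns the trackers and the dataset to run; run_experiment.py looks it up by module and function name. The repository ships myexperiments_pivot; such a definition typically looks like the sketch below (the dataset choice here is an assumption):

```python
# Sketch of a PyTracking experiment definition (pytracking/experiments/*.py).
# The repository's own myexperiments_pivot.py is authoritative; the dataset
# name below is only an example.
from pytracking.evaluation import get_dataset, trackerlist

def pivot():
    trackers = trackerlist('tomp', 'pivotL27', range(1))  # parameter file pivotL27.py
    dataset = get_dataset('otb')  # e.g., OTB-100
    return trackers, dataset
```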
    

Training

  1. First, set the parameter self.infer to False in ltr/models/tracking/tompnet.py.

  2. Then, proceed with the following stages:

    Stage 1:

    python ltr/run_training.py tomp tomp_L_27
    

    Stage 2: Set the path to the Stage 1 tomp_L_27 checkpoint in ltr/train_settings/tomp/pivot_L_27.py (see the sketch after these steps).

    Then run:

    python ltr/run_training.py tomp pivot_L_27
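PyTracking saves Stage 1 checkpoints under the workspace directory configured in ltr/admin/local.py, named after the network class and epoch. A hypothetical illustration of referencing that checkpoint inside pivot_L_27.py (the actual attribute name in the repository's settings file may differ; check the file itself):

```python
# Hypothetical -- consult ltr/train_settings/tomp/pivot_L_27.py for the real
# field name. The checkpoint filename and epoch number are placeholders.
settings.pretrained_net = '/your_workspace/checkpoints/ltr/tomp/tomp_L_27/ToMPnet_ep0050.pth.tar'
```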
    

Acknowledgement

This codebase is built on the PyTracking library.

Citing PiVOT

If you find this repository useful, please consider giving it a star :star: and a citation:

@article{got2024pivot,
  title   = {Improving Visual Object Tracking through Visual Prompting},
  author  = {Shih-Fang Chen and Jun-Cheng Chen and I-Hong Jhuo and Yen-Yu Lin},
  journal = {arXiv preprint arXiv:2409.18901},
  year    = {2024}
}

We will update the TMM citation with the DOI once it is announced.

@misc{PiVOT,
  title     = {Improving Visual Object Tracking through Visual Prompting},
  author    = {Chen, Shih-Fang and Chen, Jun-Cheng and Jhuo, I-Hong and Lin, Yen-Yu},
  note      = {To appear in IEEE Transactions on Multimedia},
  publisher = {IEEE}
}