DaDiff: Domain-aware Diffusion Model for Nighttime UAV Tracking

Haobo Zuo, Changhong Fu, Guangze Zheng, Liangliang Yao, Kunhan Lu, and Jia Pan. DaDiff: Domain-aware Diffusion Model for Nighttime UAV Tracking.

Overview

DaDiff is a diffusion model-based domain adaptation framework for visual object tracking, targeting nighttime UAV scenarios. This repository contains its Python implementation.
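
DaDiff applies a diffusion model to domain adaptation for tracking. Purely as background intuition, the sketch below shows a standard DDPM-style forward noising step applied to feature maps; the schedule values, tensor shapes, and function names are illustrative assumptions and do not reflect the actual DaDiff implementation.

    import torch

    # Illustrative linear beta schedule; these values are assumptions, not DaDiff's settings
    T = 1000
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    def q_sample(feat, t, noise=None):
        """Standard DDPM forward step: noise a feature map to diffusion step t."""
        if noise is None:
            noise = torch.randn_like(feat)
        a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
        return a_bar.sqrt() * feat + (1.0 - a_bar).sqrt() * noise

    # Example: noise a batch of (hypothetical) backbone feature maps to random steps
    feat = torch.randn(8, 256, 25, 25)
    t = torch.randint(0, T, (feat.size(0),))
    noisy_feat = q_sample(feat, t)
    print(noisy_feat.shape)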

Environment

This code has been tested on Ubuntu 18.04 with Python 3.8.3, PyTorch 1.6.0 (torchvision 0.7.0), and CUDA 10.2. Please install the required libraries before running the code:

pip install -r requirements.txt
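
To confirm that the interpreter, PyTorch build, and CUDA runtime roughly match the tested configuration above, a quick sanity check such as the following may help (it assumes PyTorch is already installed):

    import sys
    import torch

    # Expect roughly Python 3.8.x, PyTorch 1.6.0, and CUDA 10.2 per the tested environment
    print("Python:", sys.version.split()[0])
    print("PyTorch:", torch.__version__, "| CUDA build:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())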

Training and Testing DaDiff

1. Preprocessing

Before training, the training data needs to be preprocessed to generate training pairs. In addition, the proposed NUT-LR benchmark can be downloaded (step 3 below) to evaluate the performance of DaDiff.

  1. Download the nighttime training set NAT2021-train.

  2. Follow the preprocessing pipeline of UDAT to prepare the nighttime training data.

  3. Download the proposed NUT-LR for nighttime UAV tracking of low-resolution objects.
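
After the UDAT-style preprocessing (step 2), it is worth checking that the cropped images and the annotation JSON were actually produced before launching training. The paths below only follow the common pysot/UDAT convention (crop511 folders plus a train.json) and are assumptions; adjust them to wherever your preprocessing writes its output.

    import json
    from pathlib import Path

    # Hypothetical output locations following the pysot/UDAT preprocessing convention
    crop_dir = Path("train_dataset/NAT2021/crop511")
    anno_file = Path("train_dataset/NAT2021/train.json")

    assert crop_dir.is_dir(), f"missing cropped images: {crop_dir}"
    with open(anno_file) as f:
        annos = json.load(f)
    n_frames = sum(1 for _ in crop_dir.rglob("*.jpg"))
    print(f"{len(annos)} annotated videos, {n_frames} cropped frames")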

2. Train

Take DaDiff-GAT as an example.

  1. Apart from the target-domain dataset NAT2021 above, you need to download and prepare the source-domain datasets VID and GOT-10K.

  2. Download the pre-trained daytime model (SiamGAT/SiamBAN) and place it at DaDiff/SiamGAT/snapshot (a quick snapshot load check is sketched after the training commands below).

  3. Start training

    cd DaDiff/SiamGAT
    export PYTHONPATH=$PWD
    python tools/train.py
    
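Before a full training run, you may want to verify that the pre-trained daytime snapshot from step 2 loads cleanly. The file name below is only a placeholder; use the checkpoint you actually downloaded.

    import torch

    # Placeholder path: the pre-trained daytime snapshot placed under DaDiff/SiamGAT/snapshot
    ckpt_path = "snapshot/siamgat_pretrained.pth"

    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)  # some snapshots wrap weights in "state_dict"
    print(f"{len(state_dict)} parameter tensors")
    for name in list(state_dict)[:5]:
        print(name, tuple(state_dict[name].shape))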

3. Test

Take DaDiff-GAT as an example.

  1. For a quick test, you can download our trained DaDiff-GAT model (or DaDiff-BAN) and place it at DaDiff/SiamGAT/snapshot.

  2. Download the testing datasets and put them in your own directory. If you want to test DaDiff on a new dataset, please refer to the toolkit to set up the test dataset.

  3. Start testing

    python tools/test.py --dataset NUT-L
    
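The tracker writes per-sequence result files that can be compared against the groundtruth. Assuming the common pysot-style format (one "x,y,w,h" line per frame; the file paths below are hypothetical), a minimal overlap check looks like this:

    import numpy as np

    def iou(a, b):
        """IoU between two [x, y, w, h] boxes."""
        ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    # Hypothetical paths: predicted and groundtruth boxes, one "x,y,w,h" line per frame
    pred = np.loadtxt("results/NUT-L/DaDiff-GAT/seq001.txt", delimiter=",")
    gt = np.loadtxt("NUT-L/seq001/groundtruth.txt", delimiter=",")

    overlaps = np.array([iou(p, g) for p, g in zip(pred, gt)])
    print("mean overlap:", overlaps.mean(), "| success@0.5:", (overlaps > 0.5).mean())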

Demo

Demo Video

Acknowledgments

We sincerely thank the contributions of the following repositories: DDIM, SiamGAT, SiamBAN, and UDAT.

Contact

The official code of DaDiff will continue to be regularly refined and improved to ensure its quality and functionality. If you have any questions, please contact Haobo Zuo at haobozuo@connect.hku.hk or Changhong Fu at changhongfu@tongji.edu.cn.