SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation

Liangliang Yao†, Haobo Zuo†, Guangze Zheng†, Changhong Fu*, Jia Pan

† Equal contribution. * Corresponding author.

Vision4robotics

๐Ÿ—๏ธ Framework


👀 Visualization of SAM-DA

One-to-many generation
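
As a reference for the one-to-many generation shown above, the sketch below illustrates how SAM's automatic mask generator can turn a single nighttime frame into many object-level patches. It is a minimal sketch built on the public segment_anything API; the checkpoint name, image path, and patch-cropping loop are illustrative assumptions, not the exact pipeline of this repository.

```python
# Minimal sketch: one nighttime frame -> many object patches with SAM.
# Assumes segment_anything is installed (see Installation below) and a SAM
# checkpoint (e.g., sam_vit_h_4b8939.pth) is available locally; file paths
# here are placeholders, not part of the official SAM-DA pipeline.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an HxWx3 uint8 RGB image.
image = cv2.cvtColor(cv2.imread("night_frame.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per segmented region

# Crop one candidate training patch per mask (one-to-many generation).
for i, m in enumerate(masks):
    x, y, w, h = (int(v) for v in m["bbox"])  # box in XYWH format
    patch = image[y:y + h, x:x + w]
    cv2.imwrite(f"patch_{i}.jpg", cv2.cvtColor(patch, cv2.COLOR_RGB2BGR))
```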

📅 Todo

🛠️ Installation

This code has been tested on Ubuntu 18.04, Python 3.8.3, PyTorch 1.13.1, and CUDA 11.6. Please install the related libraries before running this code:

Install Segment Anything:

bash install.sh

Install SAM-DA-Track:

pip install -r requirements.txt
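
After running the two install steps, a quick sanity check can confirm that PyTorch sees the GPU and that Segment Anything imports correctly. This is a minimal sketch for convenience, not an official script of this repository; the printed versions should match the tested configuration above.

```python
# Optional post-install sanity check (not an official SAM-DA script).
import torch
import segment_anything  # installed by install.sh

print(torch.__version__)          # tested with 1.13.1
print(torch.version.cuda)         # tested with 11.6
print(torch.cuda.is_available())  # should be True before training/testing
```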

😀 Getting started

Test SAM-DA

cd tracker/BAN
python tools/test.py --snapshot sam-da-track-s
python tools/eval.py

Train SAM-DA

<a name="Performance"></a> 🌈 Fewer data, better performance

SAM-DA targets few-better training, i.e., better nighttime tracking performance with less training data, for quick deployment of nighttime tracking methods on UAVs.

Training duration on a single A100 GPU.

License

The model is licensed under the Apache License 2.0.

Citations

Please consider citing the related paper(s) in your publications if they help your research.

@article{Yao2023SAMDA,
  title={{SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation}},
  author={Yao, Liangliang and Zuo, Haobo and Zheng, Guangze and Fu, Changhong and Pan, Jia},
  journal={arXiv preprint arXiv:2307.01024},
  year={2023},
  pages={1-12}
}

@article{kirillov2023segment,
  title={{Segment Anything}},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C and Lo, Wan-Yen and others},
  journal={arXiv preprint arXiv:2304.02643},
  year={2023},
  pages={1-30}
}

@inproceedings{Ye2022CVPR,
  title={{Unsupervised Domain Adaptation for Nighttime Aerial Tracking}},
  author={Ye, Junjie and Fu, Changhong and Zheng, Guangze and Paudel, Danda Pani and Chen, Guang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={1-10}
}

Acknowledgments

We sincerely thank the contributions of the following repos: SAM, SiamBAN, and UDAT.

Contact

If you have any questions, please contact Liangliang Yao at 1951018@tongji.edu.cn or Changhong Fu at changhongfu@tongji.edu.cn.