<img src="/img/UAM_approaching.gif" width="800" alt="" />

Official code for our work on UAM object tracking:
- [IROS 2022] Siamese Object Tracking for Vision-Based UAM Approaching with Pairwise Scale-Channel Attention
- [TII 2022] Scale-Aware Siamese Object Tracking for Vision-Based UAM Approaching
:bust_in_silhouette: Guangze Zheng, Changhong Fu*, Junjie Ye, Bowen Li, Geng Lu, and Jia Pan
## 1. Introduction
SiamSA aims to provide a model-free solution for UAM tracking while approaching the object (for manipulation). Since the scale variation (SV) issue is more crucial in UAM approaching than in general object-tracking scenes, novel scale awareness is proposed with powerful attention methods.
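For intuition only, here is a minimal PyTorch sketch of what a pairwise scale-channel attention block can look like; the layer shapes, names, and fusion scheme are illustrative assumptions, not the exact SiamSA architecture:

```python
import torch
import torch.nn as nn

class ScaleChannelAttention(nn.Module):
    """Illustrative sketch: re-weight the feature channels of two scale
    branches against each other (shapes and names are assumptions, not
    SiamSA's exact design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze spatial dims
        self.mlp = nn.Sequential(                      # shared excitation MLP
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        b, c, _, _ = feat_a.shape
        # pairwise channel descriptor built from both scale branches
        desc = torch.cat([self.pool(feat_a), self.pool(feat_b)], dim=1).flatten(1)
        w = self.mlp(desc).view(b, 2 * c, 1, 1)
        wa, wb = w[:, :c], w[:, c:]
        return feat_a * wa, feat_b * wb                # re-weighted branches
```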
Please refer to our project page, papers, dataset, and videos for more details.
:newspaper:[Project page] :page_facing_up:[TII Paper] :page_facing_up:[IROS Paper] :books:[UAM Tracking Dataset] :movie_camera: [TII Demo] :movie_camera: [IROS Presentation]
## 2. UAMT100 & UAMT20L benchmark
### 2.1 Introduction

- With 100 image sequences, UAMT100 is a benchmark to evaluate object tracking methods for UAM approaching, while UAMT20L contains 20 long sequences. All sequences are recorded on a flying UAM platform.
- 16 kinds of objects are involved.
- 12 attributes are annotated for each sequence: aspect ratio change (ARC), background clutter (BC), fast motion (FM), low illumination (LI), object blur (OB), out-of-view (OV), partial occlusion (POC), similar object (SOB), scale variation (SV), UAM attack (UAM-A), viewpoint change (VC), and wind disturbance (WD).
### 2.2 Scale variation difference between UAV and UAM tracking

<img src="/img/SV.png" width="540" alt="" />

A larger area under the curve means a higher frequency of object SV. Clearly, SV in UAM tracking is much more common and severe than in UAV tracking.
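As a rough illustration of how SV could be quantified, the sketch below computes the per-frame object-area ratio from ground-truth boxes; the annotation path, box format (x, y, w, h), and the 2x threshold are assumptions, not the benchmark's exact protocol:

```python
import numpy as np

def scale_variation_ratio(gt_boxes):
    """gt_boxes: (N, 4) array of per-frame (x, y, w, h) ground-truth boxes.
    Returns each frame's box area relative to the first frame."""
    areas = gt_boxes[:, 2] * gt_boxes[:, 3]
    return areas / max(areas[0], 1e-6)

# Hypothetical annotation path; flag frames whose area changed by more than 2x
boxes = np.loadtxt('UAMT100/anno/sequence_001.txt', delimiter=',')
ratio = scale_variation_ratio(boxes)
sv_frames = np.where((ratio > 2.0) | (ratio < 0.5))[0]
print(f'{len(sv_frames)} / {len(ratio)} frames show severe scale variation')
```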
### 2.3 Download and evaluation
- Please download the dataset from our project page.
- You can directly download our evaluation results (.mat) of SOTA trackers on the UAMT benchmark from GoogleDrive or BaiduYun.
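To inspect the released `.mat` result files in Python, something like the following works (the file name is hypothetical; check the actual keys after loading):

```python
from scipy.io import loadmat

# Hypothetical file name; the available fields depend on how the
# results were exported from MATLAB.
data = loadmat('results_OPE/SiamSA_UAMT100.mat')
print([k for k in data.keys() if not k.startswith('__')])  # list the stored fields
```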
## 3. Get started!
### 3.1 Environment setup

This code has been tested on Ubuntu 18.04, Python 3.8.3, PyTorch 1.6.0, and CUDA 10.2. Please install the required libraries before running the code:

```bash
git clone https://github.com/vision4robotics/SiamSA
cd SiamSA
pip install -r requirements.txt
```
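Optionally, you can sanity-check that your environment matches the tested versions:

```python
import torch

print('PyTorch:', torch.__version__)            # tested with 1.6.0
print('CUDA available:', torch.cuda.is_available())
print('CUDA version:', torch.version.cuda)      # tested with 10.2
```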
### 3.2 Test
- For testing SiamSA_IROS22: download the SiamSA_IROS22 model from GoogleDrive or BaiduYun and put it into the `snapshot` directory.
- For testing SiamSA_TII22: download the SiamSA_TII22 model from GoogleDrive or BaiduYun and put it into the `snapshot` directory.
- Download the testing datasets (UAMT100/UAMT20L/UAV123@10fps) and put them into the `test_dataset` directory. If you want to test the tracker on a new dataset, please refer to pysot-toolkit to set up `test_dataset`; a minimal example follows below.
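If you adapt a new dataset, pysot-toolkit-style benchmarks usually describe each sequence in a JSON file. The sketch below writes a minimal skeleton; the field names follow the common pysot convention, and all paths and values are hypothetical:

```python
import json

# Minimal pysot-style dataset description (field names follow the common
# pysot convention; adapt them to what pysot-toolkit expects for your data).
dataset = {
    'sequence_001': {
        'video_dir': 'sequence_001',
        'init_rect': [100, 120, 40, 60],                  # first-frame (x, y, w, h)
        'img_names': ['sequence_001/0001.jpg', 'sequence_001/0002.jpg'],
        'gt_rect': [[100, 120, 40, 60], [102, 121, 41, 62]],
        'attr': ['SV', 'POC'],                            # per-sequence attributes
    }
}
with open('test_dataset/MyDataset.json', 'w') as f:
    json.dump(dataset, f, indent=2)
```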
```bash
python tools/test.py \
    --trackername SiamSA \            # tracker_name
    --dataset UAMT100 \               # dataset_name
    --snapshot snapshot/model.pth     # model_path
```
The testing results will be saved in the `results/<dataset_name>/<tracker_name>` directory.
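Each per-sequence result is typically a plain-text file with one predicted (x, y, w, h) box per line (a common pysot-style convention; verify against your own output). For example:

```python
import numpy as np

# Hypothetical result path following results/<dataset_name>/<tracker_name>/
pred = np.loadtxt('results/UAMT100/SiamSA/sequence_001.txt', delimiter=',')
print(pred.shape)  # (num_frames, 4), boxes as (x, y, w, h)
```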
### 3.3 Evaluate

If you want to evaluate the tracker mentioned above, please put those results into the `results` directory.
```bash
python eval.py \
    --tracker_path ./results \    # result path
    --dataset UAMT100 \           # dataset_name
    --tracker_prefix 'model'      # tracker_name
```
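For reference, the success metric in such toolkits is typically the fraction of frames whose predicted box overlaps the ground truth above an IoU threshold; a minimal, toolkit-agnostic sketch:

```python
import numpy as np

def iou_xywh(pred, gt):
    """IoU between two (N, 4) arrays of (x, y, w, h) boxes."""
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-6)

# Success curve: fraction of frames whose IoU exceeds each threshold
thresholds = np.linspace(0, 1, 21)
# success = [(iou_xywh(pred, gt) > t).mean() for t in thresholds]
```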
### 3.4 Train

- Download the pretrained backbone from GoogleDrive or BaiduYun and put it into the `pretrained_models` directory (see the loading sketch after this list).
- Prepare the training datasets. Download the datasets; `train_dataset/dataset_name/readme.md` lists detailed instructions on how to generate the training datasets.
- Train a model. To train the SiamSA model, run `train.py` with the desired configs:

  ```bash
  python tools/train.py
  ```

- Test and evaluate. Once you get a model, you may want to test and evaluate its performance by following the instructions in 3.2 and 3.3 above.
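As referenced above, a generic PyTorch pattern for initializing a network from a pretrained backbone checkpoint looks like this (the model class and file name below are stand-ins, not the repo's actual ones):

```python
import torch
import torchvision

# Generic sketch: torchvision's AlexNet stands in for the real SiamSA
# backbone, and the checkpoint file name is a placeholder.
backbone = torchvision.models.alexnet()
state = torch.load('pretrained_models/backbone.pth', map_location='cpu')
missing, unexpected = backbone.load_state_dict(state, strict=False)
print('missing keys:', len(missing), '| unexpected keys:', len(unexpected))
```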
## 4. Cite SiamSA and the UAM tracking benchmark

If you find SiamSA and the UAM tracking benchmark useful, please cite our work with the following BibTeX entries:
```bibtex
@inproceedings{SiamSA2022IROS,
  title={{Siamese Object Tracking for Vision-Based UAM Approaching with Pairwise Scale-Channel Attention}},
  author={Zheng, Guangze and Fu, Changhong and Ye, Junjie and Li, Bowen and Lu, Geng and Pan, Jia},
  booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={10486--10492},
  year={2022}
}

@article{SiamSA2023TII,
  title={{Scale-Aware Siamese Object Tracking for Vision-Based UAM Approaching}},
  author={Zheng, Guangze and Fu, Changhong and Ye, Junjie and Li, Bowen and Lu, Geng and Pan, Jia},
  journal={IEEE Transactions on Industrial Informatics},
  year={2023},
  pages={1--12}
}
```
## Contact

If you have any questions, please don't hesitate to get in touch with me.

Guangze Zheng
- Email: mmlp@tongji.edu.cn
- Homepage: [Guangze Zheng](https://george-zhuang.github.io)
## Acknowledgement
- The code is implemented based on pysot, SiamAPN, and SiamSE. We want to express our sincere thanks to the contributors.
- We want to thank Ziang Cao for his advice on the code.
- We appreciate the help from Fuling Lin, Haobo Zuo, and Liangliang Yao.
- We want to thank Kunhan Lu for his advice on TensorRT acceleration.