# [Pattern Recognition] Multiple-environment Self-adaptive Network for Aerial-view Geo-localization
## MuseNet

<div align="center"> <img src="docs/dance.jpg" alt="Editor" width="700"> </div>
<div align="center"> <img src="docs/visual.png" alt="Editor" width="800"> </div>
## Prerequisites
- Python 3.6
- GPU memory >= 8 GB
- NumPy > 1.12.1
- PyTorch 0.3+
- scipy == 1.2.1
- imgaug == 0.4.0
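As a quick sanity check before training, the version floors above can be compared against an installed environment. The snippet below is a minimal sketch (the version strings are taken from the list above; the helper names are our own, not part of this repo):

```python
import sys

# Minimum versions from the Prerequisites list above
# (scipy and imgaug are exact pins, so check those with ==)
MIN_VERSIONS = {
    "python": "3.6",
    "numpy": "1.12.1",
    "torch": "0.3",
}

def version_tuple(version):
    """Turn a version string like '1.12.1' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def meets(installed, required):
    """Return True if the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

# Example: check the running interpreter against the Python requirement
py_version = ".".join(str(n) for n in sys.version_info[:3])
status = "OK" if meets(py_version, MIN_VERSIONS["python"]) else "too old"
print(f"Python {py_version}: {status}")
```

The same `meets` check can be applied to `numpy.__version__` and `torch.__version__` once those packages are installed.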
## Getting started
### Dataset & Preparation
Download University-1652 upon request. You may use the request template.
<!--Download [SUES-200](https://github.com/Reza-Zhu/SUES-200-Benchmark).-->
Download CVUSA.
## Train & Evaluation
### University-1652
```bash
sh run.sh
```
:sparkles: Download the trained model.
### CVUSA
```bash
python prepare_cvusa.py
sh run_cvusa.sh
```
## Citation
```bibtex
@article{wang2024Muse,
  title = {Multiple-environment Self-adaptive Network for Aerial-view Geo-localization},
  author = {Wang, Tingyu and Zheng, Zhedong and Sun, Yaoqi and Yan, Chenggang and Yang, Yi and Chua, Tat-Seng},
  journal = {Pattern Recognition},
  volume = {152},
  pages = {110363},
  year = {2024},
  doi = {10.1016/j.patcog.2024.110363}
}

@article{wang2021LPN,
  title = {Each Part Matters: Local Patterns Facilitate Cross-View Geo-Localization},
  author = {Wang, Tingyu and Zheng, Zhedong and Yan, Chenggang and Zhang, Jiyong and Sun, Yaoqi and Zheng, Bolun and Yang, Yi},
  journal = {IEEE Transactions on Circuits and Systems for Video Technology},
  year = {2022},
  volume = {32},
  number = {2},
  pages = {867-879},
  doi = {10.1109/TCSVT.2021.3061265}
}

@article{zheng2020university,
  title = {University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization},
  author = {Zheng, Zhedong and Wei, Yunchao and Yang, Yi},
  journal = {ACM Multimedia},
  year = {2020}
}
```