<h2 align="center">LoveNAS: Towards Multi-Scene Land-Cover Mapping via Hierarchical Searching Adaptive Network</h2>
<h5 align="right">by <a href="https://junjue-wang.github.io/homepage/">Junjue Wang</a>, <a href="http://rsidea.whu.edu.cn/">Yanfei Zhong</a>, <a href="http://zhuozheng.top/">Zhuo Zheng</a>, Yuting Wan, Ailong Ma and Liangpei Zhang</h5>

This is an official implementation of LoveNAS.
<div align="center">
  <img width="100%" src="https://github.com/Junjue-Wang/resources/blob/main/LoveNAS/framework.png?raw=true">
</div>

## Environments
- pytorch >= 1.11.0
- python >= 3.6
Install dependencies:

```bash
pip install --upgrade git+https://gitee.com/zhuozheng/ever_lite.git@v1.4.5
pip install git+https://github.com/qubvel/segmentation_models.pytorch
pip install mmcv-full==1.4.7 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html
```
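After installing, a quick import check can catch version mismatches before anything else runs. This is an optional sanity script, not part of the repo; it checks torch, mmcv, and segmentation_models_pytorch, and omits ever_lite since its import name is not documented above.

```python
# Optional sanity check: confirm the pinned dependencies import cleanly.
import torch
import mmcv
import segmentation_models_pytorch as smp

print("torch:", torch.__version__)           # expect >= 1.11.0
print("CUDA available:", torch.cuda.is_available())
print("mmcv:", mmcv.__version__)             # expect 1.4.7
print("smp:", smp.__version__)
```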
The Swin Transformer pretrained weights can be prepared following MMSegmentation (its `tools/model_converters/swin2mmseg.py` script converts the official checkpoints into the expected key layout).
## Search architecture

```bash
bash ./scripts/nas_loveda.sh
```
## Train model

The searched architectures and transferred encoder weights should be downloaded.

```bash
bash ./scripts/train_loveda.sh
```
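Before launching training, it can be worth confirming that a downloaded weight file actually deserializes. A minimal sketch, with a placeholder filename rather than an actual file from the release:

```python
# Verify a downloaded checkpoint deserializes and inspect its layout.
# "encoder_weights.pth" is a placeholder name; substitute the file you downloaded.
import torch

ckpt = torch.load("encoder_weights.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # weights are sometimes nested under 'state_dict'
print(f"{len(state)} tensors")
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))
```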
## Predict test results

The searched architectures and LoveNAS model weights should be downloaded. Submit the test results to the LoveDA Semantic Segmentation Challenge to get scores.

```bash
bash ./scripts/submit_loveda.sh
```
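The challenge server takes a zip of predicted PNG masks. If you need to package the predictions by hand, here is a minimal sketch; `./submit_results` is an assumed output directory, so point it at wherever `submit_loveda.sh` writes its masks.

```python
# Bundle predicted masks into a single zip for upload.
import zipfile
from pathlib import Path

pred_dir = Path("./submit_results")  # assumed output dir; use your actual path
pngs = sorted(pred_dir.glob("*.png"))
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for png in pngs:
        zf.write(png, arcname=png.name)  # flat names matching the test tiles
print(f"wrote submission.zip with {len(pngs)} masks")
```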
## LoveDA

The LoveDA dataset can be downloaded here.
Submit the test results to the LoveDA Semantic Segmentation Challenge to get scores.
| Search-Config | Backbone | Train-Config | Params (M) | mIoU (%) | Download |
|---|---|---|---|---|---|
| config | MobileNetV2 | config | 3.837 | 50.60 | log&ckpt (pwd:2333) |
| config | ResNet-50 | config | 30.491 | 52.34 | log&ckpt (pwd:2333) |
| config | EfficientNet-B3 | config | 14.190 | 52.05 | log&ckpt (pwd:2333) |
| config | Swin-Base | config | 92.435 | 53.76 | log&ckpt (pwd:2333) |
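For reference, the mIoU column is the per-class intersection-over-union averaged over LoveDA's seven classes. A minimal sketch of that computation from a confusion matrix (illustrative, not the repo's metric code):

```python
# Minimal mIoU from a confusion matrix.
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """conf[i, j] = number of pixels with ground-truth class i predicted as j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1.0)  # absent classes contribute IoU 0
    return float(iou.mean())
```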
## FloodNet

The FloodNet dataset can be downloaded here.
The training data should be prepared using prepare_floodnet.py.
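The authoritative preprocessing lives in `prepare_floodnet.py`; as a rough illustration of the kind of step involved (FloodNet frames are large UAV images, typically cropped into fixed-size training tiles), the sketch below tiles a directory of images. The patch size and directory layout are assumptions, not the script's actual parameters.

```python
# Illustrative tiling only; the real logic lives in prepare_floodnet.py.
from pathlib import Path
from PIL import Image

PATCH = 1024                                    # assumed tile size
src_dir = Path("FloodNet/train/image")          # assumed layout
dst_dir = Path("FloodNet/train/image_patches")
dst_dir.mkdir(parents=True, exist_ok=True)

for img_path in sorted(src_dir.glob("*.jpg")):
    img = Image.open(img_path)
    w, h = img.size
    for top in range(0, h - PATCH + 1, PATCH):
        for left in range(0, w - PATCH + 1, PATCH):
            img.crop((left, top, left + PATCH, top + PATCH)).save(
                dst_dir / f"{img_path.stem}_{top}_{left}.png"
            )
```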
| Search-Config | Backbone | Train-Config | Params (M) | mIoU (%) | Download |
|---|---|---|---|---|---|
| config | MobileNetV2 | config | 12.072 | 70.73 | log&ckpt (pwd:2333) |
| config | ResNet-50 | config | 38.457 | 72.54 | log&ckpt (pwd:2333) |
| config | EfficientNet-B3 | config | 18.851 | 72.69 | log&ckpt (pwd:2333) |
| config | Swin-Base | config | 97.701 | 73.79 | log&ckpt (pwd:2333) |
## Citation

If you use LoveNAS in your research, please cite the following papers.
```bibtex
@article{wang2024lovenas,
  title={LoveNAS: Towards multi-scene land-cover mapping via hierarchical searching adaptive network},
  author={Wang, Junjue and Zhong, Yanfei and Ma, Ailong and Zheng, Zhuo and Wan, Yuting and Zhang, Liangpei},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing},
  volume={209},
  pages={265--278},
  year={2024},
  publisher={Elsevier}
}

@inproceedings{wang2021loveda,
  title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
  author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
  booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  editor={J. Vanschoren and S. Yeung},
  volume={1},
  year={2021},
  url={https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4e732ced3463d06de0ca9a15b6153677-Paper-round2.pdf}
}

@article{wang2020rsnet,
  title={RSNet: The search for remote sensing deep neural networks in recognition tasks},
  author={Wang, Junjue and Zhong, Yanfei and Zheng, Zhuo and Ma, Ailong and Zhang, Liangpei},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={59},
  number={3},
  pages={2520--2534},
  year={2020},
  publisher={IEEE}
}
```