Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images
In this repository, we implement the Bilateral Awareness Network (BANet), which contains a dependency path and a texture path to fully capture the long-range relationships and fine-grained details in very fine resolution (VFR) urban scene images.
Detailed results can be found in the paper Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images.
For the training and testing code, please refer to GeoSeg.
The related repositories include:
- MACU-Net -> A modified version of U-Net.
- MAResU-Net -> A ResNet-based network with an attention mechanism.
- Multi-Attention-Network -> A network with a multi-kernel attention mechanism.
If our code is helpful to you, please cite:
Wang, L.; Li, R.; Wang, D.; Duan, C.; Wang, T.; Meng, X. Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images. Remote Sens. 2021, 13, 3065. https://doi.org/10.3390/rs13163065
Requirements:
numpy >= 1.16.5
PyTorch >= 1.3.1
scikit-learn >= 0.20.4
tqdm >= 4.46.1
imageio >= 2.8.0
timm >= 0.4.5
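A quick way to confirm that your installed versions meet these minimums (note that scikit-learn is imported as sklearn):

```python
# Print installed versions to compare against the minimums listed above.
import numpy, torch, sklearn, tqdm, imageio, timm

for pkg in (numpy, torch, sklearn, tqdm, imageio, timm):
    print(pkg.__name__, pkg.__version__)
```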
Network:
Fig. 1. The overall architecture of BANet.
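To make the two-path design concrete, here is a minimal, self-contained PyTorch sketch of the bilateral idea: a Transformer branch that models long-range dependencies and a convolutional branch that preserves fine-grained texture. All module names, dimensions, and the simple concatenation-based fusion are illustrative assumptions for this sketch, not the released BANet implementation; see GeoSeg for the actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyPath(nn.Module):
    """Illustrative Transformer branch for long-range dependencies."""
    def __init__(self, dim=64, depth=2, heads=4):
        super().__init__()
        # Coarse patch embedding: one token per 8x8 patch.
        self.embed = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.embed(x)                       # (B, C, H/8, W/8)
        b, c, h, w = x.shape
        tokens = x.flatten(2).permute(2, 0, 1)  # (HW, B, C), seq-first
        tokens = self.encoder(tokens)
        return tokens.permute(1, 2, 0).reshape(b, c, h, w)

class TexturePath(nn.Module):
    """Illustrative convolutional branch for fine-grained texture."""
    def __init__(self, dim=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.conv = nn.Sequential(block(3, dim), block(dim, dim), block(dim, dim))

    def forward(self, x):
        return self.conv(x)                     # (B, C, H/8, W/8)

class BilateralSketch(nn.Module):
    """Fuse the two paths, then predict per-pixel class logits."""
    def __init__(self, num_classes=8, dim=64):
        super().__init__()
        self.dependency = DependencyPath(dim)
        self.texture = TexturePath(dim)
        self.head = nn.Conv2d(dim * 2, num_classes, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([self.dependency(x), self.texture(x)], dim=1)
        logits = self.head(feats)
        # Upsample back to the input resolution.
        return F.interpolate(logits, size=x.shape[2:], mode='bilinear',
                             align_corners=False)

model = BilateralSketch(num_classes=8)          # 8 UAVid classes
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)                                # torch.Size([1, 8, 256, 256])
```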
Results:
The results on the UAVid dataset can be seen here (user name: AlexWang), and the predictions can be downloaded via this link:
Method | building | tree | clutter | road | vegetation | static car | moving car | human | mIoU |
---|---|---|---|---|---|---|---|---|---|
MSD | 79.8 | 74.5 | 57.0 | 74.0 | 55.9 | 32.1 | 62.9 | 19.7 | 57.0 |
Fast-SCNN | 75.7 | 71.5 | 44.2 | 61.6 | 43.4 | 19.5 | 51.6 | 0.0 | 45.9 |
BiSeNet | 85.7 | 78.3 | 64.7 | 61.1 | 77.3 | 63.4 | 48.6 | 17.5 | 61.5 |
SwiftNet | 85.3 | 78.2 | 64.1 | 61.5 | 76.4 | 62.1 | 51.1 | 15.7 | 61.1 |
ShelfNet | 76.9 | 73.2 | 44.1 | 61.4 | 43.4 | 21.0 | 52.6 | 3.6 | 47.0 |
BANet | 85.4 | 78.9 | 66.6 | 80.7 | 62.1 | 52.8 | 69.3 | 21.0 | 64.6 |
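As a quick sanity check on the table, mIoU is defined as the arithmetic mean of the per-class IoU scores; for example, for the BANet row:

```python
# Per-class IoU (%) for BANet, copied from the table above.
banet_iou = [85.4, 78.9, 66.6, 80.7, 62.1, 52.8, 69.3, 21.0]
miou = sum(banet_iou) / len(banet_iou)
print(round(miou, 1))  # 64.6, matching the mIoU column
```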
Fig. 2. The experimental results on the UAVid validation set. The first column illustrates the input RGB images, the second column depicts the ground reference, and the third column shows the predictions of our BANet.
Fig. 3. The experimental results on the UAVid test set. The first column illustrates the input RGB images, the second column depicts the outputs of MSD, and the third column shows the predictions of our BANet.