
Efficient Depth Fusion Transformer for Aerial Image Semantic Segmentation

Abstract

Taking depth into consideration has been proven to improve the performance of semantic segmentation by providing additional geometric information. Most existing works adopt a two-stream network, extracting features from color images and depth images separately using two branches of the same structure, which suffers from high memory and computation costs. We find that depth features acquired by simple downsampling can also play a complementary role in the semantic segmentation task, sometimes even outperforming the two-stream scheme with two identical branches. In this paper, a novel and efficient depth fusion transformer network for aerial image segmentation is proposed. The presented network uses patch merging to downsample the depth input, and a depth-aware self-attention (DSA) module is designed to mitigate the gap caused by the difference between the two branches and the two modalities. Concretely, the DSA module fuses depth and color features by computing depth similarity and injecting its impact into the self-attention map calculated from the color features. Extensive experiments on the ISPRS 2D semantic segmentation datasets validate the efficiency and effectiveness of our method. With nearly half the parameters of the traditional two-stream scheme, our method achieves 83.82% mIoU on the Vaihingen dataset, outperforming other state-of-the-art methods, and 87.43% mIoU on the Potsdam dataset, comparable to the state of the art.

<div align="center"> <img src="resources/EDFT.png"/> </div>
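To make the fusion idea concrete, here is a minimal NumPy sketch of a depth-aware self-attention step as described in the abstract: a depth-similarity term biases the attention map computed from color features. This is an illustrative simplification (single head, no learned projections, hypothetical function names), not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depth_aware_self_attention(color_tokens, depth_tokens, d_head):
    """Sketch: fuse depth into color self-attention.

    color_tokens, depth_tokens: (n_tokens, d_head) arrays.
    In the real model, queries/keys/values come from learned projections;
    here we use the raw tokens to keep the sketch self-contained.
    """
    q = k = v = color_tokens
    # Standard scaled dot-product logits from the color branch.
    attn_logits = q @ k.T / np.sqrt(d_head)
    # Depth similarity: tokens at similar depths reinforce each other.
    depth_sim = depth_tokens @ depth_tokens.T / np.sqrt(d_head)
    # Fuse the depth impact into the color attention map before softmax.
    attn = softmax(attn_logits + depth_sim)
    return attn @ v
```

The key design point is that the depth branch only contributes a similarity bias to the attention map, so it can be fed by cheap downsampling (patch merging) instead of a full second encoder.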

The paper can be downloaded here.

Installation

Please refer to get_started.md for installation instructions.

Data

The two ISPRS contest datasets have been preprocessed into RGB-D images and organized as custom datasets for mmsegmentation. Please download them from AI Studio: Vaihingen, Potsdam

Results

| Dataset | Backbone | Crop Size | Lr schd | mIoU | mIoU (ms+flip) | config | download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Vaihingen | Segformer-B0 | 256x256 | 80000 | 80.49 | 81.63 | config | model |
| Vaihingen | Segformer-B1 | 256x256 | 80000 | 81.28 | 82.13 | config | model |
| Vaihingen | Segformer-B2 | 256x256 | 80000 | 82.17 | 82.88 | config | model |
| Vaihingen | Segformer-B3 | 256x256 | 80000 | 82.27 | 83.04 | config | model |
| Vaihingen | Segformer-B4 | 256x256 | 80000 | 83.02 | 83.82 | config | model |
| Vaihingen | Segformer-B5 | 256x256 | 80000 | 82.48 | 83.23 | config | model |
| Potsdam | Segformer-B4 | 512x512 | 80000 | 87.22 | 87.40 | config | model |

password for BaiduNetdisk: dshs

Testing

```shell
# Single-gpu testing
python tools/test.py configs/edft/segformer_mit_fuse-b0_256x256_80k_vai.py mit_fuse_b0.pth --eval mIoU
```

Training

```shell
# Single-gpu training
python tools/train.py configs/edft/segformer_mit_fuse-b0_256x256_80k_vai.py
```

Citation

```bibtex
@Article{rs14051294,
	AUTHOR = {Yan, Li and Huang, Jianming and Xie, Hong and Wei, Pengcheng and Gao, Zhao},
	TITLE = {Efficient Depth Fusion Transformer for Aerial Image Semantic Segmentation},
	JOURNAL = {Remote Sensing},
	VOLUME = {14},
	YEAR = {2022},
	NUMBER = {5},
	ARTICLE-NUMBER = {1294},
	URL = {https://www.mdpi.com/2072-4292/14/5/1294},
	ISSN = {2072-4292},
	DOI = {10.3390/rs14051294}
}
```