A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery

This repository is the official implementation of A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery by Aatif Jiwani, Shubhrakanti Ganguly, Chao Ding, Nan Zhou, and David Chan.

(Figure: model visualization)

Requirements

  1. To install GDAL/georaster, please follow this doc for instructions (a minimal conda-based sketch also follows this list).
  2. Install the other dependencies from requirements.txt:
pip install -r requirements.txt
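
For step 1, a minimal sketch assuming a conda environment is available; the linked doc remains the authoritative reference.

# Assumes conda is installed; follow the GDAL/georaster doc above if this does not match your setup
conda install -c conda-forge gdal
pip install georaster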

Datasets

Downloading the Datasets

  1. To download the AICrowd dataset, please go here. You will have to either create an account or sign in to access the training and validation set. Please store the training/validation set inside <root>/AICrowd/<train | val> for ease of conversion.
  2. To download the Urban3D dataset, please run:
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/01-Provisional_Train/ <root>/Urban3D/train
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/02-Provisional_Test/ <root>/Urban3D/test
  3. To download the SpaceNet Vegas dataset, please run:
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_2_Vegas.tar.gz <root>/SpaceNet/Vegas/
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_2_Vegas_Test_public.tar.gz <root>/SpaceNet/Vegas/

tar xvf <root>/SpaceNet/Vegas/SN2_buildings_train_AOI_2_Vegas.tar.gz
tar xvf <root>/SpaceNet/Vegas/AOI_2_Vegas_Test_public.tar.gz

Converting the Datasets

Please use our provided dataset converters to process the datasets. For all converters, please look at the individual files for an example of how to use them; a purely hypothetical invocation is also sketched after the list below.

  1. For AICrowd, use datasets/converters/cocoAnnotationToMask.py.
  2. For Urban3D, use datasets/converters/urban3dDataConverter.py.
  3. For SpaceNet, use datasets/converters/spaceNetDataConverter.py.
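
As a purely illustrative sketch, a conversion run might look like the following. The argument shown is an assumption, not the converters' real interface; the example usage inside each file is authoritative.

# Hypothetical invocation for the AICrowd converter; check the file itself for the real arguments
python datasets/converters/cocoAnnotationToMask.py --root <root>/AICrowd/train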

Creating the Boundary Weight Maps

In order to train with the exponentially weighted boundary loss, you will need to create the weight maps as a pre-processing step. Please use datasets/converters/weighted_boundary_processor.py and follow the example usage. The inc parameter exists purely for computational reasons; decrease it if you notice very high memory usage.

Note: these maps are not required for evaluation / testing.
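
As an illustrative sketch only, a preprocessing run might look like the following; the flag names --data-root and --inc are assumptions for illustration, and the example usage inside the file is authoritative.

# Hypothetical flags for illustration only; see the example usage inside the script
python datasets/converters/weighted_boundary_processor.py --data-root <root>/Urban3D/train --inc 1024
# If memory usage spikes, lower the inc value and re-run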

Training and Evaluation

To train or evaluate the DeepLabV3+ models described in the paper, please use train_deeplab.sh or test_deeplab.sh, respectively. These scripts employ the following primary command-line arguments (an example command is also sketched after the table):

| Parameter | Default | Description (value used for the final model in parentheses) |
| --- | --- | --- |
| --backbone | resnet | The DeepLabV3+ backbone (final model uses drn_c42) |
| --out-stride | 16 | The backbone compression factor (8) |
| --dataset | urban3d | The dataset to train / evaluate on (other choices: spaceNet, crowdAI, combined) |
| --data-root | /data/ | Replace this with the root folder of the dataset samples |
| --workers | 2 | Number of workers for dataset retrieval |
| --loss-type | ce_dice | Type of objective function. Use wce_dice for the exponentially weighted boundary loss |
| --fbeta | 1 | The beta value to use with the F-Beta Measure (0.5) |
| --dropout | 0.1 0.5 | Dropout values to use in the DeepLabV3+ (0.3 0.5) |
| --epochs | None | Number of epochs to train (60 for train, 1 for test) |
| --batch-size | None | Batch size (3/4) |
| --test-batch-size | None | Testing batch size (1/4) |
| --lr | 1e-4 | Learning rate (1e-3) |
| --weight-decay | 5e-4 | L2 regularization constant (1e-4) |
| --gpu-ids | 0 | GPU IDs (use --no-cuda for CPU only) |
| --checkname | None | Experiment name |
| --use-wandb | False | Track the experiment using WandB |
| --resume | None | Experiment name to load weights from (i.e. urban for weights/urban/checkpoint.pth.tar) |
| --evalulate | False | Enable this flag for testing |
| --best-miou | False | Enable this flag to get the best results when testing |
| --incl-bounds | False | Enable this flag when training with wce_dice as the loss |
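
As a concrete illustration of how these flags combine for the final Urban3D configuration, a command along the following lines could be used. The run_deeplab.py entry point is an assumed name (and urban3d_final an arbitrary experiment name); in practice, edit train_deeplab.sh, which wraps the actual training script.

# Sketch of the final configuration from the table above (assumed entry-point name)
python run_deeplab.py --backbone=drn_c42 --out-stride=8 --dataset=urban3d \
    --data-root=<root>/Urban3D/ --loss-type=wce_dice --incl-bounds --fbeta=0.5 \
    --dropout 0.3 0.5 --epochs=60 --batch-size=4 --test-batch-size=4 \
    --lr=1e-3 --weight-decay=1e-4 --gpu-ids=0 --checkname=urban3d_final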

To train with the cross-task training strategy (see the sketch after this list), you need to:

  1. Train a model using --dataset=combined until the best loss has been achieved
  2. Train a model using --resume=<checkname> on one of the three primary datasets until the best mIoU is achieved
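
Sketched with the same assumed entry point as above (hypothetical; the remaining flags follow the table in the previous section):

# Stage 1: pre-train on the combined dataset until the best loss is reached
python run_deeplab.py --dataset=combined --checkname=combined_pretrain <other flags as above>
# Stage 2: fine-tune on one primary dataset, resuming from the stage-1 checkpoint
python run_deeplab.py --dataset=urban3d --resume=combined_pretrain <other flags as above>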

Pre-Trained Weights

We provide pre-trained model weights in the weights/ directory. Please use Git LFS to download these weights. These weights correspond to our best model on all three datasets.
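
If the weight files show up as small pointer files after cloning, the standard Git LFS commands will fetch the actual binaries:

git lfs install
git lfs pull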

Results

Our final model is a DeepLabV3+ module with a Dilated ResNet C42 backbone, trained using the F-Beta Measure + Exponentially Weighted Cross Entropy Loss (Beta = 0.5). We employ the cross-task training strategy only for Urban3D and SpaceNet.

Our model achieves the following:

| Dataset | Avg. Precision | Avg. Recall | F1 Score | mIoU |
| --- | --- | --- | --- | --- |
| Urban3D | 83.8% | 82.2% | 82.4% | 83.3% |
| SpaceNet | 91.4% | 91.8% | 91.6% | 90.2% |
| AICrowd | 96.2% | 96.3% | 96.3% | 95.4% |

Acknowledgements

We would like to thank jfzhang95 for his DeepLabV3+ model and training template. You can access this repository here.