# automated-building-detection
Automated Building Detection using Deep Learning: an NLRC/510 tool.
Scope: quickly map a large area to support disaster response operations
Input: very-high-resolution (<= 0.5 m/pixel) RGB satellite images. Currently supported:
- Bing Maps
- Any custom image in raster format
Output: buildings in vector format (geojson), to be used in digital map products
## Credits
Built on top of robosat and robosat.pink.
Development: Ondrej Zacha, Wessel de Jong, Jacopo Margutti
Contact: Jacopo Margutti.
## Structure
- abd_utils: utility functions to download and process satellite images
- abd_model: framework to train and run building detection models on images
- input: input/configuration files needed to run the rest
## Requirements
To download satellite images:
- a Bing Maps Key (added to the .env file, see the example below)
To run the building detection models:
- a GPU with at least 8 GB of VRAM
- NVIDIA GPU drivers and the NVIDIA CUDA Toolkit
If using Docker:
- the NVIDIA Container Toolkit, to expose the GPUs to the Docker container
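Before installing, it can help to verify that the GPU stack is in place. The commands below are a generic sanity check, not part of this repository: they assume the standard NVIDIA utilities are installed, and the Docker check relies on the NVIDIA Container Toolkit injecting nvidia-smi into the container.
```bash
# Sanity check of the GPU stack (generic commands, not specific to this repo)
nvidia-smi       # should list your GPU(s) and the installed driver version
nvcc --version   # should print the version of the installed CUDA Toolkit

# If using Docker: verify that containers can see the GPU
# (nvidia-smi is injected by the NVIDIA Container Toolkit when --gpus is set)
docker run --rm --gpus all ubuntu nvidia-smi
```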
## Getting started
### Using Docker
- Install Docker.
- Download the latest Docker image:
docker pull rodekruis/automated-building-detection
- Create a Docker container and connect it to a local directory (<path-to-your-workspace>):
docker run --name automated-building-detection -dit -v <path-to-your-workspace>:/workdir --ipc=host --gpus all -p 5000:5000 rodekruis/automated-building-detection
- Access the container
docker exec -it automated-building-detection bash
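Once inside the container, a quick check confirms that the GPU and your workspace are visible; this sketch assumes the NVIDIA Container Toolkit is set up as described in the requirements.
```bash
# Inside the container: GPU(s) exposed via --gpus all should be listed
nvidia-smi
# The local workspace should be mounted under /workdir
ls /workdir
```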
### Manual Setup
- Create and activate a new conda environment:
conda create --name abdenv python=3.7
conda activate abdenv
- From the root directory, move to abd_utils and install it:
cd abd_utils
pip install .
- Move to abd_model and install it:
cd ../abd_model
pip install .
N.B. Remember to activate abdenv the next time you use these tools.
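To confirm the installation, you can invoke the command-line tools used in the example below; this is just a quick check, assuming they expose the usual --help flag.
```bash
# Check that the CLI entry points of abd_utils and abd_model are on the PATH
# (assumes the standard --help flag)
download-images --help
images-to-abd --help
abd --help
filter-buildings --help
```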
## End-to-end example
How do you use these tools? As an example, we take a small Dutch town; to predict the buildings in another area, simply change the input AOI (you can create your own using e.g. geojson.io). A consolidated script covering all the steps is sketched at the end of this section.
A detailed explanation of the usage and parameters of the different commands is given in the abd_utils and abd_model subdirectories.
- Add your Bing Maps Key in abd_utils/src/abd_utils/.env (the Docker container has vim pre-installed).
- Download the images of the AOI, divided into tiles:
download-images --aoi input/AOI.geojson --output bing-images
- Convert the images into the format needed to run the building detection model
images-to-abd --images bing-images/images --output abd-input
- Download a pre-trained model (more details below) and add it to the input directory.
- Run the building detection model:
abd predict --config input/config.toml --dataset abd-input --cover abd-input/cover.csv --checkpoint input/neat-fullxview-epoch75.pth --out abd-predictions --metatiles --keep_borders
- Vectorize model output (from pixels to polygons)
abd vectorize --config input/config.toml --type Building --masks abd-predictions --out abd-predictions/buildings.geojson
- Merge touching polygons, remove small artifacts, simplify geometry
filter-buildings --data abd-predictions/buildings.geojson --dest abd-predictions/buildings-clean.geojson
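For convenience, the same walkthrough can be chained into a single script. This is only a sketch reusing the example paths, input/AOI.geojson and the neat-fullxview-epoch75 checkpoint from the steps above; adjust them to your own AOI and model.
```bash
#!/usr/bin/env bash
# End-to-end sketch: download imagery for the AOI, detect buildings,
# vectorize and clean the results. Paths mirror the example above.
set -euo pipefail

# 1. Download Bing Maps tiles covering the AOI
download-images --aoi input/AOI.geojson --output bing-images

# 2. Convert the tiles into the input format expected by the model
images-to-abd --images bing-images/images --output abd-input

# 3. Run the pre-trained building detection model
abd predict --config input/config.toml --dataset abd-input --cover abd-input/cover.csv \
  --checkpoint input/neat-fullxview-epoch75.pth --out abd-predictions --metatiles --keep_borders

# 4. Vectorize the predicted masks into building polygons
abd vectorize --config input/config.toml --type Building --masks abd-predictions \
  --out abd-predictions/buildings.geojson

# 5. Merge touching polygons, remove small artifacts, simplify geometry
filter-buildings --data abd-predictions/buildings.geojson --dest abd-predictions/buildings-clean.geojson
```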
## Model collection
- neat-fullxview-epoch75:
  - architecture: AlbuNet (a U-Net-like encoder-decoder with a ResNet, ResNeXt or WideResNet encoder)
  - training: xBD dataset, 75 epochs
  - performance: IoU 0.79, MCC 0.75