Introduction

Bing Maps is releasing open building footprints around the world. We have detected 1.4B buildings from Bing Maps imagery collected between 2014 and 2024, including Maxar, Airbus, and IGN France imagery. The data is freely available for download and use under the ODbL. This dataset also includes data from our other releases.

Updates

[Figure: sample footprints]

Regions included

[Figure: map of building regions]

You can download the layer above as GeoJSON here.

Buildings with height coverage

[Figure: map of building height coverage]

You can download the layer above as GeoJSON here.

License

This data is licensed by Microsoft under the Open Data Commons Open Database License (ODbL).

FAQ

What does the data include?

999M building footprint polygon geometries located around the world, in line-delimited GeoJSON format. Due to the way we process the data, the files have a .csv.gz extension; see make-gis-friendly.py for an example of how to decompress the files and change the extension.
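The decompress-and-rename step can be sketched with the standard library (this is an illustrative sketch of the idea behind make-gis-friendly.py; the helper name and paths here are hypothetical):

```python
import gzip
import shutil

def make_gis_friendly(src: str, dst: str) -> str:
    """Decompress a .csv.gz footprint file and save it with a
    .geojson extension so GIS tools recognize the contents.
    The content is already line-delimited GeoJSON; only the
    compression and the file extension change."""
    with gzip.open(src, "rb") as f_in, open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    return dst
```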

As of October 2022, the location table has moved to dataset-links.csv, since it now contains over 19k records partitioned by country and quadkey.

What is the GeoJSON format?

GeoJSON is a format for encoding a variety of geographic data structures. For extensive documentation and tutorials, refer to this blog.
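In this release, each line of a file is one complete GeoJSON Feature. A minimal parsing example (the coordinates and property names below are illustrative, not taken from the actual dataset):

```python
import json

# One record from a line-delimited GeoJSON file: a Feature with a
# Polygon geometry and per-building properties.
line = ('{"type": "Feature", "geometry": {"type": "Polygon", '
        '"coordinates": [[[-122.13, 47.64], [-122.13, 47.65], '
        '[-122.12, 47.65], [-122.12, 47.64], [-122.13, 47.64]]]}, '
        '"properties": {"height": 9.2, "confidence": 0.91}}')

feature = json.loads(line)
ring = feature["geometry"]["coordinates"][0]  # exterior ring of lon/lat pairs
```

Note that a polygon's exterior ring repeats its first coordinate as its last, so a rectangle has five coordinate pairs.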

Why is the data being released?

Microsoft has a continued interest in supporting a thriving OpenStreetMap ecosystem.

Should we import the data into OpenStreetMap?

Maybe. Never overwrite the hard work of other contributors or blindly import data into OSM without first checking the local quality. While our metrics show that this data meets or exceeds the quality of hand-drawn building footprints, the data does vary in quality from place to place, between rural and urban, mountains and plains, and so on. Inspect quality locally and discuss an import plan with the community. Always follow the OSM import community guidelines.

Will the data be used or made available in the larger OpenStreetMap ecosystem?

Yes. The HOT Tasking Manager has integrated Facebook Rapid, where the data has been made available.

How did we create the data?

The building extraction is done in two stages:

  1. Semantic Segmentation – Recognizing building pixels on an aerial image using deep neural networks (DNNs)
  2. Polygonization – Converting building pixel detections into polygons
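The two stages can be illustrated on a toy probability grid (the real pipeline runs DNNs on aerial imagery and traces actual building contours; this dependency-free sketch only conveys the shape of the computation):

```python
# Toy per-pixel building probabilities, e.g. DNN output for a 4x4 patch.
probs = [
    [0.1, 0.2, 0.1, 0.1],
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]

# Stage 1: semantic segmentation -- classify each pixel (> 0.5 = building).
mask = [[p > 0.5 for p in row] for row in probs]

# Stage 2: polygonization -- here just the bounding box of building
# pixels, as a stand-in for the real contour-to-polygon step.
rows = [r for r, row in enumerate(mask) if any(row)]
cols = [c for c, col in enumerate(zip(*mask)) if any(col)]
bbox = (min(rows), min(cols), max(rows), max(cols))
```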

Stage 1: Semantic Segmentation

[Figure: semantic segmentation diagram]

Stage 2: Polygonization

[Figure: polygonization diagram]

How do we estimate building height?

We trained a neural network to estimate height above ground using imagery paired with height measurements, and then take the average height within each building polygon. Structures without a height estimate are assigned -1. Height estimates are in meters.
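When aggregating heights, the -1 placeholder should be treated as "no estimate" rather than a real value. A small sketch (the property name "height" and the values are illustrative):

```python
# Treat -1 as "no height estimate" when aggregating building heights.
features = [{"height": 9.2}, {"height": -1}, {"height": 4.5}]

valid = [f["height"] for f in features if f["height"] >= 0]
mean_height_m = sum(valid) / len(valid)  # heights are in meters
```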

Building confidence scores

Confidence scores range from 0 to 1 and can be read as percent confidence: a value of 0.8 means "80% confidence," and higher values mean higher detection confidence. For structures released before this update, we use -1 as a placeholder value. There are two stages in the building detection process: first a model classifies each pixel as building or not, then groups of pixels are converted into polygons. Each pixel has a probability of being a building, and a probability > 0.5 is classified as a "building pixel." When we generate a polygon, we average the probability values of the pixels within it to give an overall confidence score. Confidence scores apply to the footprint, not the height estimate.
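The averaging step is simple to state in code (the probability values below are illustrative):

```python
# Sketch: a polygon's confidence is the mean of the per-pixel building
# probabilities of the pixels inside the detected footprint.
pixel_probs = [0.9, 0.8, 0.7, 0.9, 0.95]  # pixels inside one polygon

confidence = sum(pixel_probs) / len(pixel_probs)
# Every contributing pixel was classified "building" (prob > 0.5),
# so the averaged footprint confidence is also above 0.5.
```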

Were there any modeling improvements used for this release?

We did not apply any modeling improvements for this release. Instead, we focused on scaling our approach to increase coverage, and trained models regionally.

Evaluation set metrics

The evaluation metrics are computed on a set of building polygon labels for each region. Note that we only have verification results for Mexico buildings, since we did not train a model for that country.

Building match metrics on the evaluation set:

| Region | Precision | Recall |
| --- | --- | --- |
| Africa | 94.4% | 70.9% |
| Caribbean | 92.2% | 76.8% |
| Central Asia | 97.17% | 79.47% |
| Europe | 94.3% | 85.9% |
| Middle East | 95.7% | 85.4% |
| South America | 95.4% | 78.0% |
| South Asia | 94.8% | 76.7% |

We track the following metrics to measure the quality of matched building polygons in the evaluation set:

  1. Intersection over Union – This is a standard metric measuring the overlap quality against the labels
  2. Dominant angle rotation error – This measures the polygon rotation deviation
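IoU can be illustrated with axis-aligned boxes (a dependency-free sketch; the actual evaluation computes IoU on full polygon geometries):

```python
def box_iou(a, b):
    """Intersection over Union for axis-aligned boxes given as
    (minx, miny, maxx, maxy) tuples."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means a perfect match; disjoint shapes score 0.0.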

| Region | IoU | Rotation error [deg] |
| --- | --- | --- |
| Africa | 64.5% | 5.67 |
| Caribbean | 64.0% | 6.64 |
| Central Asia | 68.2% | 6.91 |
| Europe | 65.1% | 10.28 |
| Middle East | 65.1% | 9.3 |
| South America | 66.7% | 6.34 |
| South Asia | 63.1% | 6.25 |

False positive ratio in the corpus

False positives are estimated per country from randomly sampled building polygon predictions.

| Region | Buildings Sampled | False Positive Rate | Run Date |
| --- | --- | --- | --- |
| Africa | 5,000 | 1.1% | Early 2022 |
| Caribbean | 3,000 | 1.8% | Early 2022 |
| Central Asia | 3,000 | 2.2% | Early 2022 |
| Europe | 5,000 | 1.4% | Early 2022 |
| Mexico | 2,000 | 0.1% | Early 2022 |
| Middle East | 7,000 | 1.8% | Early 2022 |
| South America | 5,000 | 1.7% | Early 2022 |
| South Asia | 7,000 | 1.4% | Early 2022 |
| North America | 4,000 | 1% | Oct 2022 |
| Europe Maxar | 5,000 | 1.4% | July 2022 |

What is the vintage of this data?

The vintage of the extracted building footprints depends on the vintage of the underlying imagery. The underlying imagery is from Bing Maps, including Maxar and Airbus, captured between 2014 and 2021.

How good is the data?

Our metrics show that in the vast majority of cases the quality is at least as good as hand-digitized buildings in OpenStreetMap. It is not perfect, particularly in dense urban areas, but it provides good recall in rural areas.

What is the coordinate reference system?

EPSG:4326 (WGS 84)

Will there be more data coming for other geographies?

Maybe. This is a work in progress. Also, check out our other building releases!

Why are some locations missing?

We excluded imagery from processing if tiles were dated before 2014 or had a low probability of containing detections. Detection probability is loosely defined here as proximity to roads and population centers. This filtering and tile exclusion results in square gaps of missing data.

How can I read large files?

Some files are very large, but they are stored in a line-delimited format, so you can use parallel processing tools (e.g., Spark, Dask) or a memory-efficient script to split them into smaller pieces. See scripts/read-large-files.py for a Python example.
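A minimal memory-efficient reader using only the standard library might look like this (a sketch of the streaming idea, not the contents of scripts/read-large-files.py; the function name is illustrative):

```python
import gzip
import json

def iter_features(path):
    """Stream features one at a time from a line-delimited GeoJSON
    .csv.gz file without loading the whole file into memory."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Because the generator yields one feature per line, downstream code can filter or re-chunk arbitrarily large files in constant memory.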

Need roads?

Check out our ML Road Detections project page!


Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Legal Notices

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found here.

Privacy information can be found here.

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.