RIT-18

High-Resolution Multispectral Dataset for Semantic Segmentation

<p align="center"> <img src="https://www.mathworks.com/help/examples/images_deeplearning/win64/SemanticSegmentationOfMultispectralImagesExample_01.png"> </p>

Description

This repository contains the RIT-18 dataset we built for the semantic segmentation of remote sensing imagery. It was collected with the Tetracam Micro-MCA6 multispectral imaging sensor flown on-board a DJI-1000 octocopter. The main contributions of this dataset are 1) very high-resolution multispectral imagery captured from a drone, 2) six VNIR spectral bands, and 3) 18 object classes (plus background) with a severely unbalanced class distribution. Details about its construction can be found in our paper.

If you use this dataset in a publication, please cite:

@article{kemker2018algorithms,
title = "Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
year = "2018",
issn = "0924-2716",
doi = "https://doi.org/10.1016/j.isprsjprs.2018.04.014",
url = "http://www.sciencedirect.com/science/article/pii/S0924271618301229",
author = "Ronald Kemker and Carl Salvaggio and Christopher Kanan",
}

Data Files

This repository contains the following files:

  1. rit18_data_url: The URL to the current location of the data.
  2. evaluate_rit18.py: The evaluation script used to score the prediction map.
  3. read_rit18.py: This script opens all of the data in the dataset.

The data, once downloaded, is ~3.0 GB (1.58 GB compressed). It is a single .mat file containing a dictionary with the imagery for each fold, the labels for the training and validation folds, and associated metadata.
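The provided read_rit18.py script opens all of this for you. For reference, a minimal loading sketch in Python might look like the following; the file name and the 'train_data' key are assumptions taken from the MATLAB tutorial, and whether SciPy or h5py is needed depends on the MATLAB format the file was saved in, so check read_rit18.py for the authoritative names.

```python
# Minimal loading sketch; NOT the provided read_rit18.py.
import numpy as np

def load_rit18(path):
    """Load the RIT-18 .mat file and return its variables as a plain Python dict."""
    try:
        # Works if the file was saved in a pre-v7.3 MATLAB format.
        from scipy.io import loadmat
        return loadmat(path)
    except NotImplementedError:
        # MATLAB v7.3 files are HDF5 containers and need h5py instead.
        # Note: arrays read this way are transposed relative to MATLAB's layout.
        import h5py
        with h5py.File(path, 'r') as f:
            return {k: np.array(v) for k, v in f.items()
                    if isinstance(v, h5py.Dataset)}

data = load_rit18('rit18_data.mat')      # assumed file name
print(sorted(data.keys()))               # inspect which variables the file actually contains
train_data = np.asarray(data['train_data'])   # assumed key
print(train_data.shape, train_data.dtype)
```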

Instructions

The dataset contains pixel-wise annotations for both the training and validation folds, and both sets of labels can be used to train a classifier. The two folds follow a rough per-class split, but the validation fold does not contain the black and white wooden targets. This is intentional: we want to evaluate a model's ability to perform low-shot learning.

The goal is to have the test labels available on the IEEE GRSS evaluation server. Until then, you can e-mail me your test predictions using the following format:

I will run your predictions through the evaluate_rit18.py script provided here and send you the output file. The area outside of the mask will not be scored, but the background pixels ("class 0") will be. Once the dataset is up on the evaluation server, you will be able to do all of this yourself.
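For orientation only, a masked per-class scoring pass could look like the sketch below. This is illustrative and is not the actual evaluate_rit18.py; the function name, mask convention (1 inside the scene, 0 outside), and the assumption that predictions and labels are integer class maps of the same shape are all mine.

```python
# Illustrative sketch of masked per-class scoring; NOT the provided evaluate_rit18.py.
import numpy as np

def per_class_accuracy(preds, labels, mask, num_classes=19):
    """Per-class accuracy over masked pixels; class 0 (background) is scored."""
    valid = mask.astype(bool)            # ignore everything outside the mask
    accs = []
    for c in range(num_classes):         # 18 object classes + background
        in_class = valid & (labels == c)
        if in_class.sum() == 0:
            accs.append(np.nan)          # class absent from this fold
        else:
            accs.append(np.mean(preds[in_class] == c))
    return np.array(accs)

# Example usage:
# accs = per_class_accuracy(preds, labels, mask)
# print('Mean-class accuracy:', np.nanmean(accs))
```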

MATLAB Tutorial

Our dataset was recently featured in a MATLAB Deep Learning Tutorial called Semantic Segmentation of Multispectral Images Using Deep Learning.

Points of Contact

Also Check Out