
This repository contains the MUUFL Gulfport data set along with scoring and utility code, target detection algorithms, and a short demonstration script.


Note: If this data is used in any publication or presentation, the following reference must be cited:

P. Gader, A. Zare, R. Close, J. Aitken, G. Tuell, “MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set,” University of Florida, Gainesville, FL, Tech. Rep. REP-2013-570, Oct. 2013.

If the scene labels are used in any publication or presentation, the following reference must be cited:

X. Du and A. Zare, “Technical Report: Scene Label Ground Truth Map for MUUFL Gulfport Data Set,” University of Florida, Gainesville, FL, Tech. Rep. 20170417, Apr. 2017. Available: http://ufdc.ufl.edu/IR00009711/00001.

If any of this scoring or detection code is used in any publication or presentation, the following reference must be cited:

T. Glenn, A. Zare, P. Gader, D. Dranishnikov. (2016). Bullwinkle: Scoring Code for Sub-pixel Targets (Version 1.0) [Software]. Available from https://github.com/GatorSense/MUUFLGulfport/.


This directory includes the data files for the MUUFL Gulfport Campus images, scoring and utility code, target detection algorithms, and a short demonstration script.

————- Included files:

————- About the Bullwinkle scoring:

Bullwinkle is a (mostly) blobless, per-pixel scoring routine. It has many features, but the salient points are these. Due to uncertainties in image registration and ground truth, not all target pixels can be explicitly labeled. Bullwinkle manages this uncertainty by counting the maximum value within a halo of the truth location as the target's confidence. All pixels outside of the target halos are counted as individual false-alarm opportunities. Target halos extend from the edge of the target, so the target regions for the larger targets are larger overall.
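The halo idea above can be sketched in a few lines. This is an illustrative Python sketch, not the actual Bullwinkle code; the function name, the `halo_radius` parameter, and the square halo shape are all assumptions for demonstration.

```python
import numpy as np

def halo_score(conf_map, truth_rc, halo_radius=2):
    """Sketch of halo-based scoring (illustrative, not the real Bullwinkle).

    conf_map    : 2-D array of per-pixel detector confidences
    truth_rc    : list of (row, col) ground-truth target centers
    halo_radius : halo half-width in pixels (hypothetical parameter)
    Returns (target_confidences, false_alarm_confidences).
    """
    rows, cols = conf_map.shape
    in_halo = np.zeros_like(conf_map, dtype=bool)
    target_confs = []
    for r, c in truth_rc:
        r0, r1 = max(0, r - halo_radius), min(rows, r + halo_radius + 1)
        c0, c1 = max(0, c - halo_radius), min(cols, c + halo_radius + 1)
        in_halo[r0:r1, c0:c1] = True
        # The maximum confidence inside the halo counts as the target's confidence.
        target_confs.append(conf_map[r0:r1, c0:c1].max())
    # Every pixel outside all halos is an individual false-alarm opportunity.
    fa_confs = conf_map[~in_halo]
    return target_confs, fa_confs
```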

————- About demo.m:

This demo adds the needed directories to the MATLAB path and then demonstrates how to use some of the target detection algorithms and scoring utilities.

It first runs two detection algorithms that look only for the pea green targets. Their outputs are then scored against only the 3 m pea green targets, simply to demonstrate the target filtering steps.

Next, multi-target versions of ACE and the Spectral Matched Filter are run to find all of the targets. ROC curves are then plotted for multiple algorithms simultaneously, and the demo shows a few options for the ROC plotter.
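The ROC curves mentioned above pair a detection rate with a false-alarm count as the threshold sweeps over the scored confidences. The Python sketch below shows the basic computation under that assumption; it is not the repo's ROC plotter, and the function name is hypothetical.

```python
import numpy as np

def roc_points(target_confs, fa_confs):
    """Probability of detection vs. false-alarm count as the detection
    threshold sweeps over all observed confidences (illustrative sketch)."""
    t = np.asarray(target_confs, dtype=float)
    f = np.asarray(fa_confs, dtype=float)
    # Sweep thresholds from highest to lowest observed confidence.
    thresholds = np.unique(np.concatenate([t, f]))[::-1]
    pd = np.array([(t >= th).mean() for th in thresholds])   # fraction of targets detected
    nfa = np.array([(f >= th).sum() for th in thresholds])   # false alarms at each threshold
    return pd, nfa, thresholds
```

Plotting `pd` against `nfa` (or against a false-alarm rate per unit area) gives the familiar ROC curve, and overlaying several detectors' curves on one axis is what the demo's plotter does for multiple algorithms at once.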

————- About the target ground truth:

The .mat and .csv files contain the target ground truth locations and information about the emplaced targets.

Here is an overview of the fields:

Targets_UTMx: UTM Easting

Targets_UTMy: UTM Northing

Targets_Lat: degrees latitude

Targets_Lon: degrees longitude

Targets_ID: numerical identifier for the target (1 to 64)

Targets_Type: cloth color, one of {faux vineyard green, pea green, dark green, brown, vineyard green} for the regular targets, or one of {red, black, blue, green} for the large calibration cloths

Targets_Elevated: one of {0,1} indicating if the target was on an elevated platform

Targets_Size: one of {0.5, 1, 3, 6} indicating target size. The 0.5, 1, and 3 sizes are square targets of that dimension (i.e., 3 m by 3 m); the size-6 entries are the large 6 m by 10 m calibration cloths in the center of the campus

Targets_HumanConf: one of {1, 2, 3, 4} indicating target visibility, i.e., whether the human truthers felt they could identify the target in the data: 1 = visible, 2 = probably visible, 3 = possibly visible, 4 = not visible

Targets_HumanCat: one of {0, 1, 2} indicating occlusion category: 0 = unoccluded, 1 = partly or fully in shadow but no tree occlusion, 2 = partial or full occlusion by a tree

id: string indicating revision number, date, and author of the truth file
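As a small worked example of using these fields, the sketch below filters the ground truth by cloth color and size, the same kind of selection the demo performs before scoring only the 3 m pea green targets. The column names are taken from the field descriptions above; the inline sample rows are hypothetical, and the real .csv may differ in layout.

```python
import csv
from io import StringIO

# Hypothetical rows mimicking the ground-truth CSV (column names from
# the field list above; values are made up for illustration).
sample = """Targets_ID,Targets_Type,Targets_Size,Targets_Elevated
1,pea green,3,0
2,dark green,1,0
3,pea green,0.5,1
"""

def filter_targets(csv_text, color, size):
    """Return the IDs of targets matching a cloth color and size."""
    rows = csv.DictReader(StringIO(csv_text))
    return [r["Targets_ID"] for r in rows
            if r["Targets_Type"] == color and float(r["Targets_Size"]) == size]
```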

————-

There are many more utility functions and detection routines included than are used in the demo, so feel free to nose around and read the source. - Taylor Glenn, 10/25/2013