FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning

Description

In recent decades, wildfires, as widespread and extremely destructive natural disasters, have caused tremendous property losses and fatalities, as well as extensive damage to forest ecosystems. Many fire risk assessment projects have been proposed to prevent wildfires, but GIS-based methods frequently suffer from inflexible data features and limited generalizability. Inspired by the abundance of publicly available remote sensing projects and the burgeoning development of deep learning in computer vision, our research focuses on assessing fire risk using remote sensing imagery.

In this work, we propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91,872 labelled images for fire risk assessment. This remote sensing dataset is labelled with the fire risk classes supplied by the Wildfire Hazard Potential (WHP) raster dataset, and remote sensing images are collected using the National Agriculture Imagery Program (NAIP), a high-resolution remote sensing imagery program.

FireRisk overview image

This figure shows sample images for all 7 labels in our FireRisk dataset. Each image measures 270 × 270 pixels, and the dataset contains 91,872 images in total.

Contributions

  1. We propose FireRisk, a remote sensing dataset for fire risk assessment, and offer a novel method for constructing a mapping between 7 fire risk classes and remote sensing images.
  2. To investigate the performance of supervised and self-supervised learning on our FireRisk, we employ ResNet, ViT, DINO, and MAE as benchmark models. With the use of transfer learning, we obtain the results of these models pre-trained on ImageNet and then fine-tuned on our FireRisk.
  3. By sampling 20% and 50% of the training data from the original FireRisk, we illustrate the efficiency of data labelling in FireRisk as well as the sensitivity of the various benchmark models to the amount of labelled data.
  4. We gather an unlabelled dataset, UnlabelledNAIP, from the NAIP remote sensing project and utilize it to pre-train novel latent representations of DINO and MAE. The results of fine-tuning on FireRisk using these two representations demonstrate the potential of different self-supervised benchmarks for enhancement in fire risk assessment.

Experiments

| Dataset | Model | Pre-trained on | Accuracy | F1-score | Precision | Recall |
| --- | --- | --- | --- | --- | --- | --- |
| FireRisk | ResNet-50 | ImageNet1k | 63.20 | 52.56 | 52.75 | 53.41 |
| FireRisk | ViT-B/16 | ImageNet1k | 63.31 | 52.18 | 53.91 | 51.15 |
| FireRisk | DINO | ImageNet1k | 63.36 | 52.60 | 54.95 | 51.27 |
| FireRisk | DINO | UnlabelledNAIP | 63.44 | 52.37 | 53.79 | 51.75 |
| FireRisk | MAE | ImageNet1k | 65.29 | 55.49 | 56.42 | 55.36 |
| FireRisk | MAE | UnlabelledNAIP | 63.54 | 52.04 | 54.09 | 51.78 |
| 50% FireRisk | ResNet-50 | ImageNet1k | 62.09 | 50.27 | 51.07 | 50.41 |
| 50% FireRisk | ViT-B/16 | ImageNet1k | 62.22 | 50.07 | 52.20 | 50.15 |
| 50% FireRisk | DINO | ImageNet1k | 61.75 | 51.21 | 51.35 | 51.63 |
| 50% FireRisk | DINO | UnlabelledNAIP | 62.49 | 51.35 | 52.08 | 51.48 |
| 50% FireRisk | MAE | ImageNet1k | 63.70 | 50.23 | 52.85 | 51.94 |
| 50% FireRisk | MAE | UnlabelledNAIP | 62.68 | 52.05 | 52.63 | 51.59 |
| 20% FireRisk | ResNet-50 | ImageNet1k | 61.37 | 49.53 | 50.28 | 50.12 |
| 20% FireRisk | ViT-B/16 | ImageNet1k | 61.43 | 48.80 | 50.89 | 48.53 |
| 20% FireRisk | DINO | ImageNet1k | 60.95 | 50.72 | 50.99 | 51.28 |
| 20% FireRisk | DINO | UnlabelledNAIP | 61.96 | 50.83 | 53.03 | 50.62 |
| 20% FireRisk | MAE | ImageNet1k | 62.51 | 51.13 | 52.46 | 50.87 |
| 20% FireRisk | MAE | UnlabelledNAIP | 61.80 | 50.07 | 51.69 | 49.11 |

From the table we can draw the following conclusions:

  1. The best supervised benchmark reaches an accuracy of 63.31%, while among the self-supervised benchmarks, the MAE pre-trained on ImageNet1k achieves the best accuracy of all models at 65.29%, making it the optimal checkpoint.

  2. Our self-supervised learning benchmarks outperform supervised learning on FireRisk, although their advantage shrinks as the amount of training data decreases.

  3. Our newly pre-trained latent representations yield a considerable improvement for DINO, which reaches 63.44% accuracy compared to 63.36% for the DINO pre-trained on ImageNet1k.
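The F1-score, precision, and recall reported above are macro-averaged over the 7 classes. As a reminder of what that averaging means, here is a minimal sketch; the per-class counts are made up purely for illustration:

```python
def macro_f1(per_class_counts):
    """Macro F1 from per-class (true positive, false positive, false negative) counts."""
    f1s = []
    for tp, fp, fn in per_class_counts:
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1s.append(2 * precision * recall / denom if denom else 0.0)
    # Unweighted mean over classes: every class counts equally,
    # regardless of how many samples it has.
    return sum(f1s) / len(f1s)

# Hypothetical counts for 3 classes, just to show the computation.
counts = [(50, 10, 10), (30, 20, 15), (40, 5, 25)]
print(round(macro_f1(counts), 4))  # 0.7307
```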

Download

Paper

FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning

Dataset

FireRisk

Image naming in our FireRisk: `(pointid)_(grid_code)_(x_coord)_(y_coord).png`

| Name | Data Type | Meaning |
| --- | --- | --- |
| FID | integer | ID of the data point in the file |
| pointid | integer | unique ID of the data point in the WHP dataset |
| grid_code | integer (from 1 to 7) | code for the fire risk level |
| class_desc | string (seven values) | description of the fire risk level corresponding to the grid_code: 1: Very Low, 2: Low, 3: Moderate, 4: High, 5: Very High, 6: Non-burnable, 7: Water |
| x_coord | number | longitude coordinate of the grid centroid |
| y_coord | number | latitude coordinate of the grid centroid |
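Given the naming scheme above, the metadata fields can be recovered directly from a filename. A small sketch (the example filename is made up, and the grid_code-to-class_desc mapping follows the table above):

```python
# grid_code -> class_desc mapping, as listed in the metadata table.
CLASS_DESC = {1: "Very Low", 2: "Low", 3: "Moderate", 4: "High",
              5: "Very High", 6: "Non-burnable", 7: "Water"}

def parse_firerisk_name(filename):
    """Split a FireRisk filename into pointid, grid_code, and centroid coordinates."""
    stem = filename.rsplit(".", 1)[0]  # drop the .png extension
    pointid, grid_code, x_coord, y_coord = stem.split("_")
    return {
        "pointid": int(pointid),
        "grid_code": int(grid_code),
        "class_desc": CLASS_DESC[int(grid_code)],
        "x_coord": float(x_coord),  # longitude of the grid centroid
        "y_coord": float(y_coord),  # latitude of the grid centroid
    }

# Hypothetical filename, purely for illustration.
print(parse_firerisk_name("12345_4_-120.53_38.21.png"))
```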
Pre-trained Checkpoints
<table><tbody> <!-- START TABLE --> <!-- TABLE HEADER --> <th valign="bottom"></th> <th valign="bottom">DINO</th> <th valign="bottom">MAE</th> <!-- TABLE BODY --> <tr><td align="left">pre-trained checkpoint</td> <td align="center"><a href="https://drive.google.com/file/d/1iuaBpPZ3p_6dNplO60rzkJVkz2xfcmcH/view?usp=sharing">download</a></td> <td align="center"><a href="https://drive.google.com/file/d/1p73kNHSya9mnCXsp_DP8ZVU7JcfB9Kpi/view?usp=sharing">download</a></td> </tr> </tbody></table>

Citation

If you have used our FireRisk dataset, please cite the following paper: https://arxiv.org/abs/2303.07035

@misc{shen2023firerisk,
      title={FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning}, 
      author={Shuchang Shen and Sachith Seneviratne and Xinye Wanyan and Michael Kirley},
      year={2023},
      eprint={2303.07035},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.