MangroveAI

MangroveAI is a deep learning-based approach for mangrove monitoring and conservation using satellite imagery. This repository contains the code and data for the paper "A Deep Learning-Based Approach for Mangrove Monitoring", accepted for presentation at the European Conference on Machine Learning (ECML), specifically at the Machine Learning for Earth Observation Workshop. This work aims to enhance mangrove segmentation accuracy by leveraging advanced deep learning models, including convolutional, transformer, and Mamba architectures.

For Reviewers

Here we release the training code for the models for peer review. For the dataset, we provide only a few samples here because of storage constraints related to its size. The remaining samples will be made publicly available (open source) after peer review, on platforms suited to hosting data of this size.

Overview

Mangroves are vital coastal ecosystems that play a crucial role in environmental health, economic stability, and climate resilience. This work focuses on developing and evaluating state-of-the-art deep learning models for accurate mangrove segmentation from multispectral satellite imagery. The key contributions of this work, the MagSet-2 dataset and a benchmark of segmentation models on it, are described in the sections below.

Dataset MagSet-2

MagSet-2 is an open-source dataset we developed specifically for this work. It integrates mangrove annotations from the Global Mangrove Watch with multispectral Sentinel-2 satellite images from the year 2020, resulting in more than 10,000 pairs of images and mangrove annotations. The dataset encompasses images from various geographic zones, ensuring a diverse representation of mangrove ecosystems worldwide. This extensive dataset is intended to help researchers train models that use Sentinel-2 imagery to monitor protected mangrove areas in years beyond 2020.

<p align="center"> <img src="images/data_locations.png" alt="Sample Locations on World Map " width="600px"/> </p> <div align="center"> Mangrove Position (neon blue) and the different Mangrove Zones (green) Dataset based on the Global Mangrove Watch (GMW) v3.2020. Each Neon Blue Point represents a position of a sample from the MagSet-2 dataset. </div> <br>

Our dataset includes various bands from the electromagnetic spectrum, obtained from Sentinel-2 imagery. It features the RGB (Red, Green, Blue) bands, the Near-Infrared (NIR) band, the Vegetation NIR band, and the Short-Wave Infrared (SWIR) band. Additionally, it includes estimated vegetation indices (NDVI, NDWI, and NDMI) alongside the targeted mangrove locations. These diverse spectral bands and indices support predictive modeling, enabling precise and detailed mangrove segmentation for effective monitoring and conservation:

<p align="center"> <img src="images/sample_from_magset2.png" alt="MagSet-2 Dataset" width="700px"/> </p> <div align="center"> Sentinel-2 Spectral Display and Vegetation Analysis: Starting from the top left with the RGB bands, followed by the NIR band, Vegetation NIR, and SWIR band in sequence. On the bottom row, from left to right, we have the estimated NDVI, NDWI, NDMI indices, and the targeted Mangrove locations for predictive modeling. </div> <br>

Additional perspectives from the dataset are presented, showcasing a diverse array of views from various regions globally. These perspectives highlight the extensive geographical coverage and varied contexts of the dataset, offering a comprehensive representation of mangrove ecosystems across different continents and climatic zones. This diversity underscores the dataset's global relevance and the importance of addressing the unique environmental characteristics present in each region:

<p align="center"> <img src="images/mangrove_masks.png" alt="MagSet-2 Dataset (Other views)" width="500px"/> </p> <div align="center"> Samples from the MagSet-2 dataset are presented. On the right, the RGB bands are displayed, while on the left, the RGB bands along with the mangrove mask (highlighted in yellow) are shown. The other spectral bands for each sample are not displayed. </div> <br>

Models

The following deep learning models were evaluated:

- Convolutional-based Architectures: U-Net, PAN, MANet
- Transformer-based Architectures: BEiT, SegFormer
- Mamba-based Architectures: Swin-UMamba

These architectures were selected for their prominence and proven efficacy in semantic segmentation tasks.
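For illustration only, the convolutional baselines could be instantiated with the `segmentation_models_pytorch` package; this README does not state which implementations were used, and the encoder choice and input-channel count below are assumptions.

```python
import segmentation_models_pytorch as smp

# Hypothetical setup: six spectral input channels and a single-channel mangrove mask.
IN_CHANNELS, NUM_CLASSES = 6, 1

convolutional_baselines = {
    "U-Net": smp.Unet(encoder_name="resnet34", encoder_weights=None,
                      in_channels=IN_CHANNELS, classes=NUM_CLASSES),
    "PAN": smp.PAN(encoder_name="resnet34", encoder_weights=None,
                   in_channels=IN_CHANNELS, classes=NUM_CLASSES),
    "MANet": smp.MAnet(encoder_name="resnet34", encoder_weights=None,
                       in_channels=IN_CHANNELS, classes=NUM_CLASSES),
}
```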

Dataset Preprocessing & Training Pipeline

<p align="center"> <img src="images/papers_pipeline.png" alt="Pipeline" width="800px"/> </p> <div align="center"> A flowchart representing the steps of the satellite image processing pipeline for the prediction of mangrove locations. </div> <br>

The preprocessing starts with acquiring Sentinel-2 images from the Copernicus dataset, followed by zone definition, annotation, data filtering, augmentation, and normalization. This thorough preparation optimizes the images for the predictive models, enhancing both accuracy and efficiency.
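A minimal sketch of the augmentation and normalization step is shown below, assuming NumPy arrays with a `(bands, H, W)` image and an `(H, W)` binary mask; the exact transforms applied to MagSet-2 are not specified here, so the random flips and per-band standardization are only illustrative stand-ins.

```python
import numpy as np

def augment_and_normalize(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Illustrative augmentation + normalization for one (image, mask) pair."""
    # Random horizontal / vertical flips applied jointly to image and mask.
    if rng.random() < 0.5:
        image, mask = image[..., ::-1], mask[..., ::-1]
    if rng.random() < 0.5:
        image, mask = image[:, ::-1, :], mask[::-1, :]

    # Per-band standardization (zero mean, unit variance).
    mean = image.mean(axis=(1, 2), keepdims=True)
    std = image.std(axis=(1, 2), keepdims=True) + 1e-8
    image = (image - mean) / std

    # Return contiguous copies so the arrays can be fed to a tensor framework.
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)
```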

Building on this, the training phase uses sigmoid output activations and the AdamW optimizer, with a learning rate that is halved after every seven stagnant epochs. Training minimizes the Binary Cross-Entropy loss. All models are kept at a similar computational complexity to allow a fair comparison. Training runs on an NVIDIA Tesla V100-SXM2 32 GB GPU and produces segmentation maps in which each pixel indicates the likelihood of mangrove presence. The hyperparameters used are summarized in the table below.

<div align="center">
ParameterValue
Batch Size32
Learning Rate for Convolutional Models0.0001
Learning Rate for Transformer Models0.0005
Learning Rate for the Mamba Model0.0005
Number of Epochs100
</div>
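Putting these settings together, the following is a minimal PyTorch sketch of the training setup described above (AdamW, a plateau scheduler that halves the learning rate after seven stagnant epochs, and sigmoid + Binary Cross-Entropy via `BCEWithLogitsLoss`). It is not the authors' exact training script; the data loaders and model are placeholders.

```python
import torch
from torch import nn

def train(model: nn.Module, train_loader, val_loader, lr: float,
          epochs: int = 100, device: str = "cuda"):
    """Minimal training-loop sketch mirroring the setup described above."""
    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy in one op
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    # Halve the learning rate after 7 epochs without validation-loss improvement.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=7)

    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        # Validation pass drives the plateau scheduler.
        model.eval()
        val_loss, batches = 0.0, 0
        with torch.no_grad():
            for images, masks in val_loader:
                images, masks = images.to(device), masks.to(device)
                val_loss += criterion(model(images), masks).item()
                batches += 1
        scheduler.step(val_loss / max(batches, 1))
```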

Results

We assessed the performance of each model using established image segmentation evaluation metrics, complemented by a qualitative analysis of the results. The key metrics are Intersection over Union (IoU), pixel accuracy, and F1-score, reported together with the number of parameters and the final loss:

<div align="center">
Method# Parameters (M)IoU (%)Accuracy (%)F1-score (%)Loss
U-Net32.5461.7678.5976.320.47
PAN34.7964.4481.1678.320.41
MANet33.3871.7585.8083.510.34
BEiT33.5970.7885.6682.870.48
SegFormer34.6372.3286.1383.910.42
Swin-UMamba32.3572.8786.6484.270.31
</div>

The performance of the selected deep learning models for mangrove segmentation on Sentinel-2 satellite imagery is reported in the table above. The models span three architectural groups: convolutional (U-Net, PAN, MANet), transformer-based (BEiT, SegFormer), and Mamba-based (Swin-UMamba). Training curves and qualitative segmentation results are shown below:

<p align="center"> <img src="images/evaluation.png" alt="Evaluation" width="800px"/> </p> <div align="center"> Comparative performance on Sentinel-2 test, using Training Set Loss (left), Test Set F1 Score (center), and Test Set Intersection over Union (IoU) (right). Each line represents a model: U-Net (neon blue), PAN (red), MANet (black), BEiT (green), SegFormer (yellow), and Swin-UMamba (dark blue) trained over 100 epochs. Lower loss values, higher F1 and IoU values indicate better performance. Swin-UMamba consistently shows superior performance over all metrics. </div> <br> <p align="center"> <img src="images/model_results.png" alt="Comparative visual segmentation results of mangrove areas" width="1000px"/> </p> <div align="center"> Comparative visual segmentation results of mangrove areas. The first column shows the original satellite images, the second column depicts the ground truth segmentation, and the subsequent columns display the segmentation results from U-Net, PAN, MANet, BEiT, SegFormer, and Swin-UMamba models. </div> <br>

License

This work is licensed under the MIT License. See the LICENSE file for details.