OmniSat: Self-Supervised Modality Fusion for Earth Observation (ECCV 2024)

Official implementation of OmniSat: Self-Supervised Modality Fusion for Earth Observation.

Description

Abstract

We introduce OmniSat, a novel architecture that exploits the spatial alignment between multiple EO modalities to learn expressive multimodal representations without labels. We demonstrate the advantages of combining modalities of different natures across three downstream tasks (forestry, land cover classification, and crop mapping), and propose two augmented datasets with new modalities: PASTIS-HD and TreeSatAI-TS.

<p align="center"> <img src="https://github.com/gastruc/OmniSat/assets/1902679/9fc20951-1cac-4891-b67f-53ed5e0675ad" width="500" height="250"> </p>
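
As a rough illustration of the idea (not the actual OmniSat objective or code), spatial alignment means that patches from different modalities covering the same ground footprint can supervise each other without labels, for instance with a contrastive loss over aligned patch embeddings. All names and shapes below are illustrative:

```python
# Toy sketch: patches of two modalities that cover the same location are positives,
# every other pair is a negative (InfoNCE-style). Not the OmniSat training code.
import torch
import torch.nn.functional as F

def alignment_loss(tokens_a, tokens_b, temperature=0.07):
    # tokens_a, tokens_b: (N, D) embeddings of N spatially aligned patches, one per modality
    a = F.normalize(tokens_a, dim=-1)
    b = F.normalize(tokens_b, dim=-1)
    logits = a @ b.t() / temperature        # (N, N) cross-modal similarities
    targets = torch.arange(a.size(0))       # the i-th patches of each modality match
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# 64 aligned patches, 256-d tokens from two hypothetical modality encoders
loss = alignment_loss(torch.randn(64, 256), torch.randn(64, 256))
```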

Datasets

| Dataset name | Modalities | Labels | Link |
|---|---|---|---|
| PASTIS-HD | SPOT 6-7 (1m) + S1/S2 (30-140 / year) | Crop mapping (0.2m) | huggingface or zenodo |
| TreeSatAI-TS | Aerial (0.2m) + S1/S2 (10-70 / year) | Forestry (60m) | huggingface |
| FLAIR | Aerial (0.2m) + S2 (20-114 / year) | Land cover (0.2m) | huggingface |
<p align="center"> <img src="https://github.com/user-attachments/assets/18acbb19-6c90-4c9a-be05-0af24ded2052" width="500" height="250"> </p>

Results

We perform experiments with 100% and with 10-20% of the labels. The table below reports F1 scores using 100% of the training data with all modalities available:

| F1 Score (all modalities) | UT&T | Scale-MAE | DOFA | OmniSat (no pretraining) | OmniSat (with pretraining) |
|---|---|---|---|---|---|
| PASTIS-HD | 53.5 | 42.2 | 55.7 | 59.1 | 69.9 |
| TreeSatAI-TS | 56.7 | 60.4 | 71.3 | 73.3 | 74.2 |
| FLAIR | 48.8 | 70.0 | 74.9 | 70.0 | 73.4 |

OmniSat also improves performance when only one modality is available at inference. The table below reports F1 scores using 100% of the training data with only S2 data available:

| F1 Score (S2 only) | UT&T | Scale-MAE | DOFA | OmniSat (no pretraining) | OmniSat (with pretraining) |
|---|---|---|---|---|---|
| PASTIS-HD | 61.3 | 46.1 | 53.4 | 60.1 | 70.8 |
| TreeSatAI-TS | 57.0 | 31.5 | 39.4 | 49.7 | 62.9 |
| FLAIR | 62.0 | 61.0 | 61.0 | 65.4 | 65.4 |

Efficiency

We report the best performance of each model across TreeSatAI and TreeSatAI-TS, with pre-training and fine-tuning using 100% of the labels. The area of each marker is proportional to the training time, broken down into pre-training and fine-tuning when applicable.

<p align="center"> <img src="https://github.com/user-attachments/assets/0e6a378a-024a-4224-ad1d-fa7171df5adf" width="550" height="250"> </p>

Project Structure

The directory structure of the project looks like this:

```
├── configs                   <- Hydra configs
│   ├── callbacks                <- Callbacks configs
│   ├── dataset                  <- Data configs
│   ├── debug                    <- Debugging configs
│   ├── exp                      <- Experiment configs
│   ├── extras                   <- Extra utilities configs
│   ├── hparams_search           <- Hyperparameter search configs
│   ├── hydra                    <- Hydra configs
│   ├── local                    <- Local configs
│   ├── logger                   <- Logger configs
│   ├── model                    <- Model configs
│   ├── paths                    <- Project paths configs
│   ├── trainer                  <- Trainer configs
│   │
│   ├── config.yaml            <- Main config for training
│   └── eval.yaml              <- Main config for evaluation
│
├── data                   <- Project data
│
├── logs                   <- Logs generated by hydra and lightning loggers
│
├── src                    <- Source code
│   ├── data                     <- Data scripts
│   ├── models                   <- Model scripts
│   ├── utils                    <- Utility scripts
│   │
│   ├── eval.py                  <- Run evaluation
│   ├── train_pastis_20.py       <- Run training on 20% pastis dataset
│   └── train.py                 <- Run training
│
├── .env.example              <- Example of file for storing private environment variables
├── .gitignore                <- List of files ignored by git
├── .project-root             <- File for inferring the position of project root directory
├── environment.yaml          <- File for installing conda environment
├── Makefile                  <- Makefile with commands like `make train` or `make test`
├── pyproject.toml            <- Configuration options for testing and linting
├── requirements.txt          <- File for installing python dependencies
├── setup.py                  <- File for installing project as a package
└── README.md
```

Getting the data
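
Each dataset is linked in the table above. As a minimal sketch, assuming a dataset is hosted as a Hugging Face dataset repository, it can be downloaded into the `data/` folder created in the Quickstart with `huggingface_hub` (the `repo_id` below is a placeholder; use the identifier from the corresponding link):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<org>/<dataset-name>",  # placeholder: take the id from the dataset links above
    repo_type="dataset",
    local_dir="data/",               # the folder created in the Quickstart below
)
```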

🚀  Quickstart

```bash
# clone project
git clone https://github.com/gastruc/OmniSat
cd OmniSat

# [OPTIONAL] create conda environment
conda create -n omni python=3.9
conda activate omni

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt

# Create data folder where you can put your datasets
mkdir data
# Create logs folder
mkdir logs
```
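
As an optional sanity check (assuming the steps above completed), verify that PyTorch is installed and sees a GPU before launching training:

```bash
python -c "import torch; print(torch.__version__, 'CUDA available:', torch.cuda.is_available())"
```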

Usage

Every experiment in the paper has its own config. Feel free to explore the configs/exp folder.

```bash
python src/train.py exp=TSAITS_OmniSAT   # run OmniSat pretraining on TreeSatAI-TS
# trainer.devices=X to set the number of GPUs to train on
# trainer.num_workers=16 to set the number of dataloader workers
# dataset.global_batch_size=16 to set the global batch size (i.e. the batch size distributed across all GPUs)
# offline=True to run wandb in offline mode
# max_epochs=1 to set the maximum number of epochs

python src/train.py exp=TSAITS_OmniSAT   # run OmniSat finetuning on TreeSatAI-TS
# model.name=OmniSAT_MM to change the model name used for logging
# partition=1.0 to set the fraction of the training data to use

# All these parameters and more can be changed from the config files
```
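
For example, a single run that combines several of the overrides listed above could look like this (the values are placeholders to adapt to your setup):

```bash
python src/train.py exp=TSAITS_OmniSAT trainer.devices=2 dataset.global_batch_size=16 max_epochs=100 offline=True
```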

To run the 20%-labels experiments on PASTIS-HD, run:

```bash
python src/train_pastis_20.py exp=Pastis_ResNet   # run a ResNet on PASTIS-HD
# the partition parameter has no effect on PASTIS-HD
```

Citation

To refer to this work, please cite:

```bibtex
@article{astruc2024omnisat,
  title={Omni{S}at: {S}elf-Supervised Modality Fusion for {E}arth Observation},
  author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
  journal={ECCV},
  year={2024}
}
```

Acknowledgements
