MATTER: MATerial and TExture Representation Learning

This is the official implementation of "Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks" (CVPR 2022, oral).

By Peri Akiva, Matthew Purri, and Matt Leotta

Getting Started

*(Teaser figure)*

Additional details will be added soon.

Environment Setup

Self-Supervised Training

Dataset

Option 1: Download the dataset from Google Drive:

Download from here (216.1 GB)

Option 2: Use the Python script to download the dataset:
  1. Set the paths used by the download_region.py script:
    export PEO_DOWNLOAD_DIR=/path/to/save/peo/data/
    export PEO_REGION_DOWNLOAD_SCRIPT_PATH=/<root>/tools/download_region.py

  2. Run the PEO download bash script:
    bash ./tools/download_dataset.sh

Main Results on Onera Change Detection

*(Onera qualitative results figure, from the paper)*

| Method | Sup. | Precision (%) | Recall (%) | F-1 (%) |
|---|---|---|---|---|
| U-Net (random) | F | 70.53 | 19.17 | 29.44 |
| U-Net (ImageNet) | F | 70.42 | 25.12 | 36.20 |
| MoCo-v2 | S + F | 64.49 | 30.94 | 40.71 |
| SeCo | S + F | 65.47 | 38.06 | 46.94 |
| DeepLab-v3 (ImageNet) | F | 51.63 | 51.06 | 53.04 |
| Ours (fine-tuned) | S + F | 61.80 | 57.13 | 59.37 |
| VCA | S | 9.92 | 20.77 | 13.43 |
| MoCo-v2 | S | 29.21 | 11.92 | 16.93 |
| SeCo | S | 74.70 | 15.20 | 25.26 |
| Ours | S | 37.52 | 72.65 | 49.48 |

Precision, recall, and F-1 scores (%, higher is better) for the "change" class on the Onera Satellite Change Detection (OSCD) validation set. F and S denote full supervision and self-supervision, respectively; S + F denotes self-supervised pretraining followed by fully supervised fine-tuning. "Random" and "ImageNet" indicate the backbone weight initialization used by each method.
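As a sanity check, the F-1 column is the harmonic mean of the precision and recall columns. A minimal sketch (the function name `f1_score` is ours; the numeric values come from the table above):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# Values taken from the table above (percent).
print(round(f1_score(37.52, 72.65), 2))  # Ours (S) -> 49.48
print(round(f1_score(61.80, 57.13), 2))  # Ours (fine-tuned, S + F) -> 59.37
```

This also makes the precision/recall trade-off visible: the self-supervised "Ours" row wins on F-1 despite lower precision because its recall is much higher.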

Citing MATTER

@InProceedings{Akiva_2022_CVPR,
    author    = {Akiva, Peri and Purri, Matthew and Leotta, Matthew},
    title     = {Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {8203-8215}
}