<div align="center"> <h1 align="center">ChangeMamba</h1> <h3>ChangeMamba: Remote Sensing Change Detection with Spatio-Temporal State Space Model</h3>

Hongruixuan Chen<sup>1 #</sup>, Jian Song<sup>1,2 #</sup>, Chengxi Han<sup>3</sup>, Junshi Xia<sup>2</sup>, Naoto Yokoya<sup>1,2 *</sup>

<sup>1</sup> The University of Tokyo, <sup>2</sup> RIKEN AIP, <sup>3</sup> Wuhan University.

<sup>#</sup> Equal contribution, <sup>*</sup> Corresponding author


Overview | Get Started | Taken Away | Common Issues | Others | 简体中文版 (Simplified Chinese)


</div>

🛎️Updates

🔭Overview

<p align="center"> <img src="figures/network_architecture.png" alt="accuracy" width="90%"> </p> <p align="center"> <img src="figures/STLM.png" alt="arch" width="60%"> </p>

🗝️Let's Get Started!

A. Installation

Note that the code in this repo has been tested only on Linux; we have not verified whether it works on other operating systems.

This repo is built on the VMamba repo, so you need to set up its environment first. The following installation steps are adapted from the VMamba repo.

Step 1: Clone the repository:

Clone this repository and navigate to the project directory:

git clone https://github.com/ChenHongruixuan/MambaCD.git
cd MambaCD

Step 2: Environment Setup:

It is recommended to set up a conda environment and install the dependencies via pip. Use the following commands to set up your environment:

Create and activate a new conda environment

conda create -n changemamba
conda activate changemamba

Install dependencies

pip install -r requirements.txt
cd kernels/selective_scan && pip install .

Dependencies for "Detection" and "Segmentation" (optional in VMamba)

pip install mmengine==0.10.1 mmcv==2.1.0 opencv-python-headless ftfy regex
pip install mmdet==3.3.0 mmsegmentation==1.2.2 mmpretrain==1.2.0
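
After installation, a quick sanity check can confirm that PyTorch sees your GPU and that the compiled selective-scan extension is importable. The snippet below is a minimal sketch, not part of the official setup; the module name selective_scan_cuda_oflex is taken from the Common Issues section below and may differ for your build.

# Minimal environment sanity check, run inside the changemamba environment
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# The kernel compiled from kernels/selective_scan should be importable;
# the module name below is the one mentioned in Common Issues and may vary.
try:
    import selective_scan_cuda_oflex  # noqa: F401
    print("selective_scan kernel: OK")
except ImportError as err:
    print("selective_scan kernel not found:", err)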

B. Download Pretrained Weights

Please also download the pretrained weights of VMamba-Tiny, VMamba-Small, and VMamba-Base (download links are listed in the Results Taken Away section below) and put them under:

project_path/MambaCD/pretrained_weight/

C. Data Preparation

Binary change detection

Three datasets, SYSU, LEVIR-CD+, and WHU-CD, are used for the binary change detection experiments. Please download them and organize them into the following folder/file structure (a sketch for generating the data name lists follows the structure):

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data/SYSU
├── train
│   ├── T1
│   │   ├──00001.png
│   │   ├──00002.png
│   │   ├──00003.png
│   │   ...
│   │
│   ├── T2
│   │   ├──00001.png
│   │   ... 
│   │
│   └── GT
│       ├──00001.png 
│       ...   
│   
├── test
│   ├── ...
│   ...
│  
├── train.txt   # Data name list, recording all the names of training data
└── test.txt    # Data name list, recording all the names of testing data
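
The train.txt and test.txt files are plain text lists with one sample name per line. If your download does not include them, a sketch like the following can generate them from the T1 folder; whether the dataloader expects names with or without the file extension is an assumption you should verify against the code.

# Hypothetical helper: write a data name list (one name per line) from <split>/T1
import os

def write_name_list(dataset_root, split):
    t1_dir = os.path.join(dataset_root, split, "T1")
    names = sorted(os.listdir(t1_dir))
    with open(os.path.join(dataset_root, split + ".txt"), "w") as f:
        f.write("\n".join(names))

write_name_list("/home/username/data/SYSU", "train")
write_name_list("/home/username/data/SYSU", "test")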

Semantic change detection

The SECOND dataset is used for the semantic change detection experiments. Please download it and organize it into the following folder/file structure. Note that in the original SECOND dataset the land-cover maps are RGB images meant for visualization, so you need to convert them into single-channel label maps. In addition, the binary change maps need to be generated by yourself and placed in the [GT_CD] folder; a preprocessing sketch follows the directory structure below.

Alternatively, you are welcome to directly download and use our preprocessed SECOND dataset.

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data/SECOND
├── train
│   ├── T1
│   │   ├──00001.png
│   │   ├──00002.png
│   │   ├──00003.png
│   │   ...
│   │
│   ├── T2
│   │   ├──00001.png
│   │   ... 
│   │
│   ├── GT_CD   # Binary change map
│   │   ├──00001.png 
│   │   ... 
│   │
│   ├── GT_T1   # Land-cover map of T1
│   │   ├──00001.png 
│   │   ...  
│   │
│   └── GT_T2   # Land-cover map of T2
│       ├──00001.png 
│       ...  
│   
├── test
│   ├── ...
│   ...
│ 
├── train.txt
└── test.txt
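
As a rough illustration of the preprocessing described above, the sketch below converts the RGB land-cover maps into single-channel index maps and derives the binary change maps for [GT_CD]. The RGB-to-class palette and the rule that a pixel counts as changed whenever either land-cover map carries a non-zero ("not no-change") class are assumptions made for illustration only; please check them against the SECOND dataset definition and Issues #13 / #22 / #45 before relying on them.

# Illustrative SECOND preprocessing sketch (not an official script).
# Assumptions: PALETTE maps the RGB colours of the original maps to class indices,
# index 0 means "no change", and the binary change map is 255 where either
# land-cover map has a non-zero class. Verify these against the dataset definition.
import os
import numpy as np
from PIL import Image

PALETTE = {
    (255, 255, 255): 0,  # hypothetical "no change" colour; fill in the real palette
    # (0, 128, 0): 1,    # e.g. tree
}

def rgb_to_index(rgb_map):
    index = np.zeros(rgb_map.shape[:2], dtype=np.uint8)
    for colour, cls in PALETTE.items():
        index[np.all(rgb_map == colour, axis=-1)] = cls
    return index

def convert_sample(split_dir, name):
    t1 = rgb_to_index(np.array(Image.open(os.path.join(split_dir, "GT_T1", name)).convert("RGB")))
    t2 = rgb_to_index(np.array(Image.open(os.path.join(split_dir, "GT_T2", name)).convert("RGB")))
    change = ((t1 != 0) | (t2 != 0)).astype(np.uint8) * 255
    os.makedirs(os.path.join(split_dir, "GT_CD"), exist_ok=True)
    Image.fromarray(t1).save(os.path.join(split_dir, "GT_T1", name))  # overwrite with single-channel map
    Image.fromarray(t2).save(os.path.join(split_dir, "GT_T2", name))
    Image.fromarray(change).save(os.path.join(split_dir, "GT_CD", name))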

Building damage assessment

The xBD dataset can be downloaded from the xView2 Challenge website. After downloading it, please organize it into the following structure (a sketch for generating the data name lists follows the structure):

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data/xBD
├── train
│   ├── images
│   │   ├──guatemala-volcano_00000000_pre_disaster.png
│   │   ├──guatemala-volcano_00000000_post_disaster.png
│   │   ...
│   │
│   └── targets
│       ├──guatemala-volcano_00000003_pre_disaster_target.png
│       ├──guatemala-volcano_00000003_post_disaster_target.png
│       ... 
│   
├── test
│   ├── ...
│   ...
│
├── holdout
│   ├── ...
│   ...
│
├── train.txt # Data name list, recording all the names of training data
├── test.txt  # Data name list, recording all the names of testing data
└── holdout.txt  # Data name list, recording all the names of holdout data
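
If the xBD list files are missing, a helper analogous to the one in the binary change detection subsection can build them from the base scene names (with the _pre_disaster / _post_disaster suffixes stripped); whether the dataloader expects these base names or full file names is an assumption to verify against the code.

# Hypothetical helper: collect base scene names for each xBD split into a list file
import os

def write_xbd_list(dataset_root, split):
    img_dir = os.path.join(dataset_root, split, "images")
    names = sorted({
        f.replace("_pre_disaster.png", "").replace("_post_disaster.png", "")
        for f in os.listdir(img_dir) if f.endswith(".png")
    })
    with open(os.path.join(dataset_root, split + ".txt"), "w") as f:
        f.write("\n".join(names))

for split in ("train", "test", "holdout"):
    write_xbd_list("/home/username/data/xBD", split)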

D. Model Training

Before training models, please enter the [changedetection] folder, which contains all the code for network definitions, training, and testing.

cd <project_path>/MambaCD/changedetection

Binary change detection

The following commands show how to train and evaluate MambaBCD-Small on the SYSU dataset:

python script/train_MambaBCD.py  --dataset 'SYSU' \
                                 --batch_size 16 \
                                 --crop_size 256 \
                                 --max_iters 320000 \
                                 --model_type MambaBCD_Small \
                                 --model_param_path '<project_path>/MambaCD/changedetection/saved_models' \
                                 --train_dataset_path '<dataset_path>/SYSU/train' \
                                 --train_data_list_path '<dataset_path>/SYSU/train_list.txt' \
                                 --test_dataset_path '<dataset_path>/SYSU/test' \
                                 --test_data_list_path '<dataset_path>/SYSU/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml' \
                                 --pretrained_weight_path '<project_path>/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'
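
Note that training length is specified in iterations rather than epochs (see the related entry in Common Issues). A rough back-of-the-envelope conversion is epochs ≈ max_iters × batch_size / num_train_samples; the training set size below is a placeholder, not the actual SYSU size.

# Rough epoch estimate; num_train_samples is hypothetical
max_iters, batch_size, num_train_samples = 320000, 16, 10000
print(max_iters * batch_size / num_train_samples)  # ≈ 512 passes over the training set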

Semantic change detection

The following commands show how to train and evaluate MambaSCD-Small on the SECOND dataset:

python script/train_MambaSCD.py  --dataset 'SECOND' \
                                 --batch_size 16 \
                                 --crop_size 256 \
                                 --max_iters 800000 \
                                 --model_type MambaSCD_Small \
                                 --model_param_path '<project_path>/MambaCD/changedetection/saved_models' \
                                 --train_dataset_path '<dataset_path>/SECOND/train' \
                                 --train_data_list_path '<dataset_path>/SECOND/train_list.txt' \
                                 --test_dataset_path '<dataset_path>/SECOND/test' \
                                 --test_data_list_path '<dataset_path>/SECOND/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml' \
                                 --pretrained_weight_path '<project_path>/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'

Building Damage Assessment

The following commands show how to train and evaluate MambaBDA-Small on the xBD dataset:

python script/train_MambaBDA.py  --dataset 'xBD' \
                                 --batch_size 16 \
                                 --crop_size 256 \
                                 --max_iters 800000 \
                                 --model_type MambaBDA_Small \
                                 --model_param_path '<project_path>/MambaCD/changedetection/saved_models' \
                                 --train_dataset_path '<dataset_path>/xBD/train' \
                                 --train_data_list_path '<dataset_path>/xBD/train_list.txt' \
                                 --test_dataset_path '<dataset_path>/xBD/test' \
                                 --test_data_list_path '<dataset_path>/xBD/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml' \
                                 --pretrained_weight_path '<project_path>/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'

E. Inference Using Our/Your Weights

Before inference, please enter the [changedetection] folder.

cd <project_path>/MambaCD/changedetection

Binary change detection

The following commands show how to infer binary change maps using trained MambaBCD-Tiny on the LEVIR-CD+ dataset:

python script/infer_MambaBCD.py  --dataset 'LEVIR-CD+' \
                                 --model_type 'MambaBCD_Tiny' \
                                 --test_dataset_path '<dataset_path>/LEVIR-CD+/test' \
                                 --test_data_list_path '<dataset_path>/LEVIR-CD+/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_tiny_224_0229flex.yaml' \
                                 --resume '<saved_model_path>/MambaBCD_Tiny_LEVIRCD+_F1_0.8803.pth'

Semantic change detection

The following commands show how to infer semantic change maps using trained MambaSCD-Tiny on the SECOND dataset:

python script/infer_MambaSCD.py  --dataset 'SECOND'  \
                                 --model_type 'MambaSCD_Tiny' \
                                 --test_dataset_path '<dataset_path>/SECOND/test' \
                                 --test_data_list_path '<dataset_path>/SECOND/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_tiny_224_0229flex.yaml' \
                                 --resume '<saved_model_path>/[your_trained_model].pth'

Building damage assessment

The following commands show how to infer building damage assessment results using trained MambaBDA-Tiny on the xBD dataset:

python script/infer_MambaBDA.py  --dataset 'xBD'  \
                                 --model_type 'MambaBDA_Tiny' \
                                 --test_dataset_path '<dataset_path>/xBD/test' \
                                 --test_data_list_path '<dataset_path>/xBD/test_list.txt' \
                                 --cfg '<project_path>/MambaCD/changedetection/configs/vssm1/vssm_tiny_224_0229flex.yaml' \
                                 --resume '<saved_model_path>/[your_trained_model].pth'

⚗️Results Taken Away

A. Pretrained Weights of VMamba (Encoder)

| Method | ImageNet (ckpt) |
| :---: | :---: |
| VMamba-Tiny | [Zenodo] [GDrive] [BaiduYun] |
| VMamba-Small | [Zenodo] [GDrive] [BaiduYun] |
| VMamba-Base | [Zenodo] [GDrive] [BaiduYun] |

B. Binary Change Detection

| Method | SYSU (ckpt) | LEVIR-CD+ (ckpt) | WHU-CD (ckpt) |
| :---: | :---: | :---: | :---: |
| MambaBCD-Tiny | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] |
| MambaBCD-Small | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] |
| MambaBCD-Base | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] | [Zenodo] [GDrive] [BaiduYun] |

C. Semantic Change Detection

| Method | SECOND (ckpt) |
| :---: | :---: |
| MambaSCD-Tiny | [Zenodo] [GDrive] [BaiduYun] |
| MambaSCD-Small | -- |
| MambaSCD-Base | [Zenodo] [GDrive] [BaiduYun] |

D. Building Damage Assessment

| Method | xBD (ckpt) |
| :---: | :---: |
| MambaBDA-Tiny | -- |
| MambaBDA-Small | -- |
| MambaBDA-Base | -- |

🤔Common Issues

Based on questions raised in the issues, here is a quick navigation list of solutions to some common problems.

| Issue | Solution |
| :--- | :--- |
| Issues about the SECOND dataset | Please refer to Issue #13 / #22 / #45 |
| CUDA out of memory | Please lower the batch size for training and evaluation |
| Modifying the model structure | Please refer to Issue #44 |
| NameError: name 'selective_scan_cuda_oflex' is not defined | Please refer to Issue #9 |
| Relationship between iteration, epoch, and batch size | Please refer to Issue #32 / #48 |
| Inference using trained models gives low accuracy | Please use --resume instead of --pretrained_weight_path to load the trained model's weights |

📜Reference

If this code or dataset contributes to your research, please kindly consider citing our paper and giving this repo a ⭐️ :)

@article{chen2024changemamba,
  author={Hongruixuan Chen and Jian Song and Chengxi Han and Junshi Xia and Naoto Yokoya},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={ChangeMamba: Remote Sensing Change Detection with Spatiotemporal State Space Model}, 
  year={2024},
  volume={62},
  number={},
  pages={1-20},
  doi={10.1109/TGRS.2024.3417253}
}

🤝Acknowledgments

This project is based on VMamba (paper, code), ScanNet (paper, code), and BDANet (paper, code). Thanks for their excellent work!

🙋Q & A

For any questions, please feel free to contact us.
