DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection

(Previously: DDPM-CD: Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models)

Wele Gedara Chaminda Bandara, Nithin Gopalakrishnan Nair, Vishal M. Patel

Official PyTorch implementation of DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection / Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models

Latest Version of the Paper

Updates:

1. Motivation & Contribution


Images sampled from the DDPM model pre-trained on off-the-shelf remote sensing images.

2. Method


We fine-tune a lightweight change classifier using the feature representations produced by the pre-trained DDPM, together with the change labels.

3. Usage

3.1 Requirements

Before using this repository, make sure you have the following prerequisites installed:

You can install PyTorch with the following command (in Linux OS):

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
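To verify the installation, you can run a quick check that PyTorch is importable and can see your GPU (this generic one-liner is not part of the repository):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"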

3.2 Installation

To get started, clone this repository:

git clone https://github.com/wgcban/ddpm-cd.git

Next, create the conda environment named ddpm-cd by executing the following command:

conda env create -f environment.yml

Then activate the environment:

conda activate ddpm-cd

Download the datasets and place them in the dataset folder. See Section 5.1 for download links.

If you only wish to test, download the pre-trained DDPM and fine-tuned DDPM-CD models and place them in the experiments folder. See Section 7 for links.

All train-val-test statistics are automatically uploaded to wandb. Please refer to the wandb quick-start documentation if you are not familiar with wandb.

4. Pre-training DDPM

4.1 Collect off-the-shelf remote sensing data to train diffusion model

Dump all the remote sensing data sampled from Google Earth Engine and any other publicly available remote sensing images into the dataset folder, or create a symlink to it.

4.2 Pre-train/resume (unconditional) DDPM

We use ddpm_train.json to set up the configurations. Update the dataset name and dataroot in the json file, then run the following command to start training the diffusion model. The results and log files will be saved to the experiments folder, and all metrics are uploaded to wandb.

python ddpm_train.py --config config/ddpm_train.json -enable_wandb -log_eval

If you want to resume training from a previously saved checkpoint, provide the path to the saved model in path/resume_state; otherwise keep it as null.
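For example, the path section of ddpm_train.json would then contain something like the following (the checkpoint path is illustrative, not an actual file shipped with the repository):

"path": {
    "resume_state": "experiments/ddpm_train/checkpoint/I100000_E50"
}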

4.3 Sampling from the pre-trained DDPM

If you want to generate samples from the pre-trained DDPM, first update the path to the trained diffusion model in [path][resume_state], then run the following command.

python ddpm_train.py --config config/ddpm_sampling.json --phase val

The generated images will be saved in experiments.

5. Fine-tuning for change detection

5.1 Download the change detection datasets

Download the change detection datasets from the following links. Place them inside your datasets folder.

Then, update the paths to those folders in [datasets][train][dataroot], [datasets][val][dataroot], and [datasets][test][dataroot] in levir.json, whu.json, dsifn.json, and cdd.json.
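For example, for LEVIR-CD the relevant entries in levir.json would look something like this (the folder names are illustrative and depend on where you extracted the dataset):

"datasets": {
    "train": { "dataroot": "datasets/LEVIR-CD/train" },
    "val":   { "dataroot": "datasets/LEVIR-CD/val" },
    "test":  { "dataroot": "datasets/LEVIR-CD/test" }
}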

5.2 Provide the path to pre-trained diffusion model

Update the path to the pre-trained diffusion model weights (*_gen.pth and *_opt.pth) in [path][resume_state] in levir.json, whu.json, dsifn.json, and cdd.json.

5.3 Indicate time-steps used for feature extraction

Indicate the time-steps used to extract feature representations in [model_cd][t]. As shown in the ablation section of the paper, our best model is obtained with time-steps {50, 100, 400}; however, {50, 100} works well too.
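For the best configuration reported in the paper, the entry would read:

"model_cd": {
    "t": [50, 100, 400]
}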

5.4 Start fine-tuning for change detection

Run the training script with the corresponding change detection config file to start fine-tuning.
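Assuming the same command pattern as the pre-training script above, the fine-tuning command for, e.g., LEVIR-CD would look something like this (the script name ddpm_cd.py is an assumption; check the repository root for the actual entry point):

python ddpm_cd.py --config config/levir.json -enable_wandb -log_eval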

The results will be saved in the experiments folder and also uploaded to wandb.

6. Testing

To obtain the predictions and performance metrics (IoU, F1, and OA), first provide the path to the pre-trained diffusion model in [path][resume_state] and the path to the trained change detection model (the best model) in [path_cd][resume_state] in levir_test.json, whu_test.json, dsifn_test.json, and cdd_test.json. Also make sure you specify the time-steps used in fine-tuning in [model_cd][t].

Run the testing script with the corresponding test config file to start evaluation.
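Following the same command pattern, a test run on, e.g., LEVIR-CD would look something like this (the script name ddpm_cd.py and the --phase test flag are assumptions; check the repository for the actual entry point):

python ddpm_cd.py --config config/levir_test.json --phase test -enable_wandb -log_eval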

Predictions will be saved in experiments and performance metrics will be uploaded to wandb.

7. Links to download pre-trained models

7.1 Pre-trained DDPM

The pre-trained diffusion model can be downloaded from: Dropbox

7.2 Fine-tuned DDPM-CD models

The fine-tuned change detection networks can be downloaded from the following links:

7.3 Downloading from Google Drive/GitHub

If you face problems downloading from Dropbox, try one of the following options:

7.4 Train/Val Reports on wandb

7.5 Test results on wandb

8. Results

8.1 Quantitative


The average quantitative change detection results on the LEVIR-CD, WHU-CD, DSIFN-CD, and CDD test sets. "-" indicates not reported or not available to us. (IN1k) indicates that the pre-training process is initialized with ImageNet pre-trained weights. IN1k, IBSD, and GE refer to ImageNet1k, the Inria Building Segmentation Dataset, and Google Earth, respectively.

8.2 Qualitative

9. Citation

@misc{bandara2024ddpmcdv2,
    title={Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models},
    author={Bandara, Wele Gedara Chaminda and Nair, Nithin Gopalakrishnan and Patel, Vishal M.},
    year={2022},
    eprint={2206.11892},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    doi={10.48550/ARXIV.2206.11892},
}

@misc{bandara2024ddpmcdv3,
    title={DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection},
    author={Wele Gedara Chaminda Bandara and Nithin Gopalakrishnan Nair and Vishal M. Patel},
    year={2024},
    eprint={2206.11892},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    doi={10.48550/ARXIV.2206.11892},
}

10. References