<div align="center"> <h2> Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection </h2> </div> <br>

<br>

<div align="center"> <a href="https://kychen.me/TTP"> <span style="font-size: 20px; ">Project Page</span> </a> &nbsp;&nbsp;&nbsp;&nbsp; <a href="https://arxiv.org/abs/2312.16202"> <span style="font-size: 20px; ">arXiv</span> </a> &nbsp;&nbsp;&nbsp;&nbsp; <a href="https://huggingface.co/spaces/KyanChen/TTP"> <span style="font-size: 20px; ">HFSpace</span> </a> &nbsp;&nbsp;&nbsp;&nbsp; <a href="resources/ttp.pdf"> <span style="font-size: 20px; ">PDF</span> </a> </div> <br> <br>


<br> <br> <div align="center">

English | 简体中文

</div>

Introduction

The repository is the code implementation of the paper Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection, based on MMSegmentation and Open-CD projects.

The current branch has been tested with PyTorch 2.x and CUDA 12.1. It supports Python 3.7+ and should be compatible with most CUDA versions.
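As a quick sanity check of the environment, you can verify the installed PyTorch version before proceeding. This is a minimal sketch; the `parse_major` helper below is illustrative and not part of the repository:

```python
def parse_major(version: str) -> int:
    """Return the major component of a version string like '2.1.2+cu121'."""
    return int(version.split("+")[0].split(".")[0])

try:
    import torch
    if parse_major(torch.__version__) < 2:
        print("Warning: TTP targets PyTorch 2.x; found", torch.__version__)
    else:
        print("torch", torch.__version__, "CUDA", torch.version.cuda)
except ImportError:
    print("PyTorch is not installed yet; see the installation steps below.")
```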

If you find this project helpful, please give us a star ⭐️; your support is our greatest motivation.


Update Log

🌟 2023.12.23 Released the TTP project code, which is completely consistent with the API interface and usage of MMSegmentation.

🌟 2023.12.30 Released the model trained on Levir-CD.

🌟 2024.02.10 This project has been included in the Open-CD project.

Table of Contents

Installation

Dependencies

Environment Installation

We recommend using Miniconda for installation. The following commands will create a virtual environment named ttp and install PyTorch and MMCV.

Note: If you have experience with PyTorch and have already installed it, you can skip to the next section. Otherwise, you can follow these steps to prepare.

<details>

Step 0: Install Miniconda.

Step 1: Create a virtual environment named ttp and activate it.

conda create -n ttp python=3.10 -y
conda activate ttp

Step 2: Install PyTorch 2.1.x.

Linux/Windows:

pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121

Or

conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia

Step 3: Install MMCV 2.1.x.

pip install -U openmim
mim install mmcv==2.1.0

Step 4: Install other dependencies.

pip install -U wandb einops importlib peft==0.8.2 scipy ftfy prettytable torchmetrics==1.3.1 transformers==4.38.1
</details>

Install TTP

Download or clone the TTP repository.

git clone git@github.com:KyanChen/TTP.git
cd TTP

Dataset Preparation

<details>

Levir-CD Change Detection Dataset

Dataset Download

Organization Method

You can also choose other sources to download the data, but you need to organize the dataset in the following format:

${DATASET_ROOT} # Dataset root directory, for example: /home/username/data/levir-cd
├── train
│   ├── A
│   ├── B
│   └── label
├── val
│   ├── A
│   ├── B
│   └── label
└── test
    ├── A
    ├── B
    └── label

Note: In the project folder, we provide a folder named data, which contains an example of the organization method of the above dataset.

Other Datasets

If you want to use other datasets, you can refer to MMSegmentation documentation to prepare the datasets.

</details>

Model Training

TTP Model

Config File and Main Parameter Parsing

We provide the configuration files of the TTP model used in the paper, which can be found in the configs/TTP folder. The Config file is completely consistent with the API interface and usage of MMSegmentation. Below we provide an analysis of some of the main parameters. If you want to know more about the meaning of the parameters, you can refer to MMSegmentation documentation.

<details>

Parameter Parsing:

</details>

Single Card Training

python tools/train.py configs/TTP/xxx.py  # xxx.py is the configuration file you want to use

Multi-card Training

sh ./tools/dist_train.sh configs/TTP/xxx.py ${GPU_NUM}  # xxx.py is the configuration file you want to use, GPU_NUM is the number of GPUs used

Other Change Detection Models

<details>

If you want to use other change detection models, you can refer to Open-CD to train the models, or you can put their Config files into the configs folder of this project, and then train them according to the above method.

</details>

Model Testing

Single Card Testing:

python tools/test.py configs/TTP/xxx.py ${CHECKPOINT_FILE}  # xxx.py is the configuration file you want to use, CHECKPOINT_FILE is the checkpoint file you want to use

Multi-card Testing:

sh ./tools/dist_test.sh configs/TTP/xxx.py ${CHECKPOINT_FILE} ${GPU_NUM}  # xxx.py is the configuration file you want to use, CHECKPOINT_FILE is the checkpoint file you want to use, GPU_NUM is the number of GPUs used

Note: If you need the visualization results, you can uncomment the `visualization` entry under `default_hooks` in the config file.
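The relevant fragment of an MMSegmentation-style config looks roughly like the sketch below; the exact hook types and arguments in this repo's configs may differ, so treat the names here as illustrative:

```python
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    checkpoint=dict(type='CheckpointHook', interval=4000),
    # Enable this entry to save visualized predictions during testing:
    visualization=dict(type='SegVisualizationHook', draw=True, interval=1),
)
```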

Image Prediction

Single Image Prediction:

python demo/image_demo_with_cdinferencer.py ${IMAGE_FILE1} ${IMAGE_FILE2} configs/TTP/ttp_sam_large_levircd_infer.py --checkpoint ${CHECKPOINT_FILE} --out-dir ${OUTPUT_DIR}  # IMAGE_FILE is the image file you want to predict, xxx.py is the configuration file, CHECKPOINT_FILE is the checkpoint file you want to use, OUTPUT_DIR is the output path of the prediction result

FAQ

<details>

We have listed some common problems and their solutions here. If you find any missing, please feel free to open a PR to enrich this list. If you cannot find help here, please open an issue and fill in all the required information in the template; this will help us locate the problem faster.

1. Do I need to install MMSegmentation, MMPretrain, MMDet, Open-CD?

We recommend that you do not install them, because we have partially modified their code, and installing them may cause errors. If you get an error that a module has not been registered, please check whether any of MMSegmentation, MMPretrain, MMDet, or Open-CD is installed in your environment; if so, uninstall it.

2. About resource consumption

Here we list the resource consumption of using different training methods for your reference.

| Model Name | Backbone Type | Image Size | GPU | Batch Size | Acceleration Strategy | Single Card Memory Usage | Training Time |
|---|---|---|---|---|---|---|---|
| TTP | ViT-L/16 | 512x512 | 4x RTX 4090 24G | 2 | FP32 | 14 GB | 3 h |
| TTP | ViT-L/16 | 512x512 | 4x RTX 4090 24G | 2 | FP16 | 12 GB | 2 h |

3. Solution to `dist_train.sh`: Bad substitution

If you get a `Bad substitution` error when running `dist_train.sh`, run the script with `bash ./tools/dist_train.sh` instead.

4. "You should set `PYTHONPATH` to make `sys.path` include the directory which contains your custom module"

Please check the detailed error message; generally, some dependency packages are not installed. Install them with `pip install`.

</details>

Acknowledgements

This project is built upon the MMSegmentation and Open-CD projects; we thank their developers.

Citation

If you use the code or performance benchmarks of this project in your research, please cite TTP with the following BibTeX entry.

@misc{chen2023time,
      title={Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection}, 
      author={Keyan Chen and Chengyang Liu and Wenyuan Li and Zili Liu and Hao Chen and Haotian Zhang and Zhengxia Zou and Zhenwei Shi},
      year={2023},
      eprint={2312.16202},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

The repository is licensed under the Apache 2.0 license.

Contact Us

If you have any other questions ❓, please feel free to contact us 👬