# ArSDM
This is the official implementation of *ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models*, presented at MICCAI 2023.
<p align="center"> <img src=assets/framework.png /> </p>

## Table of Contents
- Requirements
- Dataset Preparation
- Sampling with ArSDMs
- Training Your Own ArSDMs
- Downstream Evaluation
- Acknowledgement
- Citations
## Requirements

To get started, create a conda environment and install the following dependencies:

```shell
conda create -n ArSDM python=3.8
conda activate ArSDM
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
```
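After installing, a quick sanity check can confirm that the pinned PyTorch release is active. This is a hypothetical helper, not part of the ArSDM codebase; `env_ok` and its defaults are assumptions for illustration only.

```python
# Hypothetical sanity check -- not part of the ArSDM codebase.
import importlib.util


def env_ok(torch_version: str, expected: str = "1.11.0") -> bool:
    """True if the installed torch version matches the pinned release.

    Local build tags such as "+cu113" are ignored.
    """
    return torch_version.split("+")[0] == expected


# Only attempt the import if torch is actually installed.
if importlib.util.find_spec("torch") is not None:
    import torch  # available after the conda install above

    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
else:
    print("torch not installed yet -- run the commands above first")
```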
## Dataset Preparation

You can download the dataset from this repository.

Please organize the dataset with the following structure:
```
├── ${data_root}
│   ├── ${train_data_dir}
│   │   ├── images
│   │   │   ├── ***.png
│   │   ├── masks
│   │   │   ├── ***.png
│   ├── ${test_data_dir}
│   │   ├── images
│   │   │   ├── ***.png
│   │   ├── masks
│   │   │   ├── ***.png
```
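The layout above can be verified programmatically before training. The sketch below is a hypothetical helper (not part of this repo) that reports image files without a matching mask, and vice versa, assuming the `images`/`masks` subfolder names shown above.

```python
# Hypothetical structure check -- not part of the ArSDM codebase.
from pathlib import Path
from typing import Dict, List


def check_split(split_dir: Path) -> List[str]:
    """Return the file stems that have an image but no matching mask, or vice versa."""
    images = {p.stem for p in (split_dir / "images").glob("*.png")}
    masks = {p.stem for p in (split_dir / "masks").glob("*.png")}
    return sorted(images ^ masks)  # symmetric difference = unpaired files


def check_dataset(data_root: str, train_dir: str, test_dir: str) -> Dict[str, List[str]]:
    """Check both splits; an empty list means every image has a mask."""
    root = Path(data_root)
    return {split: check_split(root / split) for split in (train_dir, test_dir)}
```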
## Sampling with ArSDMs

### Model Zoo

We provide pre-trained models for various configurations:
| Ada. Loss | Refinement | Saved Epoch | Batch Size | GPU | Link |
| --- | --- | --- | --- | --- | --- |
| ✗ | ✗ | 94 | 8 | 2 × A100 (80GB) | OneDrive |
| ✔ | ✗ | 100 | 8 | 1 × A100 (80GB) | OneDrive |
| ✗ | ✔ | 2 | 8 | 1 × A100 (80GB) | OneDrive |
| ✔ | ✔ | 3 | 8 | 1 × A100 (80GB) | OneDrive |
Download the pre-trained weights above or follow the next section to train your own models.
Specify the `CKPT_PATH` and `RESULT_DIR` in the `sample.py` file and run the following command:

```shell
python sample.py
```
Illustrations of generated samples, with the corresponding masks and original images for reference, are shown below:

<p align="center"> <img src=assets/samples.png /> </p>

## Training Your Own ArSDMs
To train your own ArSDMs, follow these steps:

1. Specify the `train_data_dir` and `test_data_dir` in the corresponding `ArSDM_xxx.yaml` file in the `configs` folder.
2. Specify the `CONFIG_FILE_PATH` in the `main.py` file.
3. Run the following command:

```shell
python main.py
```
If you intend to train models with refinement, ensure that you have trained or downloaded the diffusion model weights and the PraNet weights. Specify the `ckpt_path` and `pranet_path` in the `ArSDM_xxx.yaml` config file.
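For orientation, a config fragment might look like the sketch below. Only the field names mentioned in this README (`ckpt_path`, `pranet_path`, `train_data_dir`, `test_data_dir`) come from the text; the nesting and the paths are illustrative assumptions, so match them against the actual `ArSDM_xxx.yaml` files in `configs`.

```yaml
# Illustrative fragment only -- the key layout in the real ArSDM_xxx.yaml may differ.
ckpt_path: /path/to/diffusion_weights.ckpt   # trained or downloaded diffusion model
pranet_path: /path/to/pranet_weights.pth     # pre-trained PraNet weights
data:
  train_data_dir: /path/to/data_root/TrainDataset
  test_data_dir: /path/to/data_root/TestDataset
```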
For example, to train a model with both adaptive loss and refinement (ArSDM), first train a diffusion model with adaptive loss only, using `ArSDM_adaptive.yaml`. Then point `ckpt_path` at the trained weights and use `ArSDM_our.yaml` to train the final model.
Please note that all experiments were conducted on NVIDIA A100 (80GB) GPUs with a batch size of 8. If your GPUs have less memory, reduce the `batch_size` in the config files accordingly.
## Downstream Evaluation

To perform downstream evaluation, follow the steps in the Sampling with ArSDMs section to sample image-mask pairs and create a new training dataset for downstream polyp segmentation and detection tasks. For training these tasks, refer to the official repositories:

- Polyp Segmentation:
- Polyp Detection:
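One common way to build that new training dataset is to merge the sampled image-mask pairs with the real ones into a single folder. The sketch below is a hypothetical helper under the `images`/`masks` layout shown earlier; `merge_pairs` and the prefixing scheme are assumptions, not part of this repo.

```python
# Hypothetical helper -- not part of the ArSDM codebase.
import shutil
from pathlib import Path


def merge_pairs(src_dirs, dst_dir):
    """Copy image/mask pairs from several source folders into one training set.

    Each copied file is prefixed with its source folder name to avoid
    collisions between real and synthetic pairs.
    """
    dst = Path(dst_dir)
    for sub in ("images", "masks"):
        (dst / sub).mkdir(parents=True, exist_ok=True)
    for src in map(Path, src_dirs):
        for sub in ("images", "masks"):
            for f in sorted((src / sub).glob("*.png")):
                shutil.copy(f, dst / sub / f"{src.name}_{f.name}")


# Example (paths are placeholders):
# merge_pairs(["TrainDataset", "ArSDM_samples"], "MergedTrainDataset")
```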
## Acknowledgement

This project is built upon several open-source codebases, including LDM, guided-diffusion, and SDM. We extend our gratitude to the authors of these codebases for their invaluable contributions to the research community.
## Citations

If you find ArSDM useful for your research, please consider citing our paper:

```bibtex
@inproceedings{du2023arsdm,
  title={ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models},
  author={Du, Yuhao and Jiang, Yuncheng and Tan, Shuangyi and Wu, Xusheng and Dou, Qi and Li, Zhen and Li, Guanbin and Wan, Xiang},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={339--349},
  year={2023},
  organization={Springer}
}
```