
About the generation of synthetic laparoscopic images using diffusion-based models



This repository contains the code base for our research. Please follow the guide below:

Clone Repository

git clone https://github.com/SimeonAllmendinger/SyntheticImageGeneration.git
cd SyntheticImageGeneration

Virtual Environment

To set up a virtual environment, follow these steps:

  1. Create a virtual environment with Python version 3.9:
virtualenv venv -p $(which python3.9)
  2. Activate the virtual environment:
source venv/bin/activate
  3. Install the required packages:
pip install --no-cache-dir -r requirements.txt

Download Model Weights

To later test the generation of laparoscopic images (e.g., with the Elucidated Imagen model), first download the model weights:

cd src/assets/
gdown --folder https://drive.google.com/drive/folders/1np4BON_jbQ1-15nVdgMCP1VKSbKS3h2M
gdown --folder https://drive.google.com/drive/folders/1BNdUmmqN18K4_lH0BMk0bwRkiy8Sv6D-
gdown --folder https://drive.google.com/drive/folders/1Y0yQmP3THRzP8UFlAyMFHYUymTUu7ZUu
cd ../../

Testing

To test the generation of laparoscopic images with the pre-trained Elucidated Imagen model, please do the following:

python3 src/components/test.py --model=ElucidatedImagen --text='grasper grasp gallbladder in callot triangle dissection' --cond_scale=3

You can apply the Imagen or Elucidated Imagen model with various conditioning scales and a suitable text prompt of your choice. Feel free to experiment! (Sampling with the Elucidated Imagen model also works well on a machine without a GPU.)

The hyperparameter configurations of the diffusion-based models are contained in their respective config files (Model Config Folder). Their weights can be found in the following table:

| Model             | Training Dataset        | Link                       |
|-------------------|-------------------------|----------------------------|
| Dall-e2 Prior     | CholecT45               | Dalle2_Prior_T45           |
| Dall-e2 Decoder   | CholecT45               | Dalle2_Decoder_T45         |
| Imagen            | CholecT45               | Imagen_T45                 |
| Imagen            | CholecT45 + CholecSeg8k | Imagen_T45_Seg8k           |
| Elucidated Imagen | CholecT45               | ElucidatedImagen_T45       |
| Elucidated Imagen | CholecT45 + CholecSeg8k | ElucidatedImagen_T45_Seg8k |

Results

Before running the code for training, tuning and extensive testing purposes, please create a directory to store the results:

mkdir results
cd results
mkdir rendevouz
mkdir testing
mkdir training
mkdir TSNE
mkdir tuning
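Equivalently, the directory tree above can be created in a single command using brace expansion (the `rendevouz` spelling matches the commands above):

```shell
# Create the results directory and all five subdirectories at once.
# "rendevouz" is spelled as in the step-by-step commands above.
mkdir -p results/{rendevouz,testing,training,TSNE,tuning}
```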

Data

git LFS

Install git LFS with Homebrew (https://brew.sh/index_de):

brew install git-lfs
git lfs install
git lfs track "*.pt"
git add .gitattributes

Download

To download the required datasets (CholecT45, CholecSeg8k, CholecT50, Cholec80), follow these steps:

  1. Create a directory to store the data:
cd
cd SyntheticImageGeneration
mkdir data
cd data
  2. Download the datasets into this directory after successful registration:

Preparation

To enable the dashboards, please copy your neptune.ai and wandb.ai configs into the corresponding .yaml files:

cd
cd SyntheticImageGeneration/configs/visualization/
touch config_neptune.yaml
touch config_wandb.yaml
  1. Neptune.ai (https://neptune.ai): Insert your access configs in the file config_neptune.yaml:
project: "your-project-name" 
api_token: "your-api-token"
  2. Weights&Biases (https://wandb.ai): Insert your access configs in the file config_wandb.yaml:
project: "your-project-name" 
api_key: "your-api-key"
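The two steps above can be sketched as one shell snippet that writes both files with the placeholder values from the templates (replace them with your actual credentials before use):

```shell
# Create both dashboard config files with placeholder credentials.
# Replace the values with your own project names and tokens/keys.
mkdir -p configs/visualization
cat > configs/visualization/config_neptune.yaml <<'EOF'
project: "your-project-name"
api_token: "your-api-token"
EOF
cat > configs/visualization/config_wandb.yaml <<'EOF'
project: "your-project-name"
api_key: "your-api-key"
EOF
```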

To prepare the data for the experiments, run the following script:

cd SyntheticImageGeneration
./scripts/run_data_preparation.sh

Now you are prepared to explore the code base to its full extent!

Rendezvous (GitHub)

In the following, we provide trained Rendezvous model weights from the 3-fold cross-validation for various proportions of generated samples:

| Model   | 2% samples | 5% samples | 10% samples | 20% samples | 25% samples |
|---------|------------|------------|-------------|-------------|-------------|
| I5-RDV  | Weights    | Weights    | Weights     | Weights     | Weights     |
| EI5-RDV | Weights    | Weights    | Weights     | Weights     | Weights     |

Acknowledgements

We acknowledge support by the state of Baden-Württemberg through bwHPC.