New: Please check out SFDA, the repository for our Source-Free Domain Adaptation method.
Constrained Domain Adaptation
We introduce Constrained Domain Adaptation for Image Segmentation.
Mathilde Bateson, Hoel Kervadec, Jose Dolz, Hervé Lombaert, Ismail Ben Ayed @ETS Montréal
Please cite our paper if you find it useful for your research.
@ARTICLE{BatesonCDA,
author={Bateson, M. and Dolz, J. and Kervadec, H. and Lombaert, H. and Ayed, I. Ben},
journal={IEEE Transactions on Medical Imaging},
title={Constrained Domain Adaptation for Image Segmentation},
year={2021},
volume={40},
number={7},
pages={1875-1887}}
Example Results
Requirements
Non-exhaustive list:
- Python 3.6+
- PyTorch 1.0
- nibabel
- SciPy
- NumPy
- Matplotlib
- scikit-image
- zsh
Data scheme
datasets
For instance:
data/
    mr/
        train/
            IMG/
                slice10_0.nii
                ...
            GT/
                slice10_0.nii
                ...
            ...
        val/
            IMG/
                slice100_0.nii
                ...
            GT/
                slice100_0.nii
                ...
            ...
    ct/
        train/
            IMG/
                ctslice1_0.nii
                ...
            GT/
                ctslice1_0.nii
                ...
            ...
        val/
            IMG/
                ctslice11_0.nii
                ...
            GT/
                ctslice11_0.nii
                ...
            ...
The network takes png or nii files as input. The GT folder contains grayscale images of the ground truth, where the gray level is the class index (0, 1, ..., K).
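For concreteness, here is a minimal loading sketch assuming the scheme above; the file paths come from the example tree, and the number of classes K is an assumption to adjust per dataset:

```python
# Minimal sketch: load one slice and its ground truth with nibabel and
# check that the GT encodes class indices. Paths follow the scheme above;
# K (number of foreground classes) is an assumption.
import nibabel as nib
import numpy as np

img = nib.load("data/mr/train/IMG/slice10_0.nii").get_fdata()
gt = nib.load("data/mr/train/GT/slice10_0.nii").get_fdata().astype(np.uint8)

K = 1  # number of foreground classes (assumed)
assert img.shape == gt.shape
assert set(np.unique(gt)) <= set(range(K + 1)), "GT must hold labels in 0..K"
print(img.shape, np.unique(gt))
```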
Class-ratio (sizes) prior
The class-ratio prior is estimated for each slice in the target-domain training and validation sets. It is estimated once, before the start of the adaptation phase, and saved in a csv file. In our implementation, it is estimated through an auxiliary network, but it can just as easily be estimated from anatomical knowledge. We provide these simple estimations in the sizes folder.
Scheme
sizes/
    whs.csv
    ivd.csv
The size csv file should be organized as follows:
| val_ids | dumbpredwtags |
| --- | --- |
| ctslice00_0.nii | [Estimated_Size_class0, Estimated_Size_class1, ..., Estimated_Size_classk] |
Sample from sizes/whs.csv:

| val_ids | val_gt_size | dumbpredwtags |
| --- | --- | --- |
| ctslice00_0.nii | [147398.0, 827.0] | [140225, 6905] |
| ctslice00_1.nii | [147080.0, 1145.0] | [140225, 6905] |
| ctslice00_14.nii | [148225.0, 0.0] | [148225, 0] |
NB 1: there should be no overlap between the names of the slices in the training and validation sets (Case00_0.nii, ...).
NB 2: in our implementation, the csv file contains the size priors in pixels, and the Quadratic Loss divides the size in pixels by w*h, the width and height of the slice, to obtain the class-ratio prior.
NB 3: Estimated_Size_class0 + Estimated_Size_class1 + ... + Estimated_Size_classk = w*h
NB 4: the true val_gt_size is unknown, so it is not directly used in our proposed CDA. However, in our framework an image-level annotation is available for the target training dataset: the "tag" of each class k, indicating the presence or absence of class k in the slice. Therefore, Estimated_Size_classk = 0 if val_gt_size_k = 0, and Estimated_Size_classk > 0 if val_gt_size_k > 0.
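To make NB 2-4 concrete, here is a small sketch that turns the pixel-size priors into class-ratio priors. The column names come from sizes/whs.csv above; w = h = 385 is an inference from the sample rows (385 * 385 = 148225), not a value from the repository:

```python
# Illustrative sketch: read the size priors and turn them into class ratios.
# Column names follow sizes/whs.csv above; w = h = 385 is inferred from the
# sample rows (385 * 385 = 148225), adjust to your own slices.
from ast import literal_eval
import pandas as pd

df = pd.read_csv("sizes/whs.csv")
sizes = df["dumbpredwtags"].apply(literal_eval)   # e.g. [140225, 6905]

w = h = 385
ratios = sizes.apply(lambda s: [v / (w * h) for v in s])  # class-ratio priors
print(ratios.head())
```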
NB 5: To get an idea of the capacity of the CDA model in the ideal case where the ground-truth class-ratio prior is known, it is useful to run the upper-bound model CDA_TrueSize, choosing the column "val_gt_size" instead of "dumbpredwtags". This can be changed in the makefile:
results/whs/CDA_TrueSize: OPT = --target_losses="[('EntKLProp', {'inv_consloss':True,'lamb_se':1,'lamb_consprior':1,'ivd':True,'weights_se':[0.1,0.9],'idc_c': [1],'curi':True,'power': 1},'PredictionBounds', \
{'margin':0,'dir':'high','idc':[0,1],'predcol':'val_gt_size','power': 1, 'mode':'percentage','sizefile':'sizes/whs.csv'},'norm_soft_size',1)]" \
--val_target_folders="$(TT_DATA)" --l_rate 0.000001 --n_epoch 100 --lr_decay 0.9 --batch_size 10 --target_folders="$(TT_DATA)" --model_weights="$(M_WEIGHTS_ul)" \
NB 6: If you change the names of the columns (val_ids, dumbpredwtags) in the size file, you should change them in the bounds.py file as well as in the ivd.make makefile.
results
results/
    whs/
        fs/
            best_epoch_3d/
                val/
                    ctslice11_0.png
                    ...
            iter000/
                val/
                ...
        cda/
            ...
        params.txt # saves all the argparse parameters of the model
        best_3d.pkl # best model saved
        last.pkl # last epoch
        IMG_target_metrics.csv # metrics over time (see the plotting sketch below)
        3dbestepoch.txt # number and 3D Dice of the best epoch
        ...
    ivd/
        ...
archives/
    $(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-cda.tar.gz
    $(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-fs.tar.gz
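A small plotting sketch for IMG_target_metrics.csv; the folder path and the metric column name below are assumptions, so inspect the CSV header first:

```python
# Illustrative sketch: plot a metric over epochs from a results folder.
# The path and the metric column are assumptions; check df.columns first.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results/whs/IMG_target_metrics.csv")
print(df.columns)                 # check which metrics are available

metric = df.columns[-1]           # e.g. a Dice column (assumed)
plt.plot(df.index, df[metric])    # row index, assumed one row per epoch
plt.xlabel("epoch")
plt.ylabel(metric)
plt.savefig("metrics_over_time.png")
```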
Interesting bits
The losses are defined in the losses.py file.
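For orientation, here is a simplified sketch of the kind of target loss used here: an entropy term plus a KL term matching the predicted class ratio to the prior. This only illustrates the idea, it is not the repository's exact EntKLProp implementation:

```python
# Illustrative sketch of an entropy + class-ratio-KL target loss.
# Not the repository's EntKLProp code, only the idea behind it.
import torch

def ent_kl_prop(probs: torch.Tensor, prior: torch.Tensor,
                lamb_se: float = 1.0, lamb_consprior: float = 1.0,
                eps: float = 1e-10) -> torch.Tensor:
    """probs: (B, K, H, W) softmax outputs; prior: (B, K) class-ratio priors."""
    # Shannon entropy of the pixel-wise predictions, averaged over pixels
    entropy = -(probs * (probs + eps).log()).sum(dim=1).mean()
    # Predicted class ratio: mean softmax probability per class over the slice
    pred_ratio = probs.mean(dim=(2, 3))                          # (B, K)
    # KL(prior || predicted ratio), averaged over the batch
    kl = (prior * ((prior + eps) / (pred_ratio + eps)).log()).sum(dim=1).mean()
    return lamb_se * entropy + lamb_consprior * kl
```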
Running our main experiment
Once you have downloaded the data and organized it as in the scheme above, run the main experiment as follows:
make -f whs.make
This will first run the source-training model, which will be saved in results/cesource, and then the CDA model, which will be saved in results/cda.
Cool tricks
Remove all assertions from the code to speed it up. Usually done after making sure it does not crash for one complete epoch:
make -f whs.make <anything really> CFLAGS=-O
Use a specific python executable:
make -f whs.make <super target> CC=/path/to/the/executable
Train for only 5 epochs, with a dummy network, and only 10 images per data loader. Useful for debugging:
make -f whs.make <anything really> NET=Dimwit EPC=5 DEBUG=--debug
Rebuild everything even if it already exists:
make -f whs.make <a> -B
Only print the commands that will be run (useful to check recipes are properly defined):
make -f whs.make <a> -n
Related Implementations and Datasets
- Mathilde Bateson, Hoel Kervadec, Jose Dolz, Hervé Lombaert, Ismail Ben Ayed. Source-Relaxed Domain Adaptation for Image Segmentation. In MICCAI 2020. [paper] [implementation]
- Hoel Kervadec, Jose Dolz, Meng Tang, Eric Granger, Yuri Boykov, Ismail Ben Ayed. Constrained-CNN losses for weakly supervised segmentation. In Medical Image Analysis, 2019. [paper] [code]
- Heart Dataset and details: We used the preprocessed dataset from Dou et al.: https://github.com/carrenD/Medical-Cross-Modality-Domain-Adaptation. The data is in TFRecord format; it should be converted to nii or png before running the makefile. We used a randomized sequence of augmentation steps (contrast shifts, flips) as a data-augmentation strategy in the source domain. We did not use any augmentation for the target domain.
- Spine Dataset and details: https://ivdm3seg.weebly.com/. From the original coronal view, we transposed the slices to the transverse view in our experiments. We set the water modality (Wat) as the source and the in-phase (IP) modality as the target domain. From this dataset, 13 scans are used for training, and the remaining 3 scans for validation.
Download the data and put it in the data/sagittal folder, then rotate and save it into the data/ivd_transverse folder, both for the Wat and the IP modalities (a simplified reorientation sketch follows after this list):
python rotate.py --base_folder='./data/sagittal/IP/' --folders=['train','val'] --save_folder='./data/ivd_transverse/IP/' --rot='rot' --grp_regex="Subj_\\d+_"
- New: Prostate Dataset and details: https://raw.githubusercontent.com/liuquande/SAML/. The SA site dataset was used as the target domain, and the SB site as the source domain. For both sites, we used 20 scans for training and the remaining 10 scans for validation.
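As mentioned in the spine-dataset item above, here is a simplified reorientation sketch; rotate.py in the repository does the real work, and the file name and axis permutation below are assumptions for illustration only:

```python
# Illustrative sketch only: rotate.py in the repository does the real work.
# The file name and the axis permutation are assumptions.
import nibabel as nib
import numpy as np

vol = nib.load("data/sagittal/IP/train/Subj_1.nii")    # hypothetical file name
arr = np.transpose(vol.get_fdata(), (2, 0, 1))         # assumed axis order
# NB: a full implementation would also permute the affine accordingly.
nib.save(nib.Nifti1Image(arr, vol.affine),
         "data/ivd_transverse/IP/train/Subj_1.nii")
```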
Note
The model and code are available for non-commercial research purposes only.