# Optimization with JDTLoss and Evaluation with Fine-grained mIoUs for Semantic Segmentation

## Models
<details>
<summary>Methods</summary>
</details>

<details>
<summary>Backbones</summary>

- ResNet18/34/50/101/152
- ConvNeXt-B
- Xception65
- EfficientNet-B0
- MobileNetV2
- MiTB0/B1/B2/B3/B4
- MobileViTV2
- Backbones are provided by timm.

</details>
## Datasets
<details>
<summary>Urban</summary>
</details>

<details>
<summary>"Thing" & "Stuff"</summary>
</details>

<details>
<summary>Aerial</summary>
</details>

<details>
<summary>Medical</summary>
</details>

## Metrics
<details>
<summary>Accuracy</summary>

- $\text{Acc}$
- $\text{mAcc}^\text{D, I, C}$
- $\text{mIoU}^\text{D, I, C}$
- $\text{mDice}^\text{D, I, C}$
- Worst-case metrics
- $\text{ECE}^\text{D, I}$
- $\text{SCE}^\text{D, I}$
</details>

## Prerequisites
### Requirements
- Hardware: a GPU with at least 12 GB of memory
- Software:
  - timm
  - MMSegmentation (only for preparing datasets)
### Prepare Datasets
## Usage
- To use JDTLoss in your codebase:

```python
from losses.jdt_loss import JDTLoss

# Jaccard loss (default): JDTLoss()
# Dice loss: JDTLoss(alpha=0.5, beta=0.5)
criterion = JDTLoss()

for image, label in data_loader:
    logits = model(image)
    loss = criterion(logits, label)
```
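For intuition, the `alpha`/`beta` parameterization can be read as a Tversky-style index computed from soft statistics. The sketch below is a dependency-free illustration of that idea, not the exact implementation in `losses/jdt_loss.py`:

```python
def soft_tversky_loss(probs, labels, alpha=1.0, beta=1.0):
    """Illustrative soft Tversky loss over per-pixel foreground
    probabilities and (possibly soft) labels.

    alpha = beta = 1.0 recovers a soft Jaccard loss;
    alpha = beta = 0.5 recovers a soft Dice loss.
    """
    diff = sum(abs(p - y) for p, y in zip(probs, labels))
    # Soft true/false positives/negatives derived from L1 statistics.
    tp = (sum(probs) + sum(labels) - diff) / 2
    fp = (sum(probs) - sum(labels) + diff) / 2
    fn = (sum(labels) - sum(probs) + diff) / 2
    return 1.0 - tp / (tp + alpha * fp + beta * fn)

# One false-positive pixel out of two:
# Jaccard = 1/2 -> loss 0.5; Dice = 2/3 -> loss 1/3.
print(soft_tversky_loss([1, 1], [1, 0]))            # → 0.5
print(soft_tversky_loss([1, 1], [1, 0], 0.5, 0.5))
```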
- To use fine-grained mIoUs in your codebase:

```python
from metrics.metric_group import MetricGroup

metric_group = MetricGroup(num_classes=..., ignore_index=...)

for image, label in data_loader:
    logits = model(image)
    prob = logits.log_softmax(dim=1).exp()
    # Both `prob` and `label` need to be on the CPU.
    metric_group.add(prob, label)

results = metric_group.value()
```
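To see what the D (dataset-wise) and I (image-wise) superscripts distinguish, here is a hypothetical plain-Python sketch; the helper names are illustrative and not the repo's API:

```python
def per_class_counts(pred, gt, num_classes):
    """Per-class [intersection, union] pixel counts for one image."""
    counts = [[0, 0] for _ in range(num_classes)]
    for p, g in zip(pred, gt):
        if p == g:
            counts[p][0] += 1
        for c in {p, g}:  # a set, so p == g contributes to the union once
            counts[c][1] += 1
    return counts

def miou_dataset(preds, gts, num_classes):
    """mIoU^D: pool counts over the whole dataset, then average per-class IoUs."""
    total = [[0, 0] for _ in range(num_classes)]
    for pred, gt in zip(preds, gts):
        for c, (inter, union) in enumerate(per_class_counts(pred, gt, num_classes)):
            total[c][0] += inter
            total[c][1] += union
    ious = [inter / union for inter, union in total if union > 0]
    return sum(ious) / len(ious)

def miou_image(preds, gts, num_classes):
    """mIoU^I: compute a mIoU per image, then average over images."""
    scores = []
    for pred, gt in zip(preds, gts):
        ious = [inter / union
                for inter, union in per_class_counts(pred, gt, num_classes)
                if union > 0]
        scores.append(sum(ious) / len(ious))
    return sum(scores) / len(scores)

# A perfectly segmented image plus a poorly segmented one give
# different scores under the two aggregation schemes.
preds = [[0, 0, 1, 1], [0, 0, 0, 0]]
gts = [[0, 0, 1, 1], [0, 0, 1, 1]]
print(miou_dataset(preds, gts, 2))  # → 0.5833333333333333
print(miou_image(preds, gts, 2))    # → 0.625
```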
- Training with hard labels:

```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3plus_resnet101d" \
    --data_yaml "cityscapes" \
    --label_yaml "hard" \
    --loss_yaml "jaccard_ic_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
- Training with soft labels:
<details>
<summary>Label Smoothing</summary>

- You may need to tune $\epsilon$ accordingly.

```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3_resnet50d" \
    --data_yaml "cityscapes" \
    --label_yaml "ls" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
</details>

<details>
<summary>Knowledge Distillation</summary>

- Step 1: Train a teacher with label smoothing. You are encouraged to repeat the training script at least three times and choose the model with the best performance as the teacher.
- Step 2: Train a student:

```shell
python main.py \
    --teacher_checkpoint "path/to/teacher_checkpoint" \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3_resnet18d" \
    --teacher_model_yaml "deeplabv3_resnet50d" \
    --data_yaml "cityscapes" \
    --label_yaml "kd" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
</details>

<details>
<summary>Multiple Annotators</summary>

```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "unet_resnet50d" \
    --data_yaml "qubiq_brain_growth_fold0_task0" \
    --label_yaml "mr" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "150_epochs" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
</details>
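As a point of reference for the label-smoothing setting, smoothing turns a hard class id into a soft distribution. This is the textbook formulation, not necessarily the repo's exact code:

```python
def smooth_labels(hard_label, num_classes, epsilon=0.1):
    """Smoothed one-hot vector: the true class gets 1 - epsilon plus its
    uniform share, and the remaining epsilon mass is spread over all classes."""
    base = epsilon / num_classes
    probs = [base] * num_classes
    probs[hard_label] += 1.0 - epsilon
    return probs

# Class 2 of 4 with epsilon = 0.1:
print(smooth_labels(2, 4, 0.1))  # → [0.025, 0.025, 0.925, 0.025]
```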
## FAQs
## Acknowledgements

We express our gratitude to the creators and maintainers of the following projects: pytorch-image-models, MMSegmentation, segmentation_models.pytorch, and structure_knowledge_distillation.
## Citations

```bibtex
@InProceedings{Wang2023Revisiting,
    title     = {Revisiting Evaluation Metrics for Semantic Segmentation: Optimization and Evaluation of Fine-grained Intersection over Union},
    author    = {Wang, Zifu and Berman, Maxim and Rannen-Triki, Amal and Torr, Philip H.S. and Tuia, Devis and Tuytelaars, Tinne and Van Gool, Luc and Yu, Jiaqian and Blaschko, Matthew B.},
    booktitle = {NeurIPS},
    year      = {2023}
}

@InProceedings{Wang2023Jaccard,
    title     = {Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels},
    author    = {Wang, Zifu and Ning, Xuefei and Blaschko, Matthew B.},
    booktitle = {NeurIPS},
    year      = {2023}
}

@InProceedings{Wang2023Dice,
    title     = {Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels},
    author    = {Wang, Zifu and Popordanoska, Teodora and Bertels, Jeroen and Lemmens, Robin and Blaschko, Matthew B.},
    booktitle = {MICCAI},
    year      = {2023}
}
```