MEDIAR: Harmony of Data-Centric and Model-Centric for Multi-Modality Microscopy

This repository provides an official implementation of MEDIAR: Harmony of Data-Centric and Model-Centric for Multi-Modality Microscopy, which won 1st place in the NeurIPS 2022 Cell Segmentation Challenge.

To access and try MEDIAR directly, please see the links below.

1. MEDIAR Overview

<img src="./image/mediar_framework.png" width="1200"/>

MEDIAR is a framework for efficient cell instance segmentation of multi-modality microscopy images. The figure above illustrates an overview of our approach. MEDIAR harmonizes data-centric and model-centric approaches as its learning and inference strategies, achieving a 0.9067 mean F1-score on the validation datasets. We provide a brief description of the methods combined in MEDIAR below. Please refer to our paper for more details.

2. Methods

Data-Centric

Model-Centric

<img src="./image/mediar_model.PNG" width="1200"/>

3. Experiments

Dataset

Testing steps

Preprocessing & Augmentations

| Strategy | Type | Probability |
| --- | --- | --- |
| Clip | Pre-processing | – |
| Normalization | Pre-processing | – |
| Scale Intensity | Pre-processing | – |
| Zoom | Spatial Augmentation | 0.5 |
| Spatial Crop | Spatial Augmentation | 1.0 |
| Axis Flip | Spatial Augmentation | 0.5 |
| Rotation | Spatial Augmentation | 0.5 |
| Cell-Aware Intensity | Intensity Augmentation | 0.25 |
| Gaussian Noise | Intensity Augmentation | 0.25 |
| Contrast Adjustment | Intensity Augmentation | 0.25 |
| Gaussian Smoothing | Intensity Augmentation | 0.25 |
| Histogram Shift | Intensity Augmentation | 0.25 |
| Gaussian Sharpening | Intensity Augmentation | 0.25 |
| Boundary Exclusion | Others | – |
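As an illustration, the table above maps naturally onto MONAI dictionary transforms (MONAI v0.9.0 is the listed dependency). The sketch below is an approximation under stated assumptions, not the repository's actual pipeline: the transform parameters and the `img`/`label` keys are illustrative, and the Cell-Aware Intensity augmentation is a MEDIAR-specific transform with no off-the-shelf MONAI equivalent, so it is omitted.

```python
# Illustrative MONAI pipeline mirroring the augmentation table above.
# Parameters and dictionary keys are assumptions, not the repository's values.
from monai.transforms import (
    Compose, ScaleIntensityd, RandZoomd, RandSpatialCropd, RandAxisFlipd,
    RandRotate90d, RandGaussianNoised, RandAdjustContrastd,
    RandGaussianSmoothd, RandHistogramShiftd, RandGaussianSharpend,
)

train_transforms = Compose([
    # Pre-processing: rescale intensities to [0, 1].
    ScaleIntensityd(keys=["img"]),
    # Spatial augmentations (applied to image and label together;
    # labels use nearest-neighbor interpolation to stay integer-valued).
    RandZoomd(keys=["img", "label"], prob=0.5, min_zoom=0.5, max_zoom=1.5,
              mode=("area", "nearest")),
    RandSpatialCropd(keys=["img", "label"], roi_size=(512, 512),
                     random_size=False),  # assumes inputs >= 512x512
    RandAxisFlipd(keys=["img", "label"], prob=0.5),
    RandRotate90d(keys=["img", "label"], prob=0.5, spatial_axes=(0, 1)),
    # Intensity augmentations (image only). The Cell-Aware Intensity
    # transform is MEDIAR-specific and omitted here.
    RandGaussianNoised(keys=["img"], prob=0.25),
    RandAdjustContrastd(keys=["img"], prob=0.25),
    RandGaussianSmoothd(keys=["img"], prob=0.25),
    RandHistogramShiftd(keys=["img"], prob=0.25),
    RandGaussianSharpend(keys=["img"], prob=0.25),
])
```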
| Learning Setups | Pretraining | Fine-tuning |
| --- | --- | --- |
| Initialization (Encoder) | ImageNet-1k pretrained | from Pretraining |
| Initialization (Decoder, Head) | He normal initialization | from Pretraining |
| Batch size | 9 | 9 |
| Total epochs | 80 (60) | 200 (25) |
| Optimizer | AdamW | AdamW |
| Initial learning rate (lr) | 5e-5 | 2e-5 |
| Lr decay schedule | Cosine scheduler (100 interval) | Cosine scheduler (100 interval) |
| Loss function | MSE, BCE | MSE, BCE |
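The loss row reflects MEDIAR's Cellpose-style output head, which predicts gradient flow maps (regressed with MSE) and a cell-probability map (classified with BCE); see the paper for details. A minimal sketch of this setup, with placeholder names that are not taken from this repository:

```python
# Sketch of the training setup in the table above: AdamW + cosine schedule,
# MSE on predicted flow maps, BCE on the cell-probability map.
import torch

model = torch.nn.Conv2d(3, 3, kernel_size=1)  # stand-in for the MEDIAR network

# Pretraining value from the table; fine-tuning uses lr=2e-5.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# "100 interval" interpreted here as the cosine period (an assumption).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

mse = torch.nn.MSELoss()            # regression on the flow maps
bce = torch.nn.BCEWithLogitsLoss()  # classification on the cell-probability map

def criterion(flow_pred, flow_gt, cellprob_logit, cellprob_gt):
    # Combined objective: flow regression + cell-probability classification.
    return mse(flow_pred, flow_gt) + bce(cellprob_logit, cellprob_gt)
```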

4. Results

Validation Dataset

Test Dataset

(Result figures: F1 score and running time of the osilab submission on the test dataset.)

5. Reproducing

Our Environment

| Computing Infrastructure | |
| --- | --- |
| System | Ubuntu 18.04.5 LTS |
| CPU | AMD EPYC 7543 32-Core Processor @ 2.26 GHz |
| RAM | 500 GB; 3.125 MT/s |
| GPU (number and type) | 2× NVIDIA A5000 (24 GB) |
| CUDA version | 11.7 |
| Programming language | Python 3.9 |
| Deep learning framework | PyTorch (v1.12, with torchvision v0.13.1) |
| Code dependencies | MONAI (v0.9.0), Segmentation Models (v0.3.0) |
| Specific dependencies | None |

To install requirements:

pip install -r requirements.txt
wandb off

Dataset

  Root
  ├── Datasets
  │   ├── images (images can have various extensions: .tif, .tiff, .png, .bmp ...)
  │   │    ├── cell_00001.png
  │   │    ├── cell_00002.tif
  │   │    ├── cell_00003.xxx
  │   │    ├── ...  
  │   └── labels (labels must have .tiff extension.)
  │   │    ├── cell_00001_label.tiff 
  │   │    ├── cell_00002_label.tiff
  │   │    ├── cell_00003_label.tiff
  │   │    ├── ...
  └── ...
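The layout implies a pairing convention: each image `cell_XXXXX.*` is matched to a label `cell_XXXXX_label.tiff`. A small illustrative check of that convention (this helper is not part of the repository):

```python
# Hypothetical helper: pair each image with its `<stem>_label.tiff` mask
# under the layout shown above. Illustrative only, not repository code.
from pathlib import Path

def pair_images_and_labels(root):
    images_dir = Path(root) / "Datasets" / "images"
    labels_dir = Path(root) / "Datasets" / "labels"
    pairs = []
    for img in sorted(images_dir.iterdir()):
        label = labels_dir / f"{img.stem}_label.tiff"
        if label.exists():
            pairs.append((img, label))
        else:
            print(f"Missing label for {img.name}")
    return pairs

pairs = pair_images_and_labels("./Root")
print(f"Found {len(pairs)} image-label pairs.")
```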

Before executing the code, run the following command to generate the path-mapping JSON file:

python ./generate_mapping.py --root=<path_to_data>

Training

To train the model(s) in the paper, run the following command:

python ./main.py --config_path=<path_to_config>

Configuration files are in ./config/*. We provide pretraining, fine-tuning, and prediction configs; see ./config/mediar_example.json for the available configuration options. We also implemented the official challenge baseline in our framework; to run it, use ./config/baseline.json.
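For orientation only, the sketch below shows the kind of fields such a config might contain, written as a Python dict. The keys and values here are hypothetical (aside from the batch size, epochs, and learning rates taken from the learning-setup table above); consult ./config/mediar_example.json for the actual schema.

```python
# Hypothetical config skeleton; the real JSON schema in
# ./config/mediar_example.json may differ.
example_config = {
    "data": {
        "root": "<path_to_data>",                   # dataset root shown above
        "mapping_file": "<path_to_mapping_json>",   # from generate_mapping.py
    },
    "train": {
        "batch_size": 9,
        "total_epochs": 200,   # fine-tuning; 80 for pretraining
        "optimizer": "AdamW",
        "lr": 2e-5,            # 5e-5 for pretraining
        "scheduler": "cosine",
    },
}
```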

Inference

To generate predictions for the test cases, run the following command:

python predict.py --config_path=<path_to_config>
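Microscopy images at test time can be far larger than the training crops, so predictions are typically produced tile-by-tile. The sketch below shows how tiled prediction can be done with MONAI's sliding-window inferer; the model, tile size, and overlap are stand-ins, and the repository's actual inference (including any test-time augmentation and post-processing) is driven by predict.py and its configs.

```python
# Sketch of tiled prediction with MONAI's sliding-window inferer. The conv
# layer stands in for the trained MEDIAR network; roi_size and overlap are
# illustrative, not the repository's settings.
import torch
from monai.inferers import sliding_window_inference

model = torch.nn.Conv2d(3, 3, kernel_size=1)  # stand-in for the trained model
model.eval()

image = torch.rand(1, 3, 2048, 2048)  # one large (B, C, H, W) microscopy image
with torch.no_grad():
    logits = sliding_window_inference(
        inputs=image,
        roi_size=(512, 512),  # tile size
        sw_batch_size=4,      # tiles per forward pass
        predictor=model,
        overlap=0.5,          # blend overlapping tiles
    )
print(logits.shape)  # torch.Size([1, 3, 2048, 2048])
```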

Evaluation

If you have ground-truth labels, run the following command for evaluation:

python ./evaluate.py --pred_path=<path_to_prediction_results> --gt_path=<path_to_ground_truth_labels>
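For reference, the challenge metric is an instance-level F1: predicted and ground-truth cells are matched by IoU, and matches at or above a threshold (0.5 in the challenge) count as true positives. A simplified sketch of that computation using greedy matching (the official evaluation script may differ in details):

```python
# Simplified instance-level F1 at a fixed IoU threshold. Instances are
# encoded as integer-labeled masks (0 = background). Greedy matching.
import numpy as np

def instance_f1(pred, gt, iou_thr=0.5):
    pred_ids = [i for i in np.unique(pred) if i != 0]
    gt_ids = [i for i in np.unique(gt) if i != 0]
    matched_gt, tp = set(), 0
    for p in pred_ids:
        pmask = pred == p
        best_iou, best_g = 0.0, None
        for g in gt_ids:
            if g in matched_gt:
                continue
            gmask = gt == g
            inter = np.logical_and(pmask, gmask).sum()
            union = np.logical_or(pmask, gmask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_g = iou, g
        if best_iou >= iou_thr:
            tp += 1
            matched_gt.add(best_g)
    fp = len(pred_ids) - tp   # unmatched predictions
    fn = len(gt_ids) - tp     # unmatched ground-truth cells
    # Both masks empty counts as a perfect match.
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

pred = np.array([[1, 1, 0], [0, 2, 2], [0, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 2, 2], [0, 0, 3]])
print(instance_f1(pred, gt))  # 0.8 (two matches, one missed cell)
```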

Note that the configuration files for predict.py are slightly different; please refer to the config files in ./config/step3_prediction/*.

Trained Models

You can download the pretrained and fine-tuned MEDIAR models here:

Citation of this Work

@article{lee2022mediar,
  title={Mediar: Harmony of data-centric and model-centric for multi-modality microscopy},
  author={Lee, Gihun and Kim, SangMook and Kim, Joonkee and Yun, Se-Young},
  journal={arXiv preprint arXiv:2212.03465},
  year={2022}
}