autoprom-sam

<div align="center"> <img src="./resources/net.png" alt="autoprom-sam network architecture" /> </div>

This repository contains the code for the paper:

'Pathological Primitive Segmentation Based on Visual Foundation Model with Zero-Shot Mask Generation'

Authors:

Digital Slide Archive Plugin

autoprom-sam can also be converted into a Digital Slide Archive plugin.

Below is a demo of the modified version of DSA, called AIM UI.

<div align="center"> <img src="./resources/aimui.gif" alt="AIM UI Demo" /> </div>

Folder Structure

The project structure is shown below, along with some of the important files.

📁 .
├── 📁 autoprom_sam
│   ├── 📁 configs
│   ├── 📁 dataloaders
│   ├── 📁 datasets
│   ├── 📁 dataset_utils
│   ├── 📁 model
│   ├── 📁 training
│   └── 📁 utils
├── 📄 LICENSE
├── 📁 notebooks
│   ├── 📄 check_pannuke_dataloader.ipynb
│   └── 📄 inference_on_pannuke.ipynb
├── 📁 runs
├── 📁 sam_weights
│   └── 📄 sam weights path
└── 📄 setup.py

📁 training
├── 📄 trainer.py

📁 model
~
├── 📄 Detmodel.py
~

Environment Setup

To set up the environment, first create a conda environment and install the necessary packages.
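For example (the environment name matches the activation command below; the Python version is an assumption, so adjust it as needed):

```bash
# create the conda environment (Python version is an assumption)
conda create -n autoprom_sam python=3.9 -y
```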

Note: This code currently supports single-GPU training only. It was developed on a local machine with a single GPU, so multi-GPU training and inference have not been tested.

Clone this repository and from the root folder run the following commands to install the package.

conda activate autoprom_sam
python -m pip install -e .

Inference

For inference, we have compiled a Jupyter notebook, which can be found at notebooks/inference_on_pannuke.ipynb.

Example Inference

<div align="center"> <img src="./resources/bbox.png" alt="Bounding Box" width="400" /> <img src="./resources/overlay.png" alt="Overlay" width="400" /> </div>

Dataset Preparation

While any dataset can be adapted to work with autoprom-sam, training the bounding box decoder requires the boxes to be in the format [cx, cy, w, h].
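As a minimal sketch, assuming your annotations are in the common [x_min, y_min, x_max, y_max] format (an assumption for illustration), the conversion to [cx, cy, w, h] could look like this:

```python
import numpy as np

def xyxy_to_cxcywh(boxes: np.ndarray) -> np.ndarray:
    """Convert an N x 4 array of [x_min, y_min, x_max, y_max] boxes to [cx, cy, w, h]."""
    x_min, y_min, x_max, y_max = boxes.T
    w = x_max - x_min
    h = y_max - y_min
    cx = x_min + w / 2.0
    cy = y_min + h / 2.0
    return np.stack([cx, cy, w, h], axis=1)

# example: a box from (10, 20) to (50, 80) becomes center (30, 50) with size (40, 60)
print(xyxy_to_cxcywh(np.array([[10, 20, 50, 80]], dtype=np.float32)))
```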

For reference, you can look at the file here.

If you choose to use a CSV file like ours, then it should have the following columns:

Note: For our purposes, the mask file is obtained from the Image_File path during inference. To use the training scripts, the dataloader must return (id, img, mask, inst_mask, annot), but only img and annot are used during training. So, if you use a custom dataloader, you can fill the unused values with None, for example: (None, img, None, None, annot).
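As a rough sketch (not the repository's actual implementation), a minimal PyTorch dataset satisfying this contract might look like the following; the Annot_File column and the .npy annotation format are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

class CustomBoxDataset(Dataset):
    """Yields (id, img, mask, inst_mask, annot); only img and annot are used for training."""

    def __init__(self, csv_path):
        self.df = pd.read_csv(csv_path)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        # Image_File holds the image path; Annot_File (hypothetical column) points to
        # an N x 4 array of boxes stored as [cx, cy, w, h]
        img = np.array(Image.open(row["Image_File"]).convert("RGB"))
        annot = np.load(row["Annot_File"]).astype(np.float32)
        # the unused fields can simply be filled with None
        return None, img, None, None, annot
```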

Training

To train the network, follow these steps:

  1. Navigate to the root directory by executing the following command in your terminal:

    cd /home/{the directory where the repo was cloned}
    
  2. Once in the root directory, run the training script:

    python autoprom_sam/training/trainer.py
    

You can modify the training hyperparameters in config.py, which provides configurations for both the PanNuke and FTU datasets. Here are some important configuration options:

Note: To train the network, you must first download the SAM-B model checkpoint. Without these weights the encoder will output random values, since the encoder is frozen during training.

References

@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@article{graham2019hover,
  title={Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images},
  author={Graham, Simon and Vu, Quoc Dang and Raza, Shan E Ahmed and Azam, Ayesha and Tsang, Yee Wah and Kwak, Jin Tae and Rajpoot, Nasir},
  journal={Medical Image Analysis},
  pages={101563},
  year={2019},
  publisher={Elsevier}
}