<p align="center"> <h1 align="center"><a href="https://arxiv.org/pdf/2409.14874" target="_blank">Towards Ground-truth-free Evaluation of Any Segmentation in Medical Images*</a></h1> </p> <h4 align="center"> <p> <a href="https://github.com/ahjolsenbics/EvanySeg/blob/main/README.md#framework">Framework</a> | <a href="#citing-us">Citing Us</a> | <a href="#dataset">Dataset</a> | <a href="#getting-started">Getting Started</a> | <a href="#demo">Demo</a> | <a href="https://github.com/ahjolsenbics/EvanySeg">Main Page</a> </p> </h4> <p align="center"> <a href="https://github.com/facebookresearch/segment-anything"> <img alt="Model" src="https://img.shields.io/badge/Model-SAM%20and%20its%20variants-violet.svg"> </a> <a href="https://drive.google.com/drive/folders/1Ngme9APByRTAOOsLGtwzVYzS2Il4jc1n?usp=drive_link"> <img alt="Download EvanySeg checkpoints" src="https://colab.research.google.com/assets/colab-badge.svg"> </a> <a href="https://www.python.org/"> <img alt="Build" src="https://img.shields.io/badge/Made%20with-Python-1f425f.svg?color=purple"> </a> <a href="https://github.com/facebookresearch/segment-anything/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/confident-ai/deepeval.svg?color=turquoise"> </a> </p>

## Framework

EvanySeg is a companion model to SAM and its variants, designed to enhance reliability and trustworthiness when deploying these models on medical images.

<img src="./utils/readme_img/workflow.png">
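
The core idea can be illustrated with a short sketch: a standard regression backbone (ResNet101 or ViT-b, matching the released checkpoints) receives an image crop together with the corresponding predicted-mask crop and outputs a quality score, so no ground-truth mask is needed at evaluation time. The function names and channel layout below are illustrative assumptions, not the repository's exact code.

```python
import torch
from torchvision.models import resnet101

# Illustrative sketch only; the real EvanySeg code may differ in architecture
# details, input packing, and training targets.

def build_scorer() -> torch.nn.Module:
    """ResNet101 backbone with a single-output regression head (assumed design)."""
    model = resnet101(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 1)
    return model

@torch.no_grad()
def score_prediction(model: torch.nn.Module,
                     image_crop: torch.Tensor,  # (H, W) grayscale crop around the object
                     mask_crop: torch.Tensor    # (H, W) binary predicted mask, same crop
                     ) -> float:
    # Assumed input packing: repeat the image and append the mask to form 3 channels.
    x = torch.stack([image_crop, image_crop, mask_crop]).unsqueeze(0).float()
    return model(x).item()  # higher score = better predicted segmentation, no ground truth needed
```

Such a score can then be used to rank SAM predictions or flag low-quality ones for human review.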

## Citing Us

If you find EvanySeg useful in your research, please consider citing our paper.

## Dataset

The EvanySeg model was trained on 2D images accompanied by object-level ground-truth masks. The segmentation predictions used for training were generated by SAM, MedSAM, and SAM-Med2D.
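
For illustration, a prediction for one raw training image could be produced with the official segment-anything API using a box prompt derived from the ground-truth mask; the checkpoint path and prompting strategy below are assumptions, not necessarily the exact procedure used to build the released training data.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Assumed paths and prompting; the actual preprocessing pipeline may differ.
image = np.array(Image.open("datasets/raw/Polyp/train/images/175.png").convert("RGB"))
gt_mask = np.array(Image.open("datasets/raw/Polyp/train/masks/175.png")) > 0

# Box prompt taken from the ground-truth object's bounding box (XYXY format).
ys, xs = np.where(gt_mask)
box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])

sam = sam_model_registry["vit_b"](checkpoint="checkpoints/sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
pred_masks, scores, _ = predictor.predict(box=box, multimask_output=False)
pred_mask = pred_masks[0]  # binary prediction paired with gt_mask for EvanySeg training
```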

The filesystem hierarchy of the dataset is as follows:

πŸ“ EvanySeg
β”œβ”€β”€ πŸ“ checkpoints
β”œβ”€β”€ πŸ“ result
β”œβ”€β”€ πŸ“ datasets
β”‚   β”œβ”€β”€ πŸ“ preprocess
β”‚   β”‚   └── πŸ“ train_sam_Polyp
β”‚   β”‚       β”œβ”€β”€ πŸ“ crop_image
β”‚   β”‚       β”‚       0_SAM_Polyp_train_175.png
β”‚   β”‚       β”œβ”€β”€ πŸ“ crop_mask
β”‚   β”‚       β”‚       0_SAM_Polyp_train_175.png
β”‚   β”‚       └── πŸ“ crop_predict
β”‚   β”‚       β”‚       0_SAM_Polyp_train_175.png
β”‚   └── πŸ“ raw          
β”‚       └── πŸ“ Polyp
β”‚           └── πŸ“ train
β”‚               β”œβ”€β”€ πŸ“ images
β”‚               β”‚       175.png
β”‚               └── πŸ“ masks
β”‚                       175.png
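
A small sketch of how the three crop folders can be paired by filename when loading the preprocessed data (the paths and helper name are illustrative):

```python
from pathlib import Path

# Assumes the layout above: each sample keeps the same filename across
# crop_image/, crop_mask/, and crop_predict/, so pairing is by name.
root = Path("datasets/preprocess/train_sam_Polyp")

def iter_triplets(root: Path):
    for img_path in sorted((root / "crop_image").glob("*.png")):
        mask_path = root / "crop_mask" / img_path.name
        pred_path = root / "crop_predict" / img_path.name
        if mask_path.exists() and pred_path.exists():
            yield img_path, mask_path, pred_path

for img, mask, pred in iter_triplets(root):
    print(img.name)  # e.g. 0_SAM_Polyp_train_175.png
    break
```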

The processed data naming rules are as follows:

```
├── crop_image
        {i}_{model_name}_{directory}_{part}_{sample_name}
```

Note: "i" is the index of the connected component being processed in the current iteration; "model_name" is the segmentation model (SAM or one of its variants); "directory" is the dataset directory name, e.g. Polyp; "part" is the subdirectory, e.g. train; "sample_name" is the original name of the image.
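
A hypothetical helper that splits such a filename back into its fields, assuming the pattern above:

```python
# Illustrative only: parse a processed filename back into its naming-rule fields.
def parse_name(filename: str) -> dict:
    stem = filename.rsplit(".", 1)[0]  # drop the extension, e.g. ".png"
    i, model_name, directory, part, sample_name = stem.split("_", 4)
    return {
        "i": int(i),                 # index of the connected component
        "model_name": model_name,    # SAM, MedSAM, SAM-Med2D, ...
        "directory": directory,      # dataset name, e.g. Polyp
        "part": part,                # subdirectory, e.g. train
        "sample_name": sample_name,  # original image name, e.g. 175
    }

print(parse_name("0_SAM_Polyp_train_175.png"))
# {'i': 0, 'model_name': 'SAM', 'directory': 'Polyp', 'part': 'train', 'sample_name': '175'}
```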

## Getting Started

Download the datasets and pre-trained models to the corresponding folders, and configure the environment. If you plan to train on your own dataset, please preprocess it first using the preprocessing.py file.

### Download

Please download the EvanySeg checkpoints to the result directory from ResNet101 result and ViT-b result.

Example datasets are provided as train.zip and test.zip.

### Installation

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```

### Test

```bash
python test.py
```

### Train

```bash
python train.py
```

## Demo

Online demo: coming soon.