SAM-OCTA2
Chinese README: README_zh
1. Before You Start
SAM-OCTA2 extends the SAM-OCTA segmentation method to layer-sequential scanning. OCTA samples, like many other types of medical imaging, are acquired as stacks of layer-sequential scans and are therefore inherently 3D; in form, the task thus corresponds to object segmentation in videos.
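To make the video analogy concrete, here is a tiny illustrative sketch of treating a layer-sequential volume as a sequence of frames (the shapes are made up for illustration, not taken from OCTA-500):

```python
import numpy as np

# Illustrative only: a layer-sequential OCTA scan is a stack of 2D slices,
# which can be treated like video frames for SAM 2-style processing.
volume = np.random.rand(8, 640, 400).astype(np.float32)  # (frames, H, W)
frames = [volume[i] for i in range(volume.shape[0])]      # one "frame" per layer
print(len(frames), frames[0].shape)                       # 8 (640, 400)
```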
Note: Training requires a large amount of GPU memory; with the frame length set to 8, I essentially maxed out an A100's 80 GB. Testing is less demanding (but still needs some preparation).
This project consists of two main parts: fine-tuning SAM-OCTA2 and processing the specific data modality of OCTA. Honestly, given the considerable workload, I did not get around to carefully documenting the environment and dependency setup. I suggest running the key files and installing the packages that the resulting warnings and errors ask for via pip.
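Since the requirements are not pinned, a quick import check can help you get started; the package list below is my guess based on the typical SAM 2 stack (note that cv2 installs as opencv-python and hydra as hydra-core):

```python
# Report which of the (likely) required packages are importable.
for pkg in ("torch", "torchvision", "numpy", "cv2", "hydra", "tqdm"):
    try:
        __import__(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing")
```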
First, you should place a pre-trained weight file into the sam2_weights folder. The download links for the pre-trained weights are as follows:
base_plus (default): https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt
large: https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt
small: https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt
tiny: https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt
base_plus is the default model. If you want to use another size, please download the corresponding weights and modify the configuration in options.py:
```python
...
parser.add_argument("-model_type", type=str, default="base_plus")
...
```
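For reference, the checkpoint file names from the links above map onto the model types like this (a hypothetical helper, not code from this repo; adjust if you renamed the files):

```python
# Assumed mapping from -model_type to the downloaded checkpoints in sam2_weights/.
WEIGHT_FILES = {
    "base_plus": "sam2_weights/sam2_hiera_base_plus.pt",
    "large": "sam2_weights/sam2_hiera_large.pt",
    "small": "sam2_weights/sam2_hiera_small.pt",
    "tiny": "sam2_weights/sam2_hiera_tiny.pt",
}
```

Given the argparse flag above, a different size can also be selected on the command line, e.g. python train_sam_octa2.py -model_type large.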
2. About Fine-tuning
Use train_sam_octa2.py to start fine-tuning:
```
python train_sam_octa2.py
```
I used a few samples from OCTA-500 as an example. If you need the complete dataset, you’ll need to contact the authors of the OCTA-500 dataset.
Relevant OCTA-500 paper: https://arxiv.org/abs/2012.07261
Place the original OCTA-500 dataset in this path:
2.1. Layer-sequential Segmentation
RV (clusters):
The images and annotations of the RV samples are stored separately, because each vessel must first be assigned an object ID: the same vessel may be split into two parts by the layer cuts. The annotation path is configured as shown, and the annotation files in this folder are generated by the mark_rv_objects method in utils.py.
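As a rough idea of what such marking involves, here is a minimal sketch based on connected-component labeling; this is an assumption about the approach, and the actual mark_rv_objects implementation in utils.py may assign IDs differently:

```python
import cv2
import numpy as np

# Sketch: assign each spatially connected vessel region its own integer ID
# (0 = background). Pieces of one vessel separated by a layer cut come out
# as distinct components unless merged by later post-processing.
def mark_rv_objects_sketch(mask_path: str) -> np.ndarray:
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    binary = (mask > 127).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(binary)
    return labels
```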
The sample path configuration is shown below. The last two images are the ones actually used; the first one is the mask of the vessel region.
FAZ:
The FAZ sample path configuration is shown below. Each layer image is formed by combining three images. The last two images are the ones actually used; the first one is only for preview and is not consumed by the model in this project.
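A minimal sketch of this combination step, assuming three aligned grayscale source images stacked as channels (the file names are placeholders, and the actual combination used by the dataset scripts may differ):

```python
import cv2
import numpy as np

# Stack three aligned grayscale images into one 3-channel layer image.
paths = ("img_0.png", "img_1.png", "img_2.png")  # placeholder file names
channels = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
layer = np.stack(channels, axis=-1)  # shape: H x W x 3
```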
The sample results and segmentation metrics will be recorded in the results folder (if it doesn’t exist, the folder will be automatically created).
Here are some examples of segmentation with prompt points. From left to right are the input image, annotation, and predicted result.
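For illustration, a hypothetical prompt-point sampler might look like the sketch below; the project's own sampling strategy (point counts, placement rules) may differ:

```python
import numpy as np

# Sample positive points inside the target region and negative points outside.
def sample_prompt_points(label: np.ndarray, n_pos: int = 1, n_neg: int = 1, seed: int = 0):
    rng = np.random.default_rng(seed)
    pos = np.argwhere(label > 0)   # (row, col) coordinates of foreground pixels
    neg = np.argwhere(label == 0)  # (row, col) coordinates of background pixels
    points = np.concatenate([
        pos[rng.choice(len(pos), n_pos, replace=False)],
        neg[rng.choice(len(neg), n_neg, replace=False)],
    ])
    point_labels = np.array([1] * n_pos + [0] * n_neg)  # 1 = positive, 0 = negative
    return points[:, ::-1], point_labels  # flip to (x, y) order, as SAM expects
```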
2.2. En-face Projection Segmentation
SAM-OCTA2 can also be used for the common en-face projection segmentation task, but it needs to be fine-tuned again for it. The sample path configuration is shown below; all the images used have been combined into one for preview.
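For context, an en-face projection collapses the depth axis of the 3D volume into a 2D image. A minimal sketch, assuming a (depth, height, width) axis order and a maximum-intensity projection (the dataset's official projections may be computed differently):

```python
import numpy as np

# Collapse the depth axis of a 3D OCTA volume into a 2D en-face image.
def enface_projection(volume: np.ndarray, mode: str = "max") -> np.ndarray:
    return volume.max(axis=0) if mode == "max" else volume.mean(axis=0)
```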
3. Sparse Annotation
The goal of sparse annotation is to use mature segmentation models to assist in annotation. The training and prediction scripts are sparse_annotation_rv_training.py and sparse_annotation_rv_prediction.py, respectively.
The path and naming rules for the training dataset are shown below:
The layer-sequential images to be predicted are placed in this path:
4. Segmentation Results Preview
Layer-sequential: RV, FAZ
En-face Projection: RV, FAZ
5. Others
If you find this useful, please cite the relevant paper: https://arxiv.org/abs/2409.09286
Additional note: the paper is currently under review for a conference submission, so more detailed weights and content will be released after acceptance.