<h1 align="center">Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation</h1> <p align="center"> <a href="https://paperswithcode.com/sota/hierarchical-text-segmentation-on-hiertext?p=hi-sam-marrying-segment-anything-model-for"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/hi-sam-marrying-segment-anything-model-for/hierarchical-text-segmentation-on-hiertext"></a> <a href="https://arxiv.org/abs/2401.17904"><img src="https://img.shields.io/badge/arXiv-2401.17904-b31b1b.svg"></a> <a href="https://ieeexplore.ieee.org/document/10750316"><img src="https://img.shields.io/badge/TPAMI-2024-blue"></a> <a><img src="https://visitor-badge.laobi.icu/badge?page_id=ymy-k.Hi-SAM"></a> </p>

Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation.

[IEEE TPAMI 2024]

This is the official repository for Hi-SAM, a unified hierarchical text segmentation model. Refer to our paper for more details.

:fire: News

:sparkles: Highlight

overview

example

:bulb: Overview of Hi-SAM

Hi-SAM

:hammer_and_wrench: Install

Recommended environment: Linux, Python 3.8, PyTorch 1.10, CUDA 11.1.

```
conda create --name hi_sam python=3.8 -y
conda activate hi_sam
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/ymy-k/Hi-SAM.git
cd Hi-SAM
pip install -r requirements.txt
```

:pushpin: Checkpoints

You can download the following model weights and put them in pretrained_checkpoint/.

| Model | Used Dataset | Weights | fgIOU | F-score |
|:---:|:---:|:---:|:---:|:---:|
| SAM-TS-B | Total-Text | OneDrive | 80.93 | 86.25 |
| SAM-TS-L | Total-Text | OneDrive | 84.59 | 88.69 |
| SAM-TS-H | Total-Text | OneDrive | 84.86 | 89.68 |

| Model | Used Dataset | Weights | fgIOU | F-score |
|:---:|:---:|:---:|:---:|:---:|
| SAM-TS-B | TextSeg | OneDrive | 87.15 | 92.81 |
| SAM-TS-L | TextSeg | OneDrive | 88.77 | 93.79 |
| SAM-TS-H | TextSeg | OneDrive | 88.96 | 93.87 |

| Model | Used Dataset | Weights | fgIOU | F-score |
|:---:|:---:|:---:|:---:|:---:|
| SAM-TS-B | HierText | OneDrive | 73.39 | 81.34 |
| SAM-TS-L | HierText | OneDrive | 78.37 | 84.99 |
| SAM-TS-H | HierText | OneDrive | 79.27 | 85.63 |

| Model | Used Dataset | Weights | Stroke F-score | Word F-score | Text-Line F-score | Paragraph F-score |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Efficient Hi-SAM-S | HierText | OneDrive | 75.60 | waiting results | | |
| Hi-SAM-B | HierText | OneDrive | 79.78 | 78.34 | 82.15 | 71.15 |
| Hi-SAM-L | HierText | OneDrive | 82.90 | 81.83 | 84.85 | 74.49 |
| Hi-SAM-H | HierText | OneDrive | 83.36 | 82.86 | 85.30 | 75.97 |

The results of Hi-SAM on the test set are reported here.

:star: Note:

  1. For faster downloading and to save storage, the checkpoints above do not contain the parameters of SAM's ViT image encoder. Please follow segment-anything to obtain sam_vit_b_01ec64.pth, sam_vit_l_0b3195.pth, and sam_vit_h_4b8939.pth, and put them in pretrained_checkpoint/ so the frozen ViT image encoder parameters can be loaded.
  2. To train Hi-SAM yourself, in addition to downloading the SAM weights, please also download the isolated mask decoder weights and put them in pretrained_checkpoint/ for initializing H-Decoder (or you can separate the mask decoder part from the SAM weights yourself): vit_b_maskdecoder.pth, vit_l_maskdecoder.pth, and vit_h_maskdecoder.pth from segment-anything, and vit_s_maskdecoder.pth from EfficientSAM. For example, if you want to train Hi-SAM-L, pretrained_checkpoint/ should look like this:
```
|- pretrained_checkpoint
|  |- sam_vit_l_0b3195.pth
|  └  vit_l_maskdecoder.pth
```
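Separating the mask decoder from a full SAM checkpoint yourself can be sketched as below. This is a hedged illustration assuming segment-anything's usual parameter naming, where decoder weights live under the "mask_decoder." key prefix; verify the key names against your own checkpoint before relying on it.

```python
def extract_mask_decoder(state_dict, prefix="mask_decoder."):
    """Keep only the mask-decoder parameters from a full SAM state dict,
    stripping the prefix so the result loads into a standalone decoder."""
    return {k[len(prefix):]: v
            for k, v in state_dict.items()
            if k.startswith(prefix)}
```

With PyTorch, this could be applied as `torch.save(extract_mask_decoder(torch.load("pretrained_checkpoint/sam_vit_l_0b3195.pth")), "pretrained_checkpoint/vit_l_maskdecoder.pth")` (again, the prefix is an assumption to check first).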

:arrow_forward: Usage

1. Visualization Demo

1.1 Pixel-level Text (Stroke) Segmentation (for SAM-TS & Hi-SAM):

```
python demo_hisam.py --checkpoint pretrained_checkpoint/sam_tss_l_hiertext.pth --model-type vit_l --input demo/2e0cb33320757201.jpg --output demo/
```

To obtain better quality on small text with sliding-window inference, run the following script:

```
python demo_hisam.py --checkpoint pretrained_checkpoint/sam_tss_l_hiertext.pth --model-type vit_l --input demo/2e0cb33320757201.jpg --output demo/2e0cb33320757201_sliding.png --patch_mode
```
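For intuition, the sliding-window idea behind --patch_mode can be sketched as follows. This is a simplified illustration, not the repo's actual implementation: `predict` stands in for any per-patch stroke-probability function, and the patch and stride sizes are placeholder assumptions.

```python
import numpy as np

def _starts(size, patch, stride):
    """Window start offsets along one axis, guaranteed to reach the edge."""
    s = list(range(0, max(size - patch, 0) + 1, stride))
    if s[-1] + patch < size:
        s.append(size - patch)
    return s

def sliding_window_mask(image, predict, patch=512, stride=384):
    """Run `predict` (patch -> HxW prob map) over overlapping windows and
    average the overlaps into a full-image probability map."""
    H, W = image.shape[:2]
    patch = min(patch, H, W)  # clamp for images smaller than one patch
    prob = np.zeros((H, W), np.float32)
    count = np.zeros((H, W), np.float32)
    for y in _starts(H, patch, stride):
        for x in _starts(W, patch, stride):
            prob[y:y + patch, x:x + patch] += predict(image[y:y + patch, x:x + patch])
            count[y:y + patch, x:x + patch] += 1
    return prob / count
```

Overlapping windows with averaged predictions keep small text sharp at full resolution while avoiding seams at patch borders.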

1.2 Word, Text-line, and Paragraph Segmentation (for Hi-SAM)

Run the following script for promptable segmentation on demo/img293.jpg:

```
python demo_hisam.py --checkpoint pretrained_checkpoint/hi_sam_l.pth --model-type vit_l --input demo/img293.jpg --output demo/ --hier_det
```

2. Evaluation

Please follow data_preparation.md to prepare the datasets at first.

2.1 Pixel-level Text (Stroke) Segmentation (for SAM-TS & Hi-SAM)

If you only want to evaluate the pixel-level text (stroke) segmentation part performance, run the following script:

```
python -m torch.distributed.launch --nproc_per_node=8 train.py --checkpoint <saved_model_path> --model-type <select_vit_type> --val_datasets hiertext_test --eval
```

If you want to evaluate the performance on HierText with sliding window inference, run the following scripts:

```
mkdir img_eval
python demo_hisam.py --checkpoint <saved_model_path> --model-type <select_vit_type> --input datasets/HierText/test/ --output img_eval/ --patch_mode
python eval_img.py
```

Sliding-window inference takes a relatively long time. For faster inference, you can split the test images across multiple folders and run inference on each folder with a separate GPU.
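Splitting the images across folders can be done with a small helper like the one below. This is a generic sketch (the round-robin grouping, folder names, and file extensions are illustrative choices, not part of the repo):

```python
import shutil
from pathlib import Path

def split_round_robin(items, n):
    """Distribute items into n roughly equal groups, one per GPU."""
    return [items[i::n] for i in range(n)]

def split_folder(src, n):
    """Copy the images in `src` into sub-folders part_0 .. part_{n-1}."""
    src = Path(src)
    images = sorted(p for p in src.iterdir()
                    if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
    for gpu, group in enumerate(split_round_robin(images, n)):
        dst = src / f"part_{gpu}"
        dst.mkdir(exist_ok=True)
        for img in group:
            shutil.copy2(img, dst / img.name)
```

Each GPU can then run demo_hisam.py with --input pointed at its own part_i folder.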

2.2 Hierarchical Text Segmentation (for Hi-SAM)

For pixel-level text (stroke) performance, please follow section 2.1. For word, text-line, and paragraph level performance on HierText, please follow the subsequent steps.

Step 1: run the following scripts to get the required jsonl file:

```
python demo_amg.py --checkpoint <saved_model_path> --model-type <select_vit_type> --input datasets/HierText/test/ --total_points 1500 --batch_points 100 --eval
cd hiertext_eval
python collect_results.py --saved_name res_1500pts.jsonl
```

For faster inference, you can split the test or validation images across multiple folders and run inference on each folder with a separate GPU.

Step 2: if you run inference on the test set of HierText, please submit the final jsonl file to the official website to obtain the evaluation metrics. If you run inference on the validation set: (1) follow the HierText repo to download the validation ground truth validation.jsonl and put it in hiertext_eval/gt/; (2) run the following script (borrowed from the HierText repo) to get the evaluation metrics:

```
python eval.py --gt=gt/validation.jsonl --result=res_1500pts.jsonl --output=score.txt --mask_stride=1 --eval_lines --eval_paragraphs
cd ..
```

The evaluation process takes about 20 minutes. The evaluation metrics are saved to the file specified by --output.

3. Training

Please follow data_preparation.md to prepare the datasets, and prepare the required pretrained weights mentioned in the Checkpoints section.

3.1 Training Hi-SAM

For example, to train Hi-SAM-L on HierText:

```
python -m torch.distributed.launch --nproc_per_node=8 train.py --checkpoint ./pretrained_checkpoint/sam_vit_l_0b3195.pth --model-type vit_l --output work_dirs/hi_sam_l/ --batch_size_train 1 --lr_drop_epoch 130 --max_epoch_num 150 --train_datasets hiertext_train --val_datasets hiertext_val --hier_det --find_unused_params
```

The released models are trained on 8 V100 (32G) GPUs (Hi-SAM-L takes about 2 days). The saved models after the final epoch are used for evaluation.

3.2 Training SAM-TS

For example, to train SAM-TS-L on TextSeg:

```
python -m torch.distributed.launch --nproc_per_node=8 train.py --checkpoint ./pretrained_checkpoint/sam_vit_l_0b3195.pth --model-type vit_l --output work_dirs/sam_ts_l_textseg/ --batch_size_train 1 --max_epoch_num 70 --train_datasets textseg_train --val_datasets textseg_val
```

The released models are trained on 8 V100 (32G) GPUs (SAM-TS only takes a few hours). The best models on validation set are used for evaluation.

:eye: Applications

1. Promptable Multi-granularity Text Erasing and Inpainting

Hi-SAM can be combined with Stable-Diffusion-inpainting for interactive text erasing and inpainting (click a single point to erase and inpaint a word, text-line, or paragraph). See this project for an implementation combining Hi-SAM and Stable-Diffusion.
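One practical detail when feeding a word or line mask to an inpainting model: it usually helps to dilate the mask slightly so anti-aliased text borders are fully covered. A minimal NumPy sketch (the radius is an illustrative assumption; the linked project may handle this differently):

```python
import numpy as np

def dilate_mask(mask, r=3):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element:
    a pixel is set if any pixel within Chebyshev distance r was set."""
    m = np.pad(mask.astype(bool), r)
    H, W = mask.shape
    out = np.zeros((H, W), dtype=bool)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= m[dy:dy + H, dx:dx + W]
    return out
```

The dilated mask can then be passed as the inpainting region so no text fringe survives at the mask boundary.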

2. Text Detection

Word-level-only or text-line-level-only text detection. Hi-SAM directly segments the intact text instance region rather than a shrunken text kernel region.

spotting

Two demo models are provided here: word_detection_totaltext.pth (trained on Total-Text, word detection only) and line_detection_ctw1500.pth (trained on CTW1500, text-line detection only). Put them in pretrained_checkpoint/. Then, for example, run the following script for word detection (for the detection demo on Total-Text only):

```
python demo_text_detection.py --checkpoint pretrained_checkpoint/word_detection_totaltext.pth --model-type vit_h --input demo/img643.jpg --output demo/ --dataset totaltext
```

For text-line detection (only for the detection demo on CTW1500):

```
python demo_text_detection.py --checkpoint pretrained_checkpoint/line_detection_ctw1500.pth --model-type vit_h --input demo/1165.jpg --output demo/ --dataset ctw1500
```

3. Promptable Scene Text Spotting

Combination with a single-point scene text spotter, SPTSv2. SPTSv2 recognizes scene text but predicts only a single point position per instance. By providing that point position as a prompt to Hi-SAM, the intact text mask can be obtained. Some demo figures are shown below; the green stars indicate the point prompts. The masks are generated by the word detection model from section 2 (Text Detection).

spotting

:label: TODO

💗 Acknowledgement

:black_nib: Citation

If you find Hi-SAM helpful in your research, please consider giving this repository a :star: and citing:

```bibtex
@article{10750316,
  title={Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation},
  author={Ye, Maoyuan and Zhang, Jing and Liu, Juhua and Liu, Chenyu and Yin, Baocai and Liu, Cong and Du, Bo and Tao, Dacheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  volume={},
  number={},
  pages={1-16}
}
```