
Fast Segment Anything

[📕Paper] [🤗HuggingFace Demo] [Colab demo] [Replicate demo & API] [OpenXLab Demo] [Model Zoo] [BibTeX] [Video Demo]

FastSAM Speed

The Fast Segment Anything Model (FastSAM) is a CNN-based Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to SAM at 50× higher run-time speed.

FastSAM design

🍇 Updates

Installation

Clone the repository locally:

git clone https://github.com/CASIA-IVA-Lab/FastSAM.git

Create the conda environment. The code requires python>=3.7, pytorch>=1.7, and torchvision>=0.8. Please follow the instructions here to install the PyTorch and TorchVision dependencies; installing both with CUDA support is strongly recommended.

conda create -n FastSAM python=3.9
conda activate FastSAM

Install the packages:

cd FastSAM
pip install -r requirements.txt

Install CLIP (required only if you want to test the text prompt):

pip install git+https://github.com/openai/CLIP.git

<a name="GettingStarted"></a> Getting Started

First download a model checkpoint.

Then, you can run the scripts below to try the everything mode and the three prompt modes.

# Everything mode
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
# Text prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg  --text_prompt "the yellow dog"
# Box prompt (xywh)
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt "[[570,200,230,400]]"
# Points prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg  --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"

You can use the following code to generate all masks and visualize the results.

from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')
IMAGE_PATH = './images/dogs.jpg'
DEVICE = 'cpu'

# Run the model once to generate candidate masks for the whole image
everything_results = model(IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)

# Everything prompt: keep all generated masks
ann = prompt_process.everything_prompt()

prompt_process.plot(annotations=ann, output_path='./output/dog.jpg')
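
The example above runs on the CPU. If you installed PyTorch with CUDA support (as recommended in Installation), you can select the device automatically; a minimal sketch, using only the torch dependency already required:

import torch
# Prefer the GPU when one is available; fall back to CPU otherwise
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'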

For point/box/text mode prompts, use:

# Box prompt: bboxes are [x1, y1, x2, y2] corners (default [[0, 0, 0, 0]])
ann = prompt_process.box_prompt(bboxes=[[200, 200, 300, 300]])

# Text prompt
ann = prompt_process.text_prompt(text='a photo of a dog')

# Point prompt: points are [[x1, y1], [x2, y2], ...] (default [[0, 0]])
# Point labels: 1 = foreground, 0 = background (default [0])
ann = prompt_process.point_prompt(points=[[620, 360]], pointlabel=[1])

prompt_process.plot(annotations=ann, output_path='./output/dog.jpg')
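
Note that the CLI example above passes the box prompt as xywh, while box_prompt here expects [x1, y1, x2, y2] corners; converting the earlier CLI box takes one line:

# The xywh box from the CLI example, converted to the xyxy corners box_prompt expects
x, y, w, h = 570, 200, 230, 400
ann = prompt_process.box_prompt(bboxes=[[x, y, x + w, y + h]])  # [[570, 200, 800, 600]]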

You are also welcome to try our Colab demo: FastSAM_example.ipynb.

Different Inference Options

We provide various options for different purposes; details are in MORE_USAGES.md.
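
For example, the input size and threshold parameters from the Python API above can also be set on the command line. The flag names below are assumptions that mirror the imgsz/conf/iou keyword arguments; MORE_USAGES.md is the authoritative reference:

# Illustration only: everything mode with an explicit input size and thresholds
# (--imgsz/--conf/--iou are assumed to mirror the Python keyword arguments)
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --imgsz 1024 --conf 0.4 --iou 0.9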

Training or Validation

Training from scratch or validation: Training and Validation Code.

Web demo

Gradio demo

# First download the pre-trained model to ./weights/FastSAM.pt
python app_gradio.py
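
For a sense of what the demo wraps, here is a minimal sketch of a Gradio app built on the Getting Started API above (hypothetical and simplified; the actual app_gradio.py is more featureful):

import gradio as gr
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')

def segment_everything(image_path):
    # Run FastSAM in everything mode and render the annotated result to disk
    results = model(image_path, device='cpu', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
    prompt = FastSAMPrompt(image_path, results, device='cpu')
    ann = prompt.everything_prompt()
    prompt.plot(annotations=ann, output_path='./output/gradio.jpg')
    return './output/gradio.jpg'

gr.Interface(fn=segment_everything,
             inputs=gr.Image(type='filepath'),
             outputs=gr.Image()).launch()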

[HF Everything demo] [HF Points demo]

Replicate demo

[Replicate demo 1] [Replicate demo 2] [Replicate demo 3]

<a name="Models"></a>Model Checkpoints

Two versions of the model, differing in size, are available. Click the links below to download the checkpoint for the corresponding model type.

Results

All results were tested on a single NVIDIA GeForce RTX 3090.

1. Inference time

Running speed under different numbers of point prompts (ms).

| method | params | 1 | 10 | 100 | E(16x16) | E(32x32*) | E(64x64) |
|---|---|---|---|---|---|---|---|
| SAM-H | 0.6G | 446 | 464 | 627 | 852 | 2099 | 6972 |
| SAM-B | 136M | 110 | 125 | 230 | 432 | 1383 | 5417 |
| FastSAM | 68M | 40 | 40 | 40 | 40 | 40 | 40 |
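
FastSAM's row is flat because it segments everything in a single forward pass and applies prompts afterwards, so the prompt count does not affect runtime. A rough way to time it yourself (a sketch; results depend on hardware, input size, and warm-up):

import time
import torch
from fastsam import FastSAM

model = FastSAM('./weights/FastSAM.pt')
model('./images/dogs.jpg', device='cuda', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)  # warm-up run

runs = 10
torch.cuda.synchronize()  # don't let pending GPU work leak into the timing
start = time.perf_counter()
for _ in range(runs):
    model('./images/dogs.jpg', device='cuda', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
torch.cuda.synchronize()
print(f"avg inference: {(time.perf_counter() - start) / runs * 1000:.1f} ms")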

2. Memory usage

| Dataset | Method | GPU Memory (MB) |
|---|---|---|
| COCO 2017 | FastSAM | 2608 |
| COCO 2017 | SAM-H | 7060 |
| COCO 2017 | SAM-B | 4670 |
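
PyTorch's built-in counters give one way to approximate numbers like these (a sketch; allocator and driver overhead are not captured, so absolute values will differ):

import torch
from fastsam import FastSAM

torch.cuda.reset_peak_memory_stats()
model = FastSAM('./weights/FastSAM.pt')
model('./images/dogs.jpg', device='cuda', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2  # peak tensor memory on the current device
print(f"peak GPU memory: {peak_mb:.0f} MB")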

3. Zero-shot Transfer Experiments

Edge Detection

Tested on the BSDS500 dataset.

| method | year | ODS | OIS | AP | R50 |
|---|---|---|---|---|---|
| HED | 2015 | .788 | .808 | .840 | .923 |
| SAM | 2023 | .768 | .786 | .794 | .928 |
| FastSAM | 2023 | .750 | .790 | .793 | .903 |

Object Proposals

COCO
| method | AR10 | AR100 | AR1000 | AUC |
|---|---|---|---|---|
| SAM-H E64 | 15.5 | 45.6 | 67.7 | 32.1 |
| SAM-H E32 | 18.5 | 49.5 | 62.5 | 33.7 |
| SAM-B E32 | 11.4 | 39.6 | 59.1 | 27.3 |
| FastSAM | 15.7 | 47.3 | 63.7 | 32.2 |
LVIS

bbox AR@1000

| method | all | small | med. | large |
|---|---|---|---|---|
| ViTDet-H | 65.0 | 53.2 | 83.3 | 91.2 |
| *zero-shot transfer methods:* | | | | |
| SAM-H E64 | 52.1 | 36.6 | 75.1 | 88.2 |
| SAM-H E32 | 50.3 | 33.1 | 76.2 | 89.8 |
| SAM-B E32 | 45.0 | 29.3 | 68.7 | 80.6 |
| FastSAM | 57.1 | 44.3 | 77.1 | 85.3 |

Instance Segmentation On COCO 2017

| method | AP | APS | APM | APL |
|---|---|---|---|---|
| ViTDet-H | .510 | .320 | .543 | .689 |
| SAM | .465 | .308 | .510 | .617 |
| FastSAM | .379 | .239 | .434 | .500 |

4. Performance Visualization

Several segmentation results:

Natural Images

Text to Mask

5. Downstream tasks

Results on several downstream tasks demonstrate the model's effectiveness.

Anomaly Detection

Salient Object Detection

Building Extraction

License

The model is licensed under the Apache 2.0 license.

Acknowledgement

Contributors

Our project wouldn't be possible without the contributions of these amazing people! Thank you all for making this project better.

<a href="https://github.com/CASIA-IVA-Lab/FastSAM/graphs/contributors"> <img src="https://contrib.rocks/image?repo=CASIA-IVA-Lab/FastSAM" /> </a>

Citing FastSAM

If you find this project useful for your research, please consider citing the following BibTeX entry.

@misc{zhao2023fast,
      title={Fast Segment Anything},
      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
      year={2023},
      eprint={2306.12156},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Star History