CLIP-SAM

A small experiment on combining CLIP with SAM for open-vocabulary image segmentation.

The approach is to first identify all the parts of an image using SAM, and then use CLIP to find the segments that best match a given text description (see the sketch below).
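A minimal sketch of that pipeline, assuming the standard segment-anything and CLIP Python APIs; the checkpoint name sam_vit_h_4b8939.pth, the ViT-B/32 CLIP model, and the file example.jpg are placeholders and not necessarily what the notebook uses:

    # Sketch: generate SAM masks, then rank them with CLIP against a text prompt.
    import cv2
    import numpy as np
    import torch
    import clip
    from PIL import Image
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # 1. Load SAM and generate masks for all parts of the image.
    #    (Checkpoint filename is an assumption; use whichever weights you downloaded.)
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
    mask_generator = SamAutomaticMaskGenerator(sam)
    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
    masks = mask_generator.generate(image)

    # 2. Load CLIP and encode the text prompt.
    model, preprocess = clip.load("ViT-B/32", device=device)
    text = clip.tokenize(["kiwi"]).to(device)

    # 3. Score each masked region's bounding-box crop against the prompt.
    scores = []
    for m in masks:
        x, y, w, h = m["bbox"]  # SAM returns boxes in XYWH format
        crop = Image.fromarray(image[y:y + h, x:x + w])
        with torch.no_grad():
            logits_per_image, _ = model(preprocess(crop).unsqueeze(0).to(device), text)
        scores.append(logits_per_image.item())

    # 4. Keep the segment that best matches the description.
    best = masks[int(np.argmax(scores))]
    print("Best matching segment bbox:", best["bbox"])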

Usage

  1. Download the model weights and place them in this repo's root directory.

  2. Install dependencies:

    pip install torch opencv-python Pillow
    pip install git+https://github.com/openai/CLIP.git
    pip install git+https://github.com/facebookresearch/segment-anything.git
  3. Run the notebook main.ipynb.

Example

Example output for the prompt "kiwi":

Image with segmentation

Example Image Source