napari-SAM4IS
A napari plugin for instance and semantic segmentation annotation using the Segment Anything Model (SAM)
This is a plugin for napari, a multi-dimensional image viewer for Python, for instance and semantic segmentation annotation. It provides an easy-to-use interface for annotating images, with the option to export annotations in COCO format.
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
Installation
To use this plugin, you'll need to install the napari multi-dimensional image viewer and the Segment Anything Model (SAM) library.
napari Installation
You can install napari using pip:
pip install "napari[all]"
Alternatively, you can install napari and all of its dependencies with conda:
conda install -c conda-forge napari
For more detailed instructions, please refer to the napari installation guide.
SAM Installation
You can install SAM from the official GitHub repository using pip:
pip install git+https://github.com/facebookresearch/segment-anything.git
Or you can install from source by cloning the repository:
git clone https://github.com/facebookresearch/segment-anything.git
cd segment-anything
pip install -e .
For more detailed instructions, please refer to the SAM installation guide.
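To confirm that SAM is installed correctly, a minimal sketch like the following should work; the checkpoint filename is only an example and must be downloaded from the SAM repository first:

```python
# Minimal sketch: load a SAM checkpoint and build a predictor.
# The checkpoint path is an example; download it from the SAM repository first.
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)  # prompt-based segmentation is driven through this object
```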
napari-SAM4IS Installation
You can install napari-SAM4IS via pip:
pip install napari-SAM4IS
To install the latest development version:
pip install git+https://github.com/hiroalchem/napari-SAM4IS.git
Usage
Preparation
- Open an image in napari and launch the plugin; opening an image after launching the plugin also works (a minimal launch sketch follows this list).
- Upon launching the plugin, three layers will be automatically created: SAM-Box, SAM-Predict, and Accepted. The usage of these layers will be explained later.
- In the widget that appears, select the model you want to use and click the load button. (The default option is recommended.)
- Next, select the image layer you want to annotate.
- Then, select whether you want to do instance segmentation or semantic segmentation. (Note that for 3D images, semantic segmentation should be chosen in the current version.)
- Finally, select the output layer: "shapes" for instance segmentation or "labels" for semantic segmentation. (For instance segmentation, the "Accepted" layer can also be used.)
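As a rough sketch of the preparation steps above, you can start napari from Python with an image already loaded and then open the plugin from the Plugins menu; the sample image and layer name below are placeholders:

```python
# Sketch only: open napari with an example image, then start the plugin
# from the Plugins menu. The sample image and layer name are placeholders.
import napari
from skimage import data

viewer = napari.Viewer()
viewer.add_image(data.camera(), name="example")  # any image layer works
napari.run()
```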
Annotation
- Select the SAM-Box layer and use the rectangle tool to enclose the object you want to segment.
- An automatic segmentation mask will be created and output to the SAM-Predict layer.
- If you want to make adjustments, do so in the SAM-Predict layer.
- To accept or reject the annotation, press "a" or "r" on the keyboard, respectively.
- If you accept the annotation, it will be written as label 1 for semantic segmentation, or converted to a polygon and added to the designated layer for instance segmentation (see the sketch after this list).
- If you reject the annotation, the segmentation mask in the SAM-Predict layer will be discarded.
- After you accept or reject an annotation, the SAM-Predict layer is automatically cleared and the active layer switches back to SAM-Box.
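For instance segmentation, the accepted mask ends up as a polygon in a shapes layer. The sketch below illustrates one way a binary mask could be turned into a polygon with scikit-image; it is only an illustration of the data flow, not necessarily the plugin's exact implementation:

```python
# Illustration only: convert a binary mask to a polygon for a napari Shapes layer.
import numpy as np
from skimage import measure

def mask_to_polygon(mask: np.ndarray) -> np.ndarray:
    """Return the largest contour of a binary mask as an (N, 2) polygon."""
    contours = measure.find_contours(mask.astype(float), 0.5)
    return max(contours, key=len)

# shapes_layer.add([mask_to_polygon(mask)], shape_type="polygon")
```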
Saving
- If you have output to the labels layer, use napari's standard functionality to save the mask (a programmatic example follows this list).
- If you have output to the shapes layer, you can save it using napari's standard functionality, or click the "save" button to write a COCO-format JSON file for each image in the folder (the JSON file will have the same name as the image).
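As an example of the standard saving workflow, an output layer can also be saved programmatically; the layer and file names below are placeholders:

```python
# Sketch: save an output layer programmatically; equivalent to
# File > Save Selected Layer(s) in the napari GUI. Names are placeholders.
layer = viewer.layers["Accepted"]   # or whichever layer holds your annotations
layer.save("annotation.tif")        # Labels layers save as images; Shapes layers save as CSV
```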
Contributing
Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.
License
Distributed under the terms of the Apache Software License 2.0, "napari-SAM4IS" is free and open source software.
Issues
If you encounter any problems, please file an issue along with a detailed description.