<h1 align="center"> <img src="labelme/icons/icon.png"><br/>labelme </h1> <h4 align="center"> Image Polygonal Annotation with Python </h4> <div align="center"> <a href="https://pypi.python.org/pypi/labelme"><img src="https://img.shields.io/pypi/v/labelme.svg"></a> <a href="https://pypi.org/project/labelme"><img src="https://img.shields.io/pypi/pyversions/labelme.svg"></a> <a href="https://github.com/labelmeai/labelme/actions"><img src="https://github.com/labelmeai/labelme/workflows/ci/badge.svg?branch=main&event=push"></a> </div> <div align="center"> <a href="#starter-guide"><b>Starter Guide</b></a> | <a href="#installation"><b>Installation</b></a> | <a href="#usage"><b>Usage</b></a> | <a href="#examples"><b>Examples</b></a> <!-- | <a href="https://github.com/labelmeai/labelme/discussions"><b>Community</b></a> --> <!-- | <a href="https://www.youtube.com/playlist?list=PLI6LvFw0iflh3o33YYnVIfOpaO0hc5Dzw"><b>Youtube FAQ</b></a> --> </div> <br/> <div align="center"> <img src="examples/instance_segmentation/.readme/annotation.jpg" width="70%"> </div>Description
Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu.
It is written in Python and uses Qt for its graphical interface.
<img src="examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClass/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObject/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg" width="19%" />
<i>VOC dataset example of instance segmentation.</i>
<img src="examples/semantic_segmentation/.readme/annotation.jpg" width="30%" /> <img src="examples/bbox_detection/.readme/annotation.jpg" width="30%" /> <img src="examples/classification/.readme/annotation_cat.jpg" width="35%" />
<i>Other examples (semantic segmentation, bbox detection, and classification).</i>
<img src="https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png" width="32%" />
<i>Various primitives (polygon, rectangle, circle, line, and point).</i>
## Features
- Image annotation for polygon, rectangle, circle, line and point. (tutorial)
- Image flag annotation for classification and cleaning. (#166)
- Video annotation. (video annotation)
- GUI customization (predefined labels / flags, auto-saving, label validation, etc). (#144)
- Exporting VOC-format dataset for semantic/instance segmentation. (semantic segmentation, instance segmentation)
- Exporting COCO-format dataset for instance segmentation. (instance segmentation)
## Starter Guide
If you're new to Labelme, you can get started with Labelme Starter (FREE), which contains:
- Installation guides for all platforms: Windows, macOS, and Linux 💻
- Step-by-step tutorials: first annotation to editing, exporting, and integrating with other programs 📕
- A compilation of valuable resources for further exploration 🔗.
## Installation
There are several options:
- Platform agnostic installation: Anaconda
- Platform specific installation: Ubuntu, macOS, Windows
- Pre-built binaries from the release section
### Anaconda
You need to install Anaconda, then run the commands below:
```bash
# python3
conda create --name=labelme python=3
source activate labelme
# conda install -c conda-forge pyside2
# conda install pyqt
# pip install pyqt5  # pyqt5 can be installed via pip on python3
pip install labelme
# or you can install everything by conda command
# conda install labelme -c conda-forge
```
### Ubuntu
```bash
sudo apt-get install labelme

# or
sudo pip3 install labelme

# or install standalone executable from:
# https://github.com/labelmeai/labelme/releases

# or install from source
pip3 install git+https://github.com/labelmeai/labelme
```
### macOS
```bash
brew install pyqt  # maybe pyqt5
pip install labelme

# or install standalone executable/app from:
# https://github.com/labelmeai/labelme/releases

# or install from source
pip3 install git+https://github.com/labelmeai/labelme
```
### Windows
Install Anaconda, then in an Anaconda Prompt run:
```bash
conda create --name=labelme python=3
conda activate labelme
pip install labelme

# or install standalone executable/app from:
# https://github.com/labelmeai/labelme/releases

# or install from source
pip3 install git+https://github.com/labelmeai/labelme
```
## Usage
Run `labelme --help` for details.
The annotations are saved as a JSON file.
```bash
labelme  # just open gui

# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg  # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json  # close window after the save
labelme apc2016_obj3.jpg --nodata  # not include image data but relative image path in JSON file
labelme apc2016_obj3.jpg \
  --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball  # specify label list

# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/  # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt  # specify label list with a file
```
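For reference, here is a minimal sketch of reading back a saved annotation, assuming the `apc2016_obj3.json` file produced by the tutorial commands above. The field names (`imagePath`, `shapes`, `label`, `shape_type`, `points`) reflect the usual labelme JSON layout; verify them against a file written by your own labelme version.

```python
# Minimal sketch: inspect a labelme annotation file.
# Field names follow the typical labelme JSON layout; check a file
# produced by your own labelme version if in doubt.
import json

with open("apc2016_obj3.json") as f:
    annotation = json.load(f)

print("image:", annotation["imagePath"])
for shape in annotation["shapes"]:
    # Each shape carries a label, a shape type (polygon, rectangle, ...)
    # and a list of [x, y] points.
    print(shape["label"], shape["shape_type"], len(shape["points"]), "points")
```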
### Command Line Arguments
- `--output` specifies the location that annotations will be written to. If the location ends with `.json`, a single annotation will be written to this file; only one image can be annotated if a `.json` location is specified. If the location does not end with `.json`, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
- The first time you run labelme, it will create a config file in `~/.labelmerc`. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the `--config` flag (a minimal config sketch follows this list).
- Without the `--nosortlabels` flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
- Flags are assigned to an entire image. (example)
- Labels are assigned to a single polygon. (example)
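As a rough illustration of such a config file, the YAML sketch below sets a few commonly used options. The key names are assumptions based on labelme's default configuration (`labelme/config/default_config.yaml`); treat that file in your installed version as the authoritative reference.

```yaml
# Hypothetical ~/.labelmerc sketch -- key names are assumptions based on
# labelme's default config; confirm against labelme/config/default_config.yaml.
auto_save: true                   # write the JSON file automatically after each edit
sort_labels: true                 # list labels alphabetically in the GUI
labels: [person, car, bicycle]    # predefined label list shown in the label dialog
flags: [occluded, blurry]         # image-level flags for classification/cleaning
```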
## FAQ
- How to convert a JSON file to a numpy array? See examples/tutorial and the sketch after this list.
- How to load a label PNG file? See examples/tutorial.
- How to get annotations for semantic segmentation? See examples/semantic_segmentation.
- How to get annotations for instance segmentation? See examples/instance_segmentation.
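For the first two questions, a rough sketch is shown below: it decodes the embedded image and rasterizes the polygons into a label array using helpers from `labelme.utils`. The helper names and return values (`img_b64_to_arr`, `shapes_to_label`) are based on recent labelme releases and may differ in yours; the code in examples/tutorial is the maintained reference.

```python
# Sketch of turning a labelme JSON file into a per-pixel label array.
# Helper names are assumptions based on labelme.utils in recent releases;
# examples/tutorial contains the maintained reference code.
import json

import numpy as np
from labelme import utils

with open("apc2016_obj3.json") as f:
    data = json.load(f)

# Decode the image embedded in the JSON (use imagePath instead if the
# file was saved with --nodata).
img = utils.img_b64_to_arr(data["imageData"])

# Map label names to integer class values; 0 is reserved for the background.
label_name_to_value = {"_background_": 0}
for shape in data["shapes"]:
    label_name_to_value.setdefault(shape["label"], len(label_name_to_value))

# Rasterize the polygons into a class map (recent versions also return an
# instance map, hence the tuple unpacking).
lbl, _ = utils.shapes_to_label(img.shape, data["shapes"], label_name_to_value)
print("classes present:", np.unique(lbl))
```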
## Examples
- Image Classification
- Bounding Box Detection
- Semantic Segmentation
- Instance Segmentation
- Video Annotation
## How to develop
```bash
git clone https://github.com/labelmeai/labelme.git
cd labelme

# Install anaconda3 and labelme
curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
source .anaconda3/bin/activate
pip install -e .
```
## How to build standalone executable
The following shows how to build the standalone executable on macOS, Linux, and Windows.
```bash
# Setup conda
conda create --name labelme python=3.9
conda activate labelme

# Build the standalone executable
pip install .
pip install 'matplotlib<3.3'
pip install pyinstaller
pyinstaller labelme.spec
dist/labelme --version
```
## How to contribute
Make sure the tests below pass in your environment. See `.github/workflows/ci.yml` for more detail.
```bash
pip install -r requirements-dev.txt

ruff format --check  # `ruff format` to auto-fix
ruff check  # `ruff check --fix` to auto-fix
MPLBACKEND='agg' pytest -vsx tests/
```
## Acknowledgement
This repo is a fork of mpitid/pylabelme.