OpenLabeling: open-source image and video labeler

Image labeling in multiple annotation formats:

<img src="https://media.giphy.com/media/l49JDgDSygJN369vW/giphy.gif" width="40%"><img src="https://media.giphy.com/media/3ohc1csRs9PoDgCeuk/giphy.gif" width="40%"> <img src="https://media.giphy.com/media/3o752fXKwTJJkhXP32/giphy.gif" width="40%"><img src="https://media.giphy.com/media/3ohc11t9auzSo6fwLS/giphy.gif" width="40%">

Citation

This project was developed for the following paper; please consider citing it:

@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots}, 
  year={2018},
  pages={2336-2341},
}

Quick start

To start using the YOLO Bounding Box Tool, download the latest release or clone the repo:

git clone --recurse-submodules git@github.com:Cartucho/OpenLabeling.git

Prerequisites

You need to install Python and the packages listed in requirements.txt (most notably OpenCV).

Alternatively, you can install everything at once by simply running:

python -m pip install -U pip
python -m pip install -U -r requirements.txt
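
On systems where python points to Python 2, the same commands work with python3:

    python3 -m pip install -U pip
    python3 -m pip install -U -r requirements.txt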

Run project

Step by step:

  1. Open the main/ directory

  2. Insert the input images and videos in the folder input/

  3. Insert the classes in the file class_list.txt (one class name per line; see the example after this list)

  4. Run the code:

     python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [-n N_FRAMES]

     optional arguments:
      -h, --help                Show this help message and exit
      -i, --input               Path to images and videos input folder | Default: input/
      -o, --output              Path to output folder (if using the PASCAL VOC format it's important to set this path correctly) | Default: output/
      -t, --thickness           Bounding box and cross line thickness (int) | Default: -t 1
      --tracker TRACKER_TYPE    Tracker to use: ['CSRT', 'KCF', 'MOSSE', 'MIL', 'BOOSTING', 'MEDIANFLOW', 'TLD', 'GOTURN', 'DASIAMRPN']
      -n N_FRAMES               Number of frames to track an object for

  5. You can find the annotations in the folder output/
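
For example, with a hypothetical three-class dataset, class_list.txt would simply contain one name per line:

    car
    pedestrian
    traffic_light

and the tool could then be launched with explicit paths and one of the trackers listed above:

    python main.py --input input/ --output output/ --tracker KCF -n 20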

To use the DASIAMRPN tracker:

  1. Install the DaSiamRPN submodule and download the model (VOT) from Google Drive
  2. Copy it into 'DaSiamRPN/code/'
  3. Set the default tracker in main.py or run it with --tracker DASIAMRPN
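
For example, once the model file is in place:

    python main.py --tracker DASIAMRPN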

How to use the deep learning feature

Download the pre-trained model from http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz and extract it into object_detection/models (create the models folder if necessary).
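
As a sketch, on Linux/macOS this can be done from the repository root (assuming wget and tar are available):

    mkdir -p object_detection/models
    wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
    tar -xzf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz -C object_detection/models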

Note: the default model used in main_auto.py is ssdlite_mobilenet_v2_coco_2018_05_09. To use a different pre-trained model, set graph_model_path in main_auto.py.
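
For instance, pointing main_auto.py at the extracted default model might look like this (the frozen_inference_graph.pb filename is an assumption based on the usual TensorFlow object-detection export layout):

    graph_model_path = 'object_detection/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb'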

GUI usage

Keyboard, press:

<img src="https://github.com/Cartucho/OpenLabeling/blob/master/keyboard_usage.jpg">
| Key | Description |
| --- | --- |
| a/d | previous/next image |
| s/w | previous/next class |
| e | edges |
| h | help |
| q | quit |

Video:

| Key | Description |
| --- | --- |
| p | predict the next frames' labels |

Mouse:

Authors