PolygonRNN++

This is the official PyTorch reimplementation of Polygon-RNN++ (CVPR 2018). This repository allows you to train new Polygon-RNN++ models, and run our demo tool on local machines. For technical details, please refer to:

Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++
David Acuna*, Huan Ling*, Amlan Kar*, Sanja Fidler (* denotes equal contribution)
CVPR 2018
[Paper] [Video] [Project Page] [Demo]
<img src = "Docs/model.png" width="56%"/> <img src = "Docs/polydemo.gif" width="42%"/>

Where is the code?

To get the code, please sign up here. We will use GitHub to track issues with the code and to announce the availability of newer versions (announcements are also made on the website and by e-mail to signed-up users).

If you use this code, please cite:

```
@inproceedings{AcunaCVPR18,
  title={Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++},
  author={David Acuna and Huan Ling and Amlan Kar and Sanja Fidler},
  booktitle={CVPR},
  year={2018}
}

@inproceedings{CastrejonCVPR17,
  title={Annotating Object Instances with a Polygon-RNN},
  author={Lluis Castrejon and Kaustav Kundu and Raquel Urtasun and Sanja Fidler},
  booktitle={CVPR},
  year={2017}
}
```

Contents

  1. Reproduction Results
  2. Environment Setup
  3. Tool
    1. Backend
    2. Frontend
  4. Testing Models
  5. Training Models
    1. Data
    2. Training MLE Model
    3. Training RL Model
    4. Training Evaluator
    5. Training GGNN

Reproduction Results

These are the reproduction results from this repository, compared to the numbers reported in the paper:

| Training Type | Num first points | LSTM Beam Size | Before | Now |
|---|---|---|---|---|
| MLE + Att | 1 | 1 | 65.43 | 66.35 |
| MLE + Att + RL | 1 | 1 | 67.17 | 67.45 |
| MLE + Att + Evaluator | 5 | 1 | 69.72 | 71.05 |
| MLE + Att + Evaluator | 5 | 8 | 70.21 | 70.91 |
| MLE + Att + Evaluator + GGNN | 5 | 8 | 71.38 | 72.05 |
| MLE + Att + Evaluator + GGNN | 5 | 1 | - | 72.08 |
| MLE + Att + Evaluator + GGNN (Shared Encoder) | 5 | 8 | - | 72.22 |
| MLE + Att + Evaluator + GGNN (Shared Encoder) | 5 | 1 | - | 72.33 |
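
The "LSTM Beam Size" column refers to beam search over the convLSTM's per-step vertex predictions. A minimal, self-contained sketch of the idea (illustrative only: here each step is scored independently, whereas the real model's scores condition on the decoding history):

```python
import numpy as np

def beam_search(step_logprobs, beam_size=5):
    """Keep the `beam_size` highest-scoring vertex sequences.

    step_logprobs: list of 1-D arrays of log-probabilities over
    candidate vertex positions at each decoding step.
    Returns the best (sequence, total_logprob) pair.
    """
    beams = [((), 0.0)]
    for lp in step_logprobs:
        candidates = [(seq + (i,), score + lp[i])
                      for seq, score in beams
                      for i in range(len(lp))]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0]

steps = [np.log(np.array(p)) for p in
         ([0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4])]
best_seq, best_score = beam_search(steps, beam_size=2)
```

With `beam_size=1` this degenerates to greedy decoding, which is the setting used for the tool benchmark below.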

Note: Benchmarked forward-pass speed for the tool (with 5 first points and beam size 1) is 0.3 seconds per interaction on a Titan Xp

Note: Shared Encoder refers to sharing the ResNet encoder between the graph network (GGNN) and the convLSTM network. In the original paper, the two networks used separate encoders.
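
The wiring change behind the shared encoder is simple: compute the backbone features once and feed them to both branches. A toy numpy sketch of that pattern (all names and shapes are illustrative stand-ins, not the repository's actual modules):

```python
import numpy as np

def backbone(image, W):
    """Stand-in for the shared ResNet encoder."""
    return np.tanh(image @ W)

def polygon_head(features, Wp):
    """Stand-in for the convLSTM polygon branch."""
    return features @ Wp

def ggnn_head(features, Wg):
    """Stand-in for the GGNN refinement branch."""
    return features @ Wg

rng = np.random.default_rng(0)
image = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))
Wp = rng.normal(size=(8, 2))
Wg = rng.normal(size=(8, 2))

features = backbone(image, W)          # encoder runs once...
vertices = polygon_head(features, Wp)  # ...and both heads reuse its output
refined = ggnn_head(features, Wg)
```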

Environment Setup

All the code has been run and tested on Ubuntu 16.04 with Python 2.7.12, PyTorch 0.4.0, CUDA 9.0, and TITAN X/Xp and GTX 1080 Ti GPUs.

```
cd <path_to_downloaded_directory>
virtualenv env
source env/bin/activate
pip install -r requirements.txt
export PYTHONPATH=$PWD
```

Tool

Backend

```
python Tool/tool.py --exp Experiments/tool.json --reload <path_to_model> --port <port> --image_dir Tool/frontend/static/img/
```

Frontend

```
cd Tool/frontend/
python -m SimpleHTTPServer
```

Note: Replace `SimpleHTTPServer` with `http.server` (i.e. `python3 -m http.server`) if you are running the server with Python 3

Note: You can set up your own image directory by editing Tool/frontend/static/js/polygon.js and passing that path to Tool/tool.py on the command line. This image directory MUST contain the images listed in Tool/frontend/index.html

Testing Models

```
python Scripts/prediction/generate_annotation.py --exp <path_to_corresponding_experiment> --reload <path_to_checkpoint> --output_dir <path_to_store_predictions>
python Scripts/get_scores.py --pred <path_to_preds> --output <path_to_file_to_save_results>
```
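
The scoring script reports segmentation quality; the standard metric on this benchmark is intersection-over-union between the predicted and ground-truth masks. A minimal sketch of mask IoU (the helper name is illustrative, not the script's actual API):

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two boolean segmentation masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    inter = np.logical_and(pred, gt).sum()
    return inter / union

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True    # 36 pixels
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True  # 36 pixels, 16 shared
print(mask_iou(a, b))  # 16 / 56 ≈ 0.2857
```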

Training Models

Data

Cityscapes

```
python Scripts/data/change_paths.py --city_dir <path_to_downloaded_leftImg8bit_folder> --json_dir <path_to_downloaded_annotation_file> --out_dir <output_dir>
```

Custom Dataset

To train on your own custom dataset, you have one of two options:

Training

Training MLE model

```
python Scripts/train/train_ce.py --exp Experiments/mle.json --resume <optional_if_resuming_training>
```
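
The MLE model is trained with a cross-entropy loss over a spatial grid of candidate vertex positions at each time step. A toy numpy illustration of that per-vertex loss (grid size and names here are illustrative; the actual training code lives in Scripts/train/train_ce.py):

```python
import numpy as np

GRID = 28  # resolution of the vertex prediction grid

def vertex_ce_loss(logits, target_rc):
    """Cross-entropy for one predicted vertex.

    logits    : (GRID, GRID) unnormalized scores over grid cells
    target_rc : (row, col) of the ground-truth vertex
    """
    flat = logits.reshape(-1)
    log_probs = flat - flat.max()
    log_probs = log_probs - np.log(np.exp(log_probs).sum())
    return -log_probs[target_rc[0] * GRID + target_rc[1]]

rng = np.random.default_rng(0)
logits = rng.normal(size=(GRID, GRID))
loss_before = vertex_ce_loss(logits, (3, 7))

logits[3, 7] += 10.0  # concentrate probability mass on the true cell
loss_after = vertex_ce_loss(logits, (3, 7))
assert loss_after < loss_before
```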

Training RL model

```
python Scripts/train/train_rl.py --exp Experiments/rl.json --resume <optional_if_resuming_training>
```
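
The RL stage fine-tunes the model with a policy gradient, using the IoU of the sampled polygon as the reward (self-critical training in the paper). A toy numpy sketch of the REINFORCE-style gradient for a single vertex choice (names are illustrative, not the repository's API):

```python
import numpy as np

def reinforce_grad(logits, sampled, reward, baseline):
    """Gradient of the loss -(reward - baseline) * log p(sampled)
    with respect to the logits of one categorical vertex choice."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad_logp = -p
    grad_logp[sampled] += 1.0  # d log p(sampled) / d logits = onehot - p
    return -(reward - baseline) * grad_logp

g = reinforce_grad(np.zeros(4), sampled=2, reward=1.0, baseline=0.5)
# reward above baseline: gradient descent raises the sampled vertex's logit
assert g[2] < 0
```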

Training Evaluator

```
python Scripts/train/train_evaluator.py --exp Experiments/evaluator.json --resume <optional_if_resuming_training>
```

Training GGNN

```
python Scripts/train/train_ggnn.py --exp Experiments/ggnn.json --resume <optional_if_resuming_training>
```