ExtremeNet: Training and Evaluation Code

Code for bottom-up object detection by grouping extreme and center points:

Bottom-up Object Detection by Grouping Extreme and Center Points,
Xingyi Zhou, Jiacheng Zhuo, Philipp Krähenbühl,
CVPR 2019 (arXiv 1901.08043)

This project is developed on top of the CornerNet code and contains code from Deep Extreme Cut (DEXTR). Thanks to the original authors!

Contact: zhouxy2017@gmail.com. Any questions or discussions are welcome!

Abstract

With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
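The grouping step mentioned above is purely geometric: a candidate group of four extreme points is kept as a detection only if the center heatmap fires at the geometric center of the box they span. A minimal sketch of that center-grouping check, with illustrative names and threshold (schematic only, not the repository's actual implementation):

~~~
import numpy as np

def center_score(center_heatmap, top, left, bottom, right):
    """Score a candidate extreme-point group by reading the center heatmap
    at the geometric center of the four points. Points are (x, y) pairs;
    the heatmap is a 2D array indexed as [y, x]."""
    cx = (left[0] + right[0]) / 2.0   # x-center spans the left/right extremes
    cy = (top[1] + bottom[1]) / 2.0   # y-center spans the top/bottom extremes
    return center_heatmap[int(round(cy)), int(round(cx))]

# A group is accepted when its center response clears a threshold, e.g.:
# keep = center_score(heatmap, t, l, b, r) > 0.1  # threshold is illustrative
~~~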

Installation

The code was tested with Anaconda Python 3.6 and PyTorch v0.4.1. After installing Anaconda:

  1. Clone this repo:

    ExtremeNet_ROOT=/path/to/clone/ExtremeNet
    git clone --recursive https://github.com/xingyizhou/ExtremeNet $ExtremeNet_ROOT
    
  2. Create an Anaconda environment using the provided package list from CornerNet:

    conda create --name CornerNet --file conda_packagelist.txt
    source activate CornerNet
    
  3. Compile NMS (originally from Faster R-CNN and Soft-NMS):

    cd $ExtremeNet_ROOT/external
    make
    
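Before moving on, it can help to confirm the environment matches the tested setup; a quick sanity check (this step is not part of the original instructions):

~~~
python -c "import torch; print(torch.__version__)"   # expect 0.4.1
~~~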

Demo
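With a pre-trained model in place under cache/, the demo can be invoked along these lines (the script name and flag are based on the repository layout and may differ; check the repo for the exact interface):

~~~
python demo.py --demo /path/to/image/or/folder
~~~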

Data preparation

To reproduce the results in the paper for benchmark evaluation and training, you will need to set up the dataset as follows.

Installing MS COCO APIs

~~~
cd $ExtremeNet_ROOT/data
git clone https://github.com/cocodataset/cocoapi.git coco
cd $ExtremeNet_ROOT/data/coco/PythonAPI
make
python setup.py install --user
~~~
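If the build succeeded, the Python API should be importable; a quick sanity check (not part of the original instructions):

~~~
python -c "from pycocotools.coco import COCO; print('cocoapi OK')"
~~~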

Downloading MS COCO Data

Download the COCO 2017 train and val images and the corresponding annotation files from the COCO website (http://cocodataset.org) and place them under data/coco/ (annotations in data/coco/annotations/). Then generate extreme point annotations from the segmentation masks:

~~~
cd $ExtremeNet_ROOT/tools/
python gen_coco_extreme_points.py
~~~

It generates instances_extreme_train2017.json and instances_extreme_val2017.json in data/coco/annotations/.
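Conceptually, the conversion script reads each instance's segmentation and records its top-most, left-most, bottom-most, and right-most points. A schematic version of that computation (illustrative only; the actual script works over the full COCO annotation files and handles degenerate cases):

~~~
import numpy as np

def extreme_points(polygon):
    """Return (top, left, bottom, right) extreme points of a COCO-style
    polygon given as a flat [x1, y1, x2, y2, ...] list."""
    pts = np.asarray(polygon, dtype=np.float32).reshape(-1, 2)
    top    = pts[pts[:, 1].argmin()]  # smallest y
    left   = pts[pts[:, 0].argmin()]  # smallest x
    bottom = pts[pts[:, 1].argmax()]  # largest y
    right  = pts[pts[:, 0].argmax()]  # largest x
    return top, left, bottom, right
~~~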

Benchmark Evaluation

After downloading our pre-trained model and the dataset, you can run the benchmark evaluation on COCO, sketched below.
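Assuming the repository follows the CornerNet convention of a test.py entry point named after the config (an assumption; check the repository for the exact command):

~~~
python test.py ExtremeNet
~~~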

Training

You will need 5x 12GB GPUs to reproduce our training. Our model is fine-tuned from the 10-GPU pre-trained CornerNet model. After downloading the CornerNet model and putting it in cache/, run

~~~
python train.py ExtremeNet
~~~

You can resume a partially trained model with:

~~~
python train.py ExtremeNet --iter xxxx
~~~


Citation

If you find this model useful for your research, please use the following BibTeX entry.

@inproceedings{zhou2019bottomup,
  title={Bottom-up Object Detection by Grouping Extreme and Center Points},
  author={Zhou, Xingyi and Zhuo, Jiacheng and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2019}
}

Please also consider citing the CornerNet paper (from which this code is heavily borrowed) and the Deep Extreme Cut paper (if you use the instance segmentation part).

@inproceedings{law2018cornernet,
  title={CornerNet: Detecting Objects as Paired Keypoints},
  author={Law, Hei and Deng, Jia},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={734--750},
  year={2018}
}

@inproceedings{Man+18,
  title={Deep Extreme Cut: From Extreme Points to Object Segmentation},
  author={K.K. Maninis and S. Caelles and J. Pont-Tuset and L. {Van Gool}},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}