
Attention! We were not able to open-source the trained models, so you may need to train them yourself.

Corner Proposal Network for Anchor-free, Two-stage Object Detection

by Kaiwen Duan, Lingxi Xie, Honggang Qi, Song Bai, Qingming Huang and Qi Tian

The code to train and evaluate the proposed CPN is available here. For more technical details, please refer to our arXiv paper.

We thank Princeton Vision & Learning Lab for providing the original implementation of CornerNet. We also borrow some code from mmdetection and Objects as Points, and we thank the authors for providing their implementations.

CPN is an anchor-free, two-stage detector that is trained from scratch. On the MS-COCO dataset, CPN achieves an AP of 49.2%, which is competitive among state-of-the-art object detection methods. In scenarios that require faster inference, CPN can be accelerated by replacing the backbone with a lighter one (e.g., DLA-34) and dropping flip augmentation at the inference stage. In this configuration, CPN reports 41.6% AP at 26.2 FPS (with flip augmentation) and 39.7% AP at 43.3 FPS (without).

Abstract

The goal of object detection is to determine the class and location of objects in an image. This paper proposes a novel anchor-free, two-stage framework, which first extracts a number of object proposals by finding potential corner keypoint combinations and then assigns a class label to each proposal by a standalone classification stage. We demonstrate that these two stages are effective solutions for improving recall and precision, respectively, and they can be integrated into an end-to-end network. Our approach, dubbed Corner Proposal Network (CPN), enjoys the ability to detect objects of various scales and also avoids being confused by a large number of false-positive proposals. On the MS-COCO dataset, CPN achieves an AP of 49.2%, which is competitive among state-of-the-art object detection methods. CPN also fits scenarios that demand network efficiency. Equipped with a lighter backbone and with image flip switched off at inference, CPN achieves an AP of 41.6% at 26.2 FPS or 39.7% at 43.3 FPS, surpassing most competitors with the same inference speed.

Introduction

CPN is a framework for object detection with deep convolutional neural networks. You can use the code to train and evaluate a network for object detection on the MS-COCO dataset.

If you encounter any problems in using our code, please contact Kaiwen Duan: kaiwenduan@outlook.com

Architecture

[Figure: CPN network structure]

AP (%) on COCO test-dev

| Backbone | Input Size | AP | AP<sub>50</sub> | AP<sub>75</sub> | AP<sub>S</sub> | AP<sub>M</sub> | AP<sub>L</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DLA34 | ori. | 41.7 | 58.9 | 44.9 | 20.2 | 44.1 | 56.4 |
| DLA34 | ≤ 1.8× | 44.5 | 62.3 | 48.3 | 25.2 | 46.7 | 58.2 |
| HG52 | ori. | 43.9 | 61.6 | 47.5 | 23.9 | 46.3 | 57.1 |
| HG52 | ≤ 1.8× | 45.8 | 63.9 | 49.7 | 26.8 | 48.4 | 59.4 |
| HG104 | ori. | 47.0 | 65.0 | 51.0 | 26.5 | 50.2 | 60.7 |
| HG104 | ≤ 1.8× | 49.2 | 67.3 | 53.7 | 31.0 | 51.9 | 62.4 |

Notes: ori. denotes single-scale testing at the original resolution; ≤ 1.8× denotes multi-scale testing with resolutions of at most 1.8× the original (see the multi-scale configuration files below).

Comparison of AR (%) on the COCO validation set

| Method | Backbone | AR | AR<sub>1+</sub> | AR<sub>2+</sub> | AR<sub>3+</sub> | AR<sub>4+</sub> | AR<sub>5:1</sub> | AR<sub>6:1</sub> | AR<sub>7:1</sub> | AR<sub>8:1</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Faster R-CNN | X-101-64x4d | 57.6 | 73.8 | 77.5 | 79.2 | 86.2 | 43.8 | 43.0 | 34.3 | 23.2 |
| FCOS | X-101-64x4d | 64.9 | 82.3 | 87.9 | 89.8 | 95.0 | 45.5 | 40.8 | 34.1 | 23.4 |
| CornerNet | HG-104 | 66.8 | 85.8 | 92.6 | 95.5 | 98.5 | 50.1 | 48.3 | 40.4 | 36.5 |
| CenterNet | HG-104 | 66.8 | 87.1 | 93.2 | 95.2 | 96.9 | 50.7 | 45.6 | 40.1 | 32.3 |
| CPN | HG-104 | 68.8 | 88.2 | 93.7 | 95.8 | 99.1 | 54.4 | 50.6 | 46.2 | 35.4 |

Notes: the AR<sub>k+</sub> columns report average recall for objects above increasing size thresholds, and the AR<sub>x:1</sub> columns for objects with aspect ratio x:1; see the paper for the exact definitions.

Inference speed on the COCO validation set

| Backbone | Input Size | Flip | AP | FPS |
| --- | --- | --- | --- | --- |
| HG52 | ori. | Yes | 43.8 | 9.9 |
| HG52 | 0.7× ori. | No | 37.7 | 24.0 |
| HG104 | ori. | Yes | 46.8 | 7.3 |
| HG104 | 0.7× ori. | No | 40.5 | 17.9 |
| DLA34 | ori. | Yes | 41.6 | 26.2 |
| DLA34 | ori. | No | 39.7 | 43.3 |

Notes: 0.7× ori. denotes inputs rescaled to 0.7× the original resolution.

Preparation

Please first install Anaconda and create an Anaconda environment using the provided package list.

conda create --name CPN --file conda_packagelist.txt

After you create the environment, activate it.

source activate CPN

Installing some APIs

cd code

and

python setup.py install

Downloading MS COCO Data
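
The download steps are not spelled out here; below is a minimal sketch, assuming the standard archives from cocodataset.org and a data/coco layout. The 2017 release and the directory name are assumptions, so check the configuration files for the split this repo actually expects.

```bash
# Sketch only: fetch MS-COCO (2017 release assumed) into data/coco.
mkdir -p data/coco && cd data/coco
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip '*.zip'
```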

Training and Evaluation

To train CPN104, CPN52, or CPN_DLA34:

python train.py --cfg_file HG104

or

python train.py --cfg_file HG52

or

python train.py --cfg_file DLA34

We provide the configuration files config/HG104.json, config/HG52.json, and config/DLA34.json in this repo. If you want to train your own CPN, please adjust the batch size in the corresponding configuration file to accommodate the number of GPUs that are available to you. Note that if you want to train DLA34, you first need to download the pre-trained model and put it under CPN/cache/nnet/DLA34/pretrain, as sketched below.
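
A sketch of the expected layout for the DLA-34 pre-trained model; the weights filename below is a placeholder, since this page does not name the actual file:

```bash
# Hypothetical filename: substitute the DLA-34 weights file you downloaded.
mkdir -p CPN/cache/nnet/DLA34/pretrain
mv /path/to/dla34_pretrained_weights CPN/cache/nnet/DLA34/pretrain/
```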

To use the trained model:

python test.py --cfg_file HG104 --testiter 220000 --split <split>

or

python test.py --cfg_file HG52 --testiter 270000 --split <split>

or

python test.py --cfg_file DLA34 --testiter 270000 --split <split>

where <split> = validation or testing.

You need to download the corresponding models and put them under CPN/cache/nnet.

You can add --no_flip for testing without flip augmentation.
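
For example, the DLA34 no-flip setting from the speed table above (39.7% AP at 43.3 FPS) corresponds to:

```bash
python test.py --cfg_file DLA34 --testiter 270000 --split validation --no_flip
```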

You can also add --debug to visualize some detection results (uncomment lines 152 to 182 in CPN/code/test/coco.py).

We also include configuration files for multi-scale evaluation: HG104-multi_scale.json, HG52-multi_scale.json, and DLA34-multi_scale.json.

To use a multi-scale configuration file:

python test.py --cfg_file HG104 --testiter <iter> --split <split> --suffix multi_scale

or

python test.py --cfg_file HG52 --testiter <iter> --split <split> --suffix multi_scale

or

python test.py --cfg_file DLA34 --testiter <iter> --split <split> --suffix multi_scale
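
For example, a concrete multi-scale run for HG104, reusing the 220000 iteration and the validation split from the single-scale example above:

```bash
python test.py --cfg_file HG104 --testiter 220000 --split validation --suffix multi_scale
```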