Align Deep Features for Oriented Object Detection

Align Deep Features for Oriented Object Detection,
Jiaming Han<sup>*</sup>, Jian Ding<sup>*</sup>, Jie Li, Gui-Song Xia,
arXiv preprint (arXiv:2008.09397) / TGRS (IEEE Xplore).

The repo is based on mmdetection.

Two versions are provided here: the Original version and v20210104. We recommend using v20210104 (i.e. the master branch).

Introduction

The past decade has witnessed significant progress on detecting objects in aerial images, which are often distributed with large scale variations and arbitrary orientations. However, most existing methods rely on heuristically defined anchors with different scales, angles, and aspect ratios, and usually suffer from severe misalignment between anchor boxes and axis-aligned convolutional features, which leads to a common inconsistency between classification score and localization accuracy. To address this issue, we propose a Single-shot Alignment Network (S<sup>2</sup>A-Net) consisting of two modules: a Feature Alignment Module (FAM) and an Oriented Detection Module (ODM). The FAM generates high-quality anchors with an Anchor Refinement Network and adaptively aligns the convolutional features according to the corresponding anchor boxes with a novel Alignment Convolution. The ODM first adopts active rotating filters to encode the orientation information, then produces orientation-sensitive and orientation-invariant features to alleviate the inconsistency between classification score and localization accuracy. Besides, we further explore an approach to detecting objects in large-size images, which leads to a better speed-accuracy trade-off. Extensive experiments demonstrate that our method achieves state-of-the-art performance on two commonly used aerial object datasets (i.e., DOTA and HRSC2016) while keeping high efficiency.
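To illustrate the Alignment Convolution idea only (this is a conceptual sketch with assumed names, not the repository's AlignConv implementation, which is a custom deformable-convolution-style op): given a refined rotated anchor `(x, y, w, h, theta)`, the kernel's regular 3x3 sampling grid is scaled to the anchor's size and rotated by its angle, and the per-location offsets are the difference between that aligned grid and the standard grid.

```python
import numpy as np

def align_conv_offsets(anchor, stride=8, kernel=3):
    """Conceptual sampling offsets for a kernel x kernel Alignment Convolution.

    anchor: (x, y, w, h, theta) in image coordinates, theta in radians.
    Returns an array of shape (kernel*kernel, 2): the aligned sampling
    locations minus the standard conv grid, in feature-map units.
    """
    x, y, w, h, theta = anchor
    k = kernel // 2
    # Standard (axis-aligned) conv grid around the anchor centre, in feature-map units.
    ys, xs = np.meshgrid(np.arange(-k, k + 1), np.arange(-k, k + 1), indexing="ij")
    base = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)
    # Scale the grid to the anchor's size and rotate it by the anchor's angle.
    scaled = base * np.array([w / kernel, h / kernel])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    aligned = scaled @ rot.T / stride  # back to feature-map units
    # Deformable-conv-style offsets: aligned locations minus the regular grid.
    return aligned - base

# A square, axis-aligned anchor whose size matches the kernel's receptive field
# (w = h = kernel * stride, theta = 0) needs no deformation at all.
offs = align_conv_offsets((100.0, 100.0, 24.0, 24.0, 0.0), stride=8, kernel=3)
print(np.allclose(offs, 0))  # True: the standard grid already fits the anchor
```

A rotated or differently sized anchor produces nonzero offsets, which is exactly the misalignment the FAM corrects before the ODM refines the prediction.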

Changelog

Benchmark and model zoo

| Model | Backbone | MS | Rotate | Lr schd | Inf time (fps) | box AP (ori./now) | Download |
|:-----:|:--------:|:--:|:------:|:-------:|:--------------:|:-----------------:|:--------:|
| RetinaNet | R-50-FPN | - | - | 1x | 16.0 | 68.05/68.40 | model |
| S<sup>2</sup>A-Net | R-50-FPN | - | - | 1x | 16.0 | 74.12/73.99 | model |
| S<sup>2</sup>A-Net | R-50-FPN | ✓ | ✓ | 1x | 16.0 | 79.42 | model |
| S<sup>2</sup>A-Net | R-101-FPN | ✓ | ✓ | 1x | 12.7 | 79.15 | model |

Note that the mAP reported here differs slightly from the original paper. All results are reported on the DOTA-v1.0 test set. All checkpoints here were trained with the Original version and are not compatible with the updated version.

| Model | Data | Backbone | MS | Rotate | Lr schd | box AP | Download |
|:-----:|:----:|:--------:|:--:|:------:|:-------:|:------:|:--------:|
| RetinaNet | HRSC2016 | R-50-FPN | - | ✓ | 6x | 81.63 | cfg model log |
| CS<sup>2</sup>A-Net-1s | HRSC2016 | R-50-FPN | - | ✓ | 4x | 84.58 | cfg model log |
| CS<sup>2</sup>A-Net-2s | HRSC2016 | R-50-FPN | - | ✓ | 3x | 89.96 | cfg model log |
| S<sup>2</sup>A-Net | HRSC2016 | R-101-FPN | - | ✓ | 3x | 90.00 | cfg model |
| CS<sup>2</sup>A-Net-1s | DOTA | R-50-FPN | - | - | 1x | 69.06 | cfg model log |
| CS<sup>2</sup>A-Net-2s | DOTA | R-50-FPN | - | - | 1x | 73.67 | cfg model log |
| S<sup>2</sup>A-Net | DOTA | R-50-FPN | - | - | 1x | 74.04 | cfg model |
| CS<sup>2</sup>A-Net-2s-IoU | DOTA | R-50-FPN | - | - | 1x | 74.58 | cfg model log |

Note:

  1. All models are trained on 4 GPUs with an initial learning rate of 0.01. If you train the model with fewer/more GPUs, remember to scale the learning rate linearly with the number of GPUs (lr = 0.0025 × #GPUs), e.g., 0.0025 for 1 GPU, 0.01 for 4 GPUs, and 0.02 for 8 GPUs.

  2. CS<sup>2</sup>A-Net-ns indicates Cascade S<sup>2</sup>A-Net with n stages. For more information, please refer to CASCADE_S2ANET.md

  3. IoU means IoU Loss for bbox regression.

  4. The checkpoints of S<sup>2</sup>A-Net are converted from the original version.

  5. If you cannot access Google Drive, a BaiduYun download link can be found here with extraction code ABCD.
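The linear learning-rate scaling rule in note 1 can be sketched as a small helper (the function name is an assumption for illustration; in practice you would set `optimizer.lr` in the mmdetection config accordingly):

```python
def scaled_lr(num_gpus, base_lr_per_gpu=0.0025):
    """Linear scaling rule from note 1: lr = 0.0025 * number of GPUs."""
    return base_lr_per_gpu * num_gpus

print(scaled_lr(4))  # 0.01, the default 4-GPU setting
print(scaled_lr(1))  # 0.0025
print(scaled_lr(8))  # 0.02
```

This assumes one sample per GPU stays fixed, so the effective batch size (and hence the lr) grows linearly with the GPU count.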

Installation

Please refer to install.md for installation and dataset preparation.

Getting Started

Please see getting_started.md for the basic usage of MMDetection.

Citation

```
@article{han2021align,
  title={Align Deep Features for Oriented Object Detection},
  author={Han, Jiaming and Ding, Jian and Li, Jie and Xia, Gui-Song},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2021},
  pages={1--11},
  doi={10.1109/TGRS.2021.3062048}
}

@inproceedings{xia2018dota,
  title={DOTA: A large-scale dataset for object detection in aerial images},
  author={Xia, Gui-Song and Bai, Xiang and Ding, Jian and Zhu, Zhen and Belongie, Serge and Luo, Jiebo and Datcu, Mihai and Pelillo, Marcello and Zhang, Liangpei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3974--3983},
  year={2018}
}

@inproceedings{Ding_2019_CVPR,
  title={Learning RoI Transformer for Oriented Object Detection in Aerial Images},
  author={Ding, Jian and Xue, Nan and Long, Yang and Xia, Gui-Song and Lu, Qikai},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2019}
}

@article{chen2019mmdetection,
  title={MMDetection: Open mmlab detection toolbox and benchmark},
  author={Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and Liu, Ziwei and Xu, Jiarui and others},
  journal={arXiv preprint arXiv:1906.07155},
  year={2019}
}
```