DeepInfrared


DeepInfrared aims to be an open benchmark for infrared small target detection, currently consisting of:

  1. A public infrared small target dataset (SIRST-V2);
  2. A specially designed evaluation metric (mNoCoAP; a conceptual sketch follows this list);
  3. An open-source toolbox based on PyTorch (DeepInfrared).
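
As a rough, purely illustrative sketch of how an AP-style metric with a contrast-based matching criterion can be assembled: the normalized-contrast score (NoCo) itself is defined in the OSCAR paper and is stubbed out below, and the function names and threshold grid are assumptions, not the toolkit's API.

```python
# Hypothetical sketch: an AP-style metric that matches detections to ground
# truth by a normalized-contrast score (NoCo) instead of IoU. The real NoCo
# definition lives in the OSCAR paper; the 0.1:0.1:0.9 threshold grid and
# all names here are illustrative assumptions.
import numpy as np

def average_precision(matched, scores, num_gt):
    """Classic 11-point interpolated AP from match flags and confidences."""
    order = np.argsort(-scores)
    tp = np.cumsum(matched[order].astype(float))
    fp = np.cumsum((~matched[order]).astype(float))
    recall = tp / max(num_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-12)
    return float(np.mean([precision[recall >= t].max(initial=0.0)
                          for t in np.linspace(0.0, 1.0, 11)]))

def mnocoap(noco, scores, num_gt, thresholds=np.arange(0.1, 1.0, 0.1)):
    """Mean AP over a grid of NoCo matching thresholds (assumed grid).

    noco:   (N,) best NoCo score of each detection against any ground truth
    scores: (N,) detection confidence scores
    """
    return float(np.mean([average_precision(noco >= t, scores, num_gt)
                          for t in thresholds]))
```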

Introduction

Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to a lack of inherent characteristics, imprecise bounding box regression, a scarcity of real-world datasets, and sensitive localization evaluation. In this paper, we propose a comprehensive solution to these challenges. First, we find that the existing anchor-free label assignment method is prone to mislabeling small targets as background, leading to their omission by detectors. To overcome this issue, we propose an all-scale pseudo-box-based label assignment scheme that relaxes the constraints on scale and decouples the spatial assignment from the size of the ground-truth target. Second, motivated by the structured prior of feature pyramids, we introduce the one-stage cascade refinement network (OSCAR), which uses the high-level head as soft proposals for the low-level refinement head. This allows OSCAR to process the same target in a cascade coarse-to-fine manner. Finally, we present a new research benchmark for infrared small target detection, consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets, the normalized contrast evaluation metric, and the DeepInfrared toolkit for detection. We conduct extensive ablation studies to evaluate the components of OSCAR and compare its performance to state-of-the-art model-driven and data-driven methods on the SIRST-V2 benchmark. Our results demonstrate that a top-down cascade refinement framework can improve the accuracy of infrared small target detection without sacrificing efficiency.
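
To make the all-scale pseudo-box assignment above concrete, here is a minimal sketch under stated assumptions: the function name, the square (Chebyshev-distance) pseudo box, and the default `radius` are illustrative choices, not the toolkit's actual implementation.

```python
# Minimal sketch of an all-scale pseudo-box label assignment: every
# feature-map point within a fixed-size pseudo box around a ground-truth
# center is labeled positive, independent of the target's actual extent.
# All names and the radius value are illustrative assumptions.
import torch

def pseudo_box_assign(points, gt_centers, radius=8.0):
    """points: (N, 2) (x, y) locations gathered from all pyramid levels;
    gt_centers: (M, 2) ground-truth target centers;
    returns: (N,) assigned ground-truth index per point, -1 for background."""
    if gt_centers.numel() == 0:
        return points.new_full((points.size(0),), -1, dtype=torch.long)
    # L-infinity distance places each point inside a square pseudo box whose
    # size is decoupled from the ground-truth target's size.
    dist = (points[:, None, :] - gt_centers[None, :, :]).abs().max(dim=-1).values
    min_dist, assigned = dist.min(dim=1)   # nearest center for each point
    assigned[min_dist > radius] = -1       # outside every pseudo box -> background
    return assigned
```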

For details, see OSCAR. Speed and accuracy figures are listed in the benchmark tables below (see Overview of Benchmark and Model Zoo).

SIRST-V2 Dataset

As part of the DeepInfrared ecosystem, we provide the SIRST-V2 dataset as a benchmark. SIRST-V2 is constructed specifically for single-frame infrared small target detection; its images are selected from thousands of infrared sequences covering different scenarios.


Annotation formats available:

The dataset can be downloaded here.

The DeepInfrared Toolkit

Installation

Please refer to Installation for installation instructions.

Getting Started

Train

```shell
# Assume you are in the root directory of this project, have activated your
# virtual environment if needed, and have the SIRST-V2 dataset in 'data/sirst/'.

python tools/train_det.py \
    configs/oscar/sota/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco.py \
    --gpu-id 0 \
    --work-dir work_dirs/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco
```

Inference

```shell
python tools/test_det.py \
    configs/oscar/sota/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco.py \
    work_dirs/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco/best.pth \
    --eval "mNoCoAP"
```
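
For scripted, single-image inference, the following sketch assumes DeepInfrared preserves MMDetection's high-level Python API (`init_detector` / `inference_detector`), since the toolkit is built on MMDetection; the image path is hypothetical.

```python
# Hedged sketch: single-image inference via MMDetection's high-level API,
# assuming DeepInfrared keeps these helpers (it is built on MMDetection).
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/oscar/sota/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco.py'
checkpoint_file = 'work_dirs/oscar_w_noco_head_r18_caffe_fpn_p2_gn-head_1x_sirst_det2noco/best.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
# 'demo/sirst_example.png' is a hypothetical image path for illustration.
result = inference_detector(model, 'demo/sirst_example.png')
```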

Overview of Benchmark and Model Zoo

For your convenience, we provide the following trained models.

| Model | mNoCoAP | Config | Log | GFLOPS | Download |
| :---- | :------ | :----- | :-- | :----- | :------- |
| faster_rcnn_r50_fpn_1x | 0.7141 | config | log | - | baidu |
| fcos_rfla_r50_kld_1x | 0.7882 | config | log | - | baidu |
| oscar_r18_fpn_p2_128_1x | 0.8352 | config | log | 25.36 | baidu |
| oscar_r18_fpn_p2_256_1x | 0.8502 | config | log | 68.32 | baidu |

For traditional methods, e.g., low-rank-based or local-contrast-based approaches, we provide the predicted target images:

| Method | mNoCoAP | Download |
| :----- | :------ | :------- |
| LCM | 0.207 | baidu |
| WLDM | 0.112 | baidu |
| FKRW | 0.278 | baidu |
| IPI | 0.377 | baidu |
| MPCM | 0.322 | baidu |
| NIPPS | 0.335 | baidu |
| RIPT | 0.293 | baidu |

Acknowledgement

Thanks to the MMDetection team for their wonderful open-source project!

Citation

If you find DeepInfrared useful in your research, please consider citing this project.

```bibtex
@article{dai2022oscar,
  title={One-Stage Cascade Refinement Networks for Infrared Small Target Detection},
  author={Yimian Dai and Xiang Li and Fei Zhou and Yulei Qian and Yaohong Chen and Jian Yang},
  journal={arXiv preprint},
  year={2022}
}
```