Dense Nested Attention Network for Infrared Small Target Detection

Good News! Our paper has been accepted by IEEE Transactions on Image Processing. Our team will release more interesting works and applications on SIRST soon. Please keep following our repository.

Algorithm Introduction

Dense Nested Attention Network for Infrared Small Target Detection, Boyang Li, Chao Xiao, Longguang Wang, and Yingqian Wang, arXiv 2021 [Paper]

In this paper, we propose a dense nested attention network (DNANet) for accurate single-frame infrared small target detection and develop an open-sourced infrared small target dataset (namely, NUDT-SIRST). Experiments on both public datasets (e.g., NUAA-SIRST, NUST-SIRST) and our self-developed dataset demonstrate the effectiveness of our method. The contributions of this paper are as follows:

  1. We propose a dense nested attention network (namely, DNANet) to maintain small targets in deep layers (a conceptual sketch follows this list).

  2. We develop an open-sourced dataset (i.e., NUDT-SIRST) with rich targets.

  3. DNANet performs well on all existing SIRST datasets.
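
The dense nested structure follows the spirit of UNet++ (see References): each intermediate node fuses same-resolution skip features with an up-sampled deeper feature, and an attention step re-weights the fused response so that small targets are not washed out in deep layers. The sketch below illustrates one such node in PyTorch; it is only a conceptual example, not the authors' implementation, and all class and parameter names are ours.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class NestedNode(nn.Module):
    """One node of a dense nested decoder: concatenate all same-resolution
    skip features with the up-sampled deeper feature, fuse them with a conv
    block, then re-weight channels so small-target responses survive."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.attention = ChannelAttention(out_channels)

    def forward(self, *features):
        # All inputs must share the same spatial size (deeper ones upsampled first).
        return self.attention(self.fuse(torch.cat(features, dim=1)))

For instance, a node that receives two 32-channel skip features and one up-sampled 64-channel deeper feature would be built as NestedNode(32 + 32 + 64, 32).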

Dataset Introduction

NUDT-SIRST is a synthesized dataset that contains 1327 images with a resolution of 256×256. The advantages of a synthesized dataset over a real dataset lie in three aspects (a sketch of one possible loading pipeline follows this list):

  1. Accurate annotations.

  2. Massive generation with low cost (i.e., time and money).

  3. Numerous target categories, rich target sizes, and diverse clutter backgrounds.
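
Below is a minimal sketch of how such a dataset could be fed to a training loop; the directory layout (images/, masks/), the PNG naming, and the plain-text split file are illustrative assumptions, not necessarily the released layout of NUDT-SIRST.

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class SIRSTDataset(Dataset):
    """Illustrative loader: one grayscale IR image and one binary target mask
    per sample, with the train/test split given by a plain .txt name list."""
    def __init__(self, root, split_file, size=256):
        with open(os.path.join(root, split_file)) as f:
            self.names = [line.strip() for line in f if line.strip()]
        self.root, self.size = root, size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        # Assumed layout: <root>/images/<name>.png and <root>/masks/<name>.png
        img = Image.open(os.path.join(self.root, "images", name + ".png")).convert("L")
        mask = Image.open(os.path.join(self.root, "masks", name + ".png")).convert("L")
        img = img.resize((self.size, self.size), Image.BILINEAR)
        mask = mask.resize((self.size, self.size), Image.NEAREST)
        img = torch.from_numpy(np.array(img, dtype=np.float32) / 255.0).unsqueeze(0)
        mask = (torch.from_numpy(np.array(mask, dtype=np.float32)) > 0).float().unsqueeze(0)
        return img, mask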

Citation

If you find the code useful, please consider citing our paper using the following BibTeX entry.

@article{DNANet,
  title={Dense nested attention network for infrared small target detection},
  author={Li, Boyang and Xiao, Chao and Wang, Longguang and Wang, Yingqian and Lin, Zaiping and Li, Miao and An, Wei and Guo, Yulan},
  journal={IEEE Transactions on Image Processing},
  year={2023},
  volume={32},
  pages={1745-1758},
  publisher={IEEE}
}

Prerequisites

Usage

On Windows:

Click on train.py and run it. 

On Ubuntu:

1. Train.

python train.py --base_size 256 --crop_size 256 --epochs 1500 --dataset [dataset-name] --split_method 50_50 --model [model name] --backbone resnet_18  --deep_supervision True --train_batch_size 16 --test_batch_size 16 --mode TXT
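
For example, a training run on NUDT-SIRST with the DNANet model could look like the following (assuming "DNANet" is the registered value for --model and the 50_50 split file is available for NUDT-SIRST):

python train.py --base_size 256 --crop_size 256 --epochs 1500 --dataset NUDT-SIRST --split_method 50_50 --model DNANet --backbone resnet_18 --deep_supervision True --train_batch_size 16 --test_batch_size 16 --mode TXT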

2. Test.

python test.py --base_size 256 --crop_size 256 --st_model [trained model path] --model_dir [model_dir] --dataset [dataset-name] --split_method 50_50 --model [model name] --backbone resnet_18  --deep_supervision True --test_batch_size 1 --mode TXT 

(Optional 1) Visualize your predictions.

python visulization.py --base_size 256 --crop_size 256 --st_model [trained model path] --model_dir [model_dir] --dataset [dataset-name] --split_method 50_50 --model [model name] --backbone resnet_18  --deep_supervision True --test_batch_size 1 --mode TXT 

(Optional 2) Test and visualization.

python test_and_visulization.py --base_size 256 --crop_size 256 --st_model [trained model path] --model_dir [model_dir] --dataset [dataset-name] --split_method 50_50 --model [model name] --backbone resnet_18  --deep_supervision True --test_batch_size 1 --mode TXT 

(Optional 3) Demo (with your own IR image).

python demo.py --base_size 256 --crop_size 256 --img_demo_dir [img_demo_dir] --img_demo_index [image_name]  --model [model name] --backbone resnet_18  --deep_supervision True --test_batch_size 1 --mode TXT  --suffix [img_suffix]
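
Under the hood, single-image inference amounts to normalizing the input, running the network, and thresholding the sigmoid of its output. The sketch below shows that step in PyTorch; the function and its parameters are placeholders for illustration rather than the repository's API, and with deep supervision enabled we simply take the final prediction map.

import numpy as np
import torch
from PIL import Image

def detect_single_image(model, img_path, device="cuda", threshold=0.5, size=256):
    """Run a trained detector on one IR image and return a binary mask.
    `model` is any network mapping a 1x1xHxW tensor to a logit map
    (or, with deep supervision, to a list of logit maps)."""
    model = model.to(device).eval()
    img = Image.open(img_path).convert("L").resize((size, size), Image.BILINEAR)
    x = torch.from_numpy(np.array(img, dtype=np.float32) / 255.0)[None, None].to(device)
    with torch.no_grad():
        out = model(x)
        if isinstance(out, (list, tuple)):  # deep supervision: use the final map
            out = out[-1]
        pred = (torch.sigmoid(out) > threshold).float()
    return pred.squeeze().cpu().numpy()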

Results and Trained Models

Qualitative Results

Quantitative Results

on NUDT-SIRST

Model            | mIoU (×10⁻²) | Pd (×10⁻²) | Fa (×10⁻⁶)
-----------------|--------------|------------|------------
DNANet-VGG-10    | 85.23        | 96.95      | 6.782
DNANet-ResNet-10 | 86.36        | 97.39      | 6.897
DNANet-ResNet-18 | 87.09        | 98.73      | 4.223
DNANet-ResNet-18 | 88.61        | 98.42      | 4.30 [Weights]
DNANet-ResNet-34 | 86.87        | 97.98      | 3.710

on NUAA-SIRST

Model            | mIoU (×10⁻²) | Pd (×10⁻²) | Fa (×10⁻⁶)
-----------------|--------------|------------|------------
DNANet-VGG-10    | 74.96        | 97.34      | 26.73
DNANet-ResNet-10 | 76.24        | 97.71      | 12.80
DNANet-ResNet-18 | 77.47        | 98.48      | 2.353
DNANet-ResNet-18 | 79.26        | 98.48      | 2.30 [Weights]
DNANet-ResNet-34 | 77.54        | 98.10      | 2.510

on NUST-SIRST

Model            | mIoU (×10⁻²) | Pd (×10⁻²) | Fa (×10⁻⁶)
-----------------|--------------|------------|------------
DNANet-ResNet-18 | 46.73        | 81.29      | 33.87 [Weights]
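
In these tables, mIoU is the pixel-level intersection-over-union between predicted and ground-truth masks, Pd is the target-level probability of detection, and Fa is the false-alarm rate (false-alarm pixels divided by total pixels). The sketch below shows one way such numbers can be computed per image; the centroid-matching rule and the 3-pixel threshold are our assumptions and may differ from the paper's exact evaluation protocol.

import numpy as np
from scipy import ndimage

def iou(pred, gt):
    """Pixel-level IoU of two binary masks; mIoU averages this over a test set."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

def pd_fa(pred, gt, dist_thresh=3):
    """Target-level Pd and pixel-level Fa for one image (illustrative rule).
    A ground-truth target counts as detected if the centroid of some predicted
    region lies within `dist_thresh` pixels of its centroid."""
    pred_lbl, n_pred = ndimage.label(pred)
    gt_lbl, n_gt = ndimage.label(gt)
    pred_centers = ndimage.center_of_mass(pred, pred_lbl, range(1, n_pred + 1))
    gt_centers = ndimage.center_of_mass(gt, gt_lbl, range(1, n_gt + 1))

    detected, matched = 0, set()
    for gc in gt_centers:
        for i, pc in enumerate(pred_centers, start=1):
            if np.hypot(gc[0] - pc[0], gc[1] - pc[1]) <= dist_thresh:
                detected += 1
                matched.add(i)
                break
    # False alarms: predicted pixels that belong to no matched region.
    fa_pixels = sum(int((pred_lbl == i).sum()) for i in range(1, n_pred + 1) if i not in matched)
    pd = detected / n_gt if n_gt > 0 else 1.0
    fa = fa_pixels / pred.size
    return pd, fa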

*This code builds heavily on ACM. Thanks to Yimian Dai.

*The overall repository style builds heavily on PSA. Thanks to jiwoon-ahn.

References

  1. Dai Y, Wu Y, Zhou F, et al. Asymmetric contextual modulation for infrared small target detection[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021: 950-959. [code]

  2. Zhou Z, Siddiquee M M R, Tajbakhsh N, et al. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2019, 39(6): 1856-1867. [code]

  3. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778. [code]