DRENet: Fast and Accurate Tiny Ship Detection Method

Give this repo a :star: if it helps you!

This repo is the official implementation of DRENet from "A Degraded Reconstruction Enhancement-based Method for Tiny Ship Detection in Remote Sensing Images with A New Large-scale Dataset". The paper can be accessed at [IEEE | Lab Server | ResearchGate]. (Accepted by TGRS 2022)

If you have any questions, please feel free to contact us. You can open an issue or email me at windvchen@gmail.com. Ideas and discussion are also welcome.

Updates

05/30/2023

Thanks for the discussions in #10. The code now supports processing different input resolutions without manually modifying the yaml configurations. To enable this, comment out L139-141 in common.py. (Note that this feature has not been thoroughly tested; if you observe a performance drop, fall back to the manual modification described in #4.)

11/19/2022

We have improved the previous code; the repository now supports detecting images directly, without generating degraded images first (see detect.py).

06/10/2022

Code cleanup is finished and the complete code is now available, along with the weights of our model trained on the LEVIR-Ship dataset.

06/06/2022

We will finish the code cleanup within a week, and make both the code and dataset fully public. Please be patient.

Table of Contents

Introduction

Our Network Structure

We focus on the tiny ship detection task in medium-resolution (MR, about 16 m/pixel) remote sensing (RS) images. Compared with a high-resolution (HR) RS image, an MR image covers a much wider area, thus facilitating quick ship detection. This direction is of great research significance and can enable rapid ship detection over massive volumes of RS imagery.

For this task, we propose an effective Degraded Reconstruction Enhancement Network (DRENet), in which a degraded reconstruction enhancer learns to regress an object-aware blurred version of the input image. Our method is both effective and efficient, outperforming many recent methods.

Results and Trained Model

Models trained on LEVIR-Ship dataset

| Methods | Params (M) | FLOPs (G) | AP | FPS |
| --- | --- | --- | --- | --- |
| YOLOv3 | 61.52 | 99.2 | 69.9 | 61 |
| YOLOv5s | 7.05 | 10.4 | 75.6 [Google Drive \| Baidu Pan (code:ogdm)] | 95 |
| Retinanet | 36.33 | 104.4 | 74.9 | 12 |
| SSD | 24.39 | 175.2 | 52.6 | 25 |
| FasterRCNN | 136.70 | 299.2 | 70.8 | 10 |
| EfficientDet-D0 | 3.84 | 4.6 | 71.3 | 32 |
| EfficientDet-D2 | 8.01 | 20.0 | <ins>80.9</ins> | 21 |
| FCOS | 5.92 | 51.8 | 75.5 | 37 |
| CenterNet | 191.24 | 584.6 | 77.7 | 25 |
| HSFNet | 157.59 | 538.1 | 73.6 | 7 |
| ImYOLOv3 | 62.86 | 101.9 | 72.6 | 51 |
| MaskRCNN+DFR+RFE | 24.99 | 237.8 | 76.2 | 6 |
| DRENet | <ins>4.79</ins> | <ins>8.3</ins> | 82.4 [Google Drive \| Baidu Pan (code:x710)] | <ins>85</ins> |

Preliminaries

Please first download the LEVIR-Ship dataset, then organize it into the following structure:

├── train
        ├── images
            ├── img_1.png
            ├── img_2.png
            ├── ...
        ├── degrade   # images processed by Selective Degradation (see our paper for details)
            ├── degraded_img_1.png
            ├── degraded_img_2.png
            ├── ...
        ├── labels
            ├── label_1.txt
            ├── label_2.txt
            ├── ...
├── val
├── test

Note that apart from the images and labels in the LEVIR-Ship dataset, you also need to generate the degraded images, which serve as the supervision signal for the enhancer (see details in our paper). We provide DegradeGenerate.py to generate the degraded images easily.
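For intuition, here is a minimal NumPy-only sketch of the object-aware degradation idea: blur the whole image, then paste the sharp original pixels back inside each ship bounding box. This is only an illustrative toy (the function name, blur choice, and box format are assumptions); the actual Selective Degradation procedure is the one defined in the paper and implemented in DegradeGenerate.py.

```python
import numpy as np

def selective_degrade(img, boxes, ksize=5):
    """Toy object-aware degradation: box-blur the whole image, then
    paste the sharp original pixels back inside each ship box.

    img   : (H, W, C) uint8 array
    boxes : iterable of (x1, y1, x2, y2) pixel coordinates
    """
    pad = ksize // 2
    f = img.astype(np.float64)
    p = np.pad(f, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # Integral image for an O(1)-per-pixel box filter.
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1, p.shape[2]))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    win = (s[ksize:, ksize:] - s[:-ksize, ksize:]
           - s[ksize:, :-ksize] + s[:-ksize, :-ksize])
    out = win / (ksize * ksize)              # blurred image, shape (H, W, C)
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = f[y1:y2, x1:x2]  # keep ship regions sharp
    return out.astype(np.uint8)
```

The result is an image where the background is smoothed but the ships remain crisp, which is what the enhancer is trained to reconstruct.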

After preparing the dataset as above, change the paths in ship.yaml.
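Since this repo is built on YOLOv5, ship.yaml presumably follows the usual YOLOv5 data-config convention. A hypothetical example (the paths are placeholders; keep the key names from the ship.yaml shipped with the repo):

```yaml
# ship.yaml -- example paths only; point these at your local dataset
train: /path/to/LEVIR-Ship/train/images
val: /path/to/LEVIR-Ship/val/images
test: /path/to/LEVIR-Ship/test/images

nc: 1            # number of classes
names: ['ship']  # class names
```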

(The partitioned dataset, including the degraded images, can all be accessed here)

Environments

Run Details

Train Process

To train our DRENet, run:

python train.py --cfg "./models/DRENet.yaml" --epochs 1000 --workers 8 --batch-size 16 --device 0 --project "./LEVIR-Ship" --data "./data/ship.yaml"

Parameters Description

Others

The current code uses fixed loss-weight balancing, which already achieves good results.

If you want to use automatic weight balancing, search for the keyword weightOptimizer in train.py and uncomment those lines; likewise, in loss.py, uncomment the lines marked with the keyword ForAuto and comment out the corresponding alternatives.

Valid Process

To evaluate our DRENet, you should first train the network or download our provided weights, then run:

python test.py --weights "./DRENet.pt" --project "runs/test" --device 0 --batch-size 16 --data "./data/ship.yaml"

You can control how many detection results are plotted by changing the value of plot_batch_num in test.py. Also ensure that the val path in ship.yaml points to your test set.

Please ensure that the corresponding degraded images exist in the degrade folder. (See issue #4 for more details.)

Detect Process

To output detection results directly, without needing the degraded images, run:

python detect.py --weights "./DRENet.pt" --source "images/" --device 0

where --source is the directory containing the images to detect.

Citation

If you find this paper useful in your research, please consider citing:

@ARTICLE{9791363,
  author={Chen, Jianqi and Chen, Keyan and Chen, Hao and Zou, Zhengxia and Shi, Zhenwei},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={A Degraded Reconstruction Enhancement-based Method for Tiny Ship Detection in Remote Sensing Images with A New Large-scale Dataset},
  year={2022},
  volume={60},
  number={},
  pages={1-14},
  doi={10.1109/TGRS.2022.3180894}}

License

This project is licensed under the GPL-3.0 License. See LICENSE for details.