[ICCV 2023] Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning

:loudspeaker: Introduction

This is the official implementation of our paper "Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning", which has been accepted by ICCV 2023; the preprint version is available on arXiv.

:ferris_wheel: Dependencies

:open_file_folder: Datasets

Our work is based on the large-scale small object detection benchmark SODA, which comprises two sub-datasets, SODA-D and SODA-A. Download the dataset(s) from the corresponding links on the Dataset Homepage.

The data preparation for SODA differs slightly from that of conventional object detection datasets, as it requires an initial step of splitting the original images. Scripts to obtain the sub-images of SODA-D can be found at tools/img_split. For SODA-A, please refer to SODA-mmrotate. For more details about SODA, please refer to the Dataset Homepage.
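As a rough illustration, a splitting run might look like the sketch below; the script name, JSON config path, and flag are assumptions made only for illustration, so check tools/img_split for the actual interface and configuration files.

```bash
# Hypothetical invocation of the SODA-D image-splitting tool.
# The script name, config path, and --base-json flag are assumptions --
# consult tools/img_split in this repository for the real interface.
python tools/img_split/img_split.py \
    --base-json tools/img_split/split_configs/split_train.json
```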


🛠️ Install

This repository is built on MMDetection 2.26.0, which can be installed by running the following commands. Please ensure that all dependencies have been satisfied before setting up the environment.

git clone https://github.com/shaunyuan22/CFINet
cd CFINet
pip install -v -e .

Moreover, please refer to SODA-mmrotate for MMRotate installation if you want to perform evaluation on the SODA-A dataset.
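For reference, a typical MMDetection 2.x environment might be set up as sketched below; the Python, PyTorch, and mmcv-full choices are illustrative assumptions, so pick builds that match your CUDA setup and MMDetection's version requirements.

```bash
# Illustrative environment setup for MMDetection 2.x; versions are
# assumptions -- match them to your CUDA/PyTorch setup before use.
conda create -n cfinet python=3.8 -y
conda activate cfinet
# Install a PyTorch build that matches your CUDA version.
pip install torch torchvision
# MMDetection 2.x depends on mmcv-full; openmim resolves a compatible wheel.
pip install -U openmim
mim install mmcv-full
# Install this repository in editable mode.
git clone https://github.com/shaunyuan22/CFINet
cd CFINet
pip install -v -e .
```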

🚀 Training

python ./tools/train.py ${CONFIG_FILE} 
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM}
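For example, single-GPU and 4-GPU training runs might look like the following; the config path is an illustrative placeholder, so list the configs directory of this repository for the exact file name.

```bash
# Illustrative training commands; the config path below is a placeholder --
# substitute the actual CFINet config shipped in the configs/ directory.
python ./tools/train.py configs/cfinet/faster_rcnn_r50_fpn_cfinet_1x_sodad.py
bash ./tools/dist_train.sh configs/cfinet/faster_rcnn_r50_fpn_cfinet_1x_sodad.py 4
```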

📈 Evaluation

python ./tools/test.py ${CONFIG_FILE} ${WORK_DIR} --eval bbox
bash ./tools/dist_test.sh ${CONFIG_FILE} ${WORK_DIR} ${GPU_NUM} --eval bbox
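As a usage sketch, note that in stock MMDetection 2.x the second positional argument of tools/test.py (and of dist_test.sh) is the trained checkpoint to evaluate; the config and checkpoint paths below are illustrative placeholders.

```bash
# Illustrative evaluation commands; config and checkpoint paths are placeholders.
# In stock MMDetection 2.x, the second positional argument is a .pth checkpoint.
python ./tools/test.py configs/cfinet/faster_rcnn_r50_fpn_cfinet_1x_sodad.py \
    work_dirs/cfinet_sodad/latest.pth --eval bbox
bash ./tools/dist_test.sh configs/cfinet/faster_rcnn_r50_fpn_cfinet_1x_sodad.py \
    work_dirs/cfinet_sodad/latest.pth 4 --eval bbox
```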

:trophy: Results

Results on SODA-D

| Method | Schedule | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{eS}$ | $AP_{rS}$ | $AP_{gS}$ | $AP_N$ |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| RetinaNet | $1 \times$ | 28.2 | 57.6 | 23.7 | 11.9 | 25.2 | 34.1 | 44.2 |
| FCOS | $1 \times$ | 23.9 | 49.5 | 19.9 | 6.9 | 19.4 | 30.9 | 40.9 |
| RepPoints | $1 \times$ | 28.0 | 55.6 | 24.7 | 10.1 | 23.8 | 35.1 | 45.3 |
| ATSS | $1 \times$ | 26.8 | 55.6 | 22.1 | 11.7 | 23.9 | 32.2 | 41.3 |
| YOLOX | $70e$ | 26.7 | 53.4 | 23.0 | 13.6 | 25.1 | 30.9 | 30.4 |
| CornerNet | $2 \times$ | 24.6 | 49.5 | 21.7 | 6.5 | 20.5 | 32.2 | 43.8 |
| CenterNet | $70e$ | 21.5 | 48.8 | 15.6 | 5.1 | 16.2 | 29.6 | 42.4 |
| Deformable-DETR | $50e$ | 19.2 | 44.8 | 13.7 | 6.3 | 15.4 | 24.9 | 34.2 |
| Sparse RCNN | $1 \times$ | 24.2 | 50.3 | 20.3 | 8.8 | 20.4 | 30.2 | 39.4 |
| Faster RCNN | $1 \times$ | 28.9 | 59.4 | 24.1 | 13.8 | 25.7 | 34.5 | 43.0 |
| Cascade RPN | $1 \times$ | 29.1 | 56.5 | 25.9 | 12.5 | 25.5 | 35.4 | 44.7 |
| RFLA | $1 \times$ | 29.7 | 60.2 | 25.2 | 13.2 | 26.9 | 35.4 | 44.6 |
| Ours | $1 \times$ | 30.7 | 60.8 | 26.7 | 14.7 | 27.8 | 36.4 | 44.6 |

Results on SODA-A

| Method | Schedule | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{eS}$ | $AP_{rS}$ | $AP_{gS}$ | $AP_N$ |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Rotated RetinaNet | $1 \times$ | 26.8 | 63.4 | 16.2 | 9.1 | 22.0 | 35.4 | 28.2 |
| $S^2$A-Net | $1 \times$ | 28.3 | 69.6 | 13.1 | 10.2 | 22.8 | 35.8 | 29.5 |
| Oriented RepPoints | $1 \times$ | 26.3 | 58.8 | 19.0 | 9.4 | 22.6 | 32.4 | 28.5 |
| DHRec | $1 \times$ | 30.1 | 68.8 | 19.8 | 10.6 | 24.6 | 40.3 | 34.6 |
| Rotated Faster RCNN | $1 \times$ | 32.5 | 70.1 | 24.3 | 11.9 | 27.3 | 42.2 | 34.4 |
| Gliding Vertex | $1 \times$ | 31.7 | 70.8 | 22.6 | 11.7 | 27.0 | 41.1 | 33.8 |
| Oriented RCNN | $1 \times$ | 34.4 | 70.7 | 28.6 | 12.5 | 28.6 | 44.5 | 36.7 |
| DODet | $1 \times$ | 31.6 | 68.1 | 23.4 | 11.3 | 26.3 | 41.0 | 33.5 |
| Ours | $1 \times$ | 34.4 | 73.1 | 26.1 | 13.5 | 29.3 | 44.0 | 35.9 |

📚 References

Please cite our work if you find it and the code helpful for your research.

@article{cfinet,
  title={Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning},
  author={Yuan, Xiang and Cheng, Gong and Yan, Kebing and Zeng, Qinghua and Han, Junwei},
  journal={arXiv preprint arXiv:2308.09534},
  year={2023}
}

:e-mail: Contact

If you have any problems with this repo or the SODA benchmark, please feel free to contact us at shaunyuan@mail.nwpu.edu.cn 😉