RDSNet

The Code for "RDSNet: A New Deep Architecture for Reciprocal Object Detection and Instance Segmentation"

This repository is based on mmdetection.

(Figure: RDSNet architecture overview)

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Performance on COCO

(Figure: example detection and instance segmentation results on COCO)

| Backbone | Iteration | MBRM | Training scales | AP<sup>bb</sup><br>(minival) | AP<sup>m</sup><br>(minival) | AP<sup>bb</sup><br>(test-dev) | AP<sup>m</sup><br>(test-dev) | Model |
|---|---|---|---|---|---|---|---|---|
| ResNet-50-FPN | 90k | N | 800 | 36.8 | 32.1 | 37.2 | 32.6 | Link |
| ResNet-50-FPN | - | Y | - | 37.8 | 32.1 | 38.1 | 32.6 | |
| ResNet-101-FPN | 90k | N | 800 | 38.7 | 34.1 | 39.4 | 34.6 | Link |
| ResNet-101-FPN | - | Y | - | 39.7 | 34.1 | 40.3 | 34.6 | |
| ResNet-101-FPN | 180k | N | [640, 800] | 40.8 | 36.2 | 40.9 | 36.4 | Link |
| ResNet-101-FPN | - | Y | - | 41.8 | 36.2 | 41.8 | 36.4 | |

The models with MBRM share the same weights as those without MBRM; the additional parameters introduced by MBRM are provided directly in the code.

Get Started

Once the installation is done, you can follow the steps below to test or train the model.

Assume that you have already prepared the COCO dataset and downloaded the checkpoints to `checkpoints/`.

A quick demo:

```shell
python tools/test.py configs/rdsnet/rdsnet_r50_fpn_1x.py \
    checkpoints/rdsnet_r50_fpn_1x-124f64c3.pth \
    --show
```
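Checkpoints published with mmdetection's tooling conventionally embed the first eight hex characters of the file's SHA-256 hash in the filename (the `-124f64c3` suffix above). Assuming your download follows that convention, a small sketch (the helper name is ours, not part of the repo) can verify that the file is not corrupted:

```python
import hashlib
from pathlib import Path


def checkpoint_hash_ok(path):
    """Compare the 8-hex-digit suffix in a 'name-xxxxxxxx.pth' filename
    against the first 8 hex characters of the file's SHA-256 hash."""
    stem = Path(path).stem              # e.g. 'rdsnet_r50_fpn_1x-124f64c3'
    expected = stem.rsplit('-', 1)[-1]  # hash suffix taken from the filename
    sha = hashlib.sha256()
    with open(path, 'rb') as f:
        # Hash in 1 MiB chunks so large checkpoints don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b''):
            sha.update(chunk)
    return sha.hexdigest()[:8] == expected
```

If the check fails, re-download the checkpoint before running the demo.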

Config Files:

| Config File | Backbone | Iteration | Training scales | MBRM |
|---|---|---|---|---|
| rdsnet_r50_fpn_1x.py | ResNet-50-FPN | 90k | 800 | N |
| rdsnet_refine_r50_fpn_1x.py | ResNet-50-FPN | - | - | Y |
| rdsnet_r101_fpn_1x.py | ResNet-101-FPN | 90k | 800 | N |
| rdsnet_refine_r101_fpn_1x.py | ResNet-101-FPN | - | - | Y |
| rdsnet_640_800_r101_fpn_2x.py | ResNet-101-FPN | 180k | [640, 800] | N |
| rdsnet_640_800_refine_r101_fpn_2x.py | ResNet-101-FPN | - | - | Y |

All config files are in the folder `configs/rdsnet/`.
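mmdetection configs are plain Python files, so settings can be read or overridden directly. The fragment below is a hedged sketch of the fields you are most likely to touch; the field names follow mmdetection-style conventions, and the values shown are illustrative, not the repository's actual settings:

```python
# Illustrative config fragment only -- field names follow mmdetection-style
# Python configs; the values are examples, not this repository's settings.
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)  # lr for total batch size 8
data = dict(imgs_per_gpu=2, workers_per_gpu=2)  # 2 img/gpu, matching the setup described below
work_dir = './work_dirs/rdsnet_r50_fpn_1x'      # logs and checkpoints are written here
```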

Test on COCO dataset:

```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --out ${RESULT_FILE} [--eval ${EVAL_METRICS}] [--show]

# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
```

Arguments:

- `CONFIG_FILE`: path to the config file.
- `CHECKPOINT_FILE`: path to the model checkpoint.

Optional arguments:

- `RESULT_FILE`: filename of the output results in pickle format; if not specified, the results are not saved to a file.
- `EVAL_METRICS`: items to be evaluated on the results, e.g. `bbox`, `segm`.
- `--show`: plot detection results on the images and show them in a new window (single-GPU testing only).

Train a model

Train with multiple GPUs

```shell
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```

Optional arguments:

- `--validate`: perform evaluation periodically during training.
- `--work_dir ${WORK_DIR}`: override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: resume training from a previous checkpoint.

All outputs (log files and checkpoints) will be saved to the working directory, which is specified by work_dir in the config file.

*Important*: We use 4 GPUs with 2 img/gpu. The default learning rate in the config files is also set for 4 GPUs and 2 img/gpu (batch size = 4 × 2 = 8).

According to the Linear Scaling Rule, if you use a different number of GPUs or images per GPU, you need to scale the learning rate proportionally to the total batch size, e.g., lr=0.01 for 8 GPUs × 2 img/gpu (batch size 16).
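The rule above is simple proportionality. A minimal sketch (the function name is ours, and the 0.005 base value is implied by the batch-size-8 default together with the lr=0.01-at-batch-16 example, not stated explicitly in a config):

```python
def scale_lr(base_lr, base_batch, new_batch):
    """Linear Scaling Rule: learning rate grows in proportion to total batch size."""
    return base_lr * new_batch / base_batch


# Default configs assume 4 GPUs x 2 img/gpu = batch size 8.
# Moving to 8 GPUs x 2 img/gpu (batch size 16) doubles the learning rate:
print(scale_lr(0.005, 8, 16))  # -> 0.01
```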

For other high-level APIs, please refer to GETTING_STARTED.md.

Citation

Please consider citing our paper in your publications if the project helps your research.

```
@misc{wang2019rdsnet,
    title={RDSNet: A New Deep Architecture for Reciprocal Object Detection and Instance Segmentation},
    author={Shaoru Wang and Yongchao Gong and Junliang Xing and Lichao Huang and Chang Huang and Weiming Hu},
    year={2019},
    eprint={1912.05070},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```