VoVNet-v2 backbone networks in Detectron2

Efficient Backbone Network for Object Detection and Segmentation

[CenterMask (code)] [CenterMask2 (code)] [VoVNet-v1 (arXiv)] [VoVNet-v2 (arXiv)] [BibTeX]

<div align="center"> <img src="https://dl.dropbox.com/s/jgi3c5828dzcupf/osa_updated.jpg" width="700px" /> </div>

In this project, we release code for the VoVNet-v2 backbone network (introduced in CenterMask) as an extension of detectron2. VoVNet extracts diverse feature representations efficiently by using One-Shot Aggregation (OSA) modules, which concatenate all intermediate layers at once. Since the OSA module captures multi-scale receptive fields, the diversified feature maps allow object detection and segmentation models to handle objects and pixels across scales well, and they are especially robust on small objects. VoVNet-v2 improves on VoVNet-v1 by adding identity mappings, which ease optimization, and effective SE (eSE), which enhances the diversified feature representation.
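For illustration, the sketch below is a minimal PyTorch rendering of one OSA module with VoVNet-v2's two additions (identity mapping and eSE). It is a sketch under assumptions (the layer count, channel widths, and normalization/activation choices are simplified), not the released implementation.

```python
# Minimal sketch of a VoVNet-v2 OSA module (identity mapping + eSE).
# Layer count and channel widths are illustrative, not the released config.
import torch
import torch.nn as nn


class ESEModule(nn.Module):
    """Effective SE: a single FC (1x1 conv) instead of SE's two-FC bottleneck,
    so no channel information is lost during the squeeze."""

    def __init__(self, channels: int):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)
        self.gate = nn.Hardsigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(self.fc(self.avg_pool(x)))


class OSAModule(nn.Module):
    """One-Shot Aggregation: a chain of 3x3 convs whose input and intermediate
    outputs are concatenated once at the end, then fused by a 1x1 conv."""

    def __init__(self, in_ch: int, stage_ch: int, out_ch: int, num_layers: int = 5):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, stage_ch, 3, padding=1, bias=False),
                nn.BatchNorm2d(stage_ch),
                nn.ReLU(inplace=True),
            ))
            ch = stage_ch
        self.concat_conv = nn.Sequential(  # fuse the aggregated feature maps
            nn.Conv2d(in_ch + num_layers * stage_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.ese = ESEModule(out_ch)
        self.use_identity = in_ch == out_ch  # VoVNet-v2 identity mapping

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, out = [x], x
        for layer in self.layers:
            out = layer(out)
            feats.append(out)  # every intermediate map joins the one-shot concat
        out = self.ese(self.concat_conv(torch.cat(feats, dim=1)))
        return out + x if self.use_identity else out


if __name__ == "__main__":
    block = OSAModule(in_ch=256, stage_ch=160, out_ch=256)
    print(block(torch.randn(1, 256, 64, 64)).shape)  # torch.Size([1, 256, 64, 64])
```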

Highlight

Compared to ResNe(X)t backbone

Update

Results on MS-COCO in Detectron2

Note

- We measure the inference time of all models with batch size 1 on the same V100 GPU machine.
- We train all models on 8 V100 GPUs.

Faster R-CNN

Lightweight-VoVNet with FPNLite

| Backbone | Params | lr sched | inference time (s/im) | AP | APs | APm | APl | download |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MobileNetV2 | 3.5M | 3x | 0.022 | 33.0 | 19.0 | 35.0 | 43.4 | <a href="https://dl.dropbox.com/s/q4iceofvlcu207c/faster_mobilenetv2_FPNLite_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/tz60e7rtnbsrdgd/faster_mobilenetv2_FPNLite_ms_3x_metrics.json">metrics</a> |
| V2-19 | 11.2M | 3x | 0.034 | 38.9 | 24.8 | 41.7 | 49.3 | <a href="https://www.dropbox.com/s/u5pvmhc871ohvgw/fast_V_19_eSE_FPNLite_ms_3x.pth?dl=1">model</a> \| <a href="https://www.dropbox.com/s/riu7hkgzlmnndhc/fast_V_19_eSE_FPNLite_ms_3x_metrics.json">metrics</a> |
| V2-19-DW | 6.5M | 3x | 0.027 | 36.7 | 22.7 | 40.0 | 46.0 | <a href="https://www.dropbox.com/s/7h6zn0owumucs48/faster_rcnn_V_19_eSE_dw_FPNLite_ms_3x.pth?dl=1">model</a> \| <a href="https://www.dropbox.com/s/627hf4h1m485926/faster_rcnn_V_19_eSE_dw_FPNLite_ms_3x_metrics.json">metrics</a> |
| V2-19-Slim | 3.1M | 3x | 0.023 | 35.2 | 21.7 | 37.3 | 44.4 | <a href="https://www.dropbox.com/s/yao1i32zdylx279/faster_rcnn_V_19_eSE_slim_FPNLite_ms_3x.pth?dl=1">model</a> \| <a href="https://www.dropbox.com/s/jrgxltneki9hk84/faster_rcnn_V_19_eSE_slim_FPNLite_ms_3x_metrics.json">metrics</a> |
| V2-19-Slim-DW | 1.8M | 3x | 0.022 | 32.4 | 19.1 | 34.6 | 41.8 | <a href="https://www.dropbox.com/s/blpjx3iavrzkygt/faster_rcnn_V_19_eSE_slim_dw_FPNLite_ms_3x.pth?dl=1">model</a> \| <a href="https://www.dropbox.com/s/3og68zhq2ubr7mu/faster_rcnn_V_19_eSE_slim_dw_FPNLite_ms_3x_metrics.json">metrics</a> |
| Backbone | Params | lr sched | inference time (s/im) | AP | APs | APm | APl | download |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| V2-19-FPN | 37.6M | 3x | 0.040 | 38.9 | 24.9 | 41.5 | 48.8 | <a href="https://www.dropbox.com/s/1rfvi6vzx45z6y5/faster_V_19_eSE_ms_3x.pth?dl=1">model</a> \| <a href="https://dl.dropbox.com/s/dq7406vo22wjxgi/faster_V_19_eSE_ms_3x_metrics.json">metrics</a> |
| R-50-FPN | 51.2M | 3x | 0.047 | 40.2 | 24.2 | 43.5 | 52.0 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/metrics.json">metrics</a> |
| V2-39-FPN | 52.6M | 3x | 0.047 | 42.7 | 27.1 | 45.6 | 54.0 | <a href="https://dl.dropbox.com/s/dkto39ececze6l4/faster_V_39_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/dx9qz1dn65ccrwd/faster_V_39_eSE_ms_3x_metrics.json">metrics</a> |
| R-101-FPN | 70.1M | 3x | 0.063 | 42.0 | 25.2 | 45.6 | 54.6 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/model_final_a3ec72.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/metrics.json">metrics</a> |
| V2-57-FPN | 68.9M | 3x | 0.054 | 43.3 | 27.5 | 46.7 | 55.3 | <a href="https://dl.dropbox.com/s/c7mb1mq10eo4pzk/faster_V_57_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/3tsn218zzmuhyo8/faster_V_57_eSE_metrics.json">metrics</a> |
| X-101-FPN | 114.3M | 3x | 0.120 | 43.0 | 27.2 | 46.1 | 54.9 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/model_final_2d9806.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/metrics.json">metrics</a> |
| V2-99-FPN | 96.9M | 3x | 0.073 | 44.1 | 28.1 | 47.0 | 56.4 | <a href="https://dl.dropbox.com/s/v64mknwzfpmfcdh/faster_V_99_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/zvaz9s8gvq2mhrd/faster_V_99_eSE_ms_3x_metrics.json">metrics</a> |

Mask R-CNN

| Backbone | lr sched | inference time (s/im) | box AP | box APs | box APm | box APl | mask AP | mask APs | mask APm | mask APl | download |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| V2-19-FPNLite | 3x | 0.036 | 39.7 | 25.1 | 42.6 | 50.8 | 36.4 | 19.9 | 38.8 | 50.8 | <a href="https://www.dropbox.com/s/h1khv9l7quakvz0/mask_V_19_eSE_FPNLite_ms_3x.pth?dl=1">model</a> \| <a href="https://www.dropbox.com/s/8fophrb1f1mf9ih/mask_V_19_eSE_FPNLite_ms_3x_metrics.json">metrics</a> |
| V2-19-FPN | 3x | 0.044 | 40.1 | 25.4 | 43.0 | 51.0 | 36.6 | 19.7 | 38.7 | 51.2 | <a href="https://www.dropbox.com/s/dyeyuag5va96tqo/mask_V_19_eSE_ms_3x.pth?dl=1">model</a> \| <a href="https://dl.dropbox.com/s/0y0q97gi8u8kq2n/mask_V_19_eSE_ms_3x_metrics.json">metrics</a> |
| R-50-FPN | 3x | 0.055 | 41.0 | 24.9 | 43.9 | 53.3 | 37.2 | 18.6 | 39.5 | 53.3 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/metrics.json">metrics</a> |
| V2-39-FPN | 3x | 0.052 | 43.8 | 27.6 | 47.2 | 55.3 | 39.3 | 21.4 | 41.8 | 54.6 | <a href="https://dl.dropbox.com/s/c5o3yr6lwrb1170/mask_V_39_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/21xqlv1ofn7oa1z/mask_V_39_eSE_metrics.json">metrics</a> |
| R-101-FPN | 3x | 0.070 | 42.9 | 26.4 | 46.6 | 56.1 | 38.6 | 19.5 | 41.3 | 55.3 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/model_final_a3ec72.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/metrics.json">metrics</a> |
| V2-57-FPN | 3x | 0.058 | 44.2 | 28.2 | 47.2 | 56.8 | 39.7 | 21.6 | 42.2 | 55.6 | <a href="https://dl.dropbox.com/s/aturknfroupyw92/mask_V_57_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/8sdek6hkepcu7na/mask_V_57_eSE_metrics.json">metrics</a> |
| X-101-FPN | 3x | 0.129 | 44.3 | 27.5 | 47.6 | 56.7 | 39.5 | 20.7 | 42.0 | 56.5 | <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/model_final_2d9806.pkl">model</a> \| <a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/metrics.json">metrics</a> |
| V2-99-FPN | 3x | 0.076 | 44.9 | 28.5 | 48.1 | 57.7 | 40.3 | 21.7 | 42.8 | 56.6 | <a href="https://dl.dropbox.com/s/qx45cnv718k4zmn/mask_V_99_eSE_ms_3x.pth">model</a> \| <a href="https://dl.dropbox.com/s/u1sav8deha47odp/mask_V_99_eSE_metrics.json">metrics</a> |

Panoptic-FPN on COCO

<!-- ./gen_html_table.py --config 'COCO-PanopticSegmentation/*50*' 'COCO-PanopticSegmentation/*101*' --name R50-FPN R50-FPN R101-FPN --fields lr_sched train_speed inference_speed mem box_AP mask_AP PQ --> <table><tbody> <!-- START TABLE --> <!-- TABLE HEADER --> <th valign="bottom">Name</th> <th valign="bottom">lr<br/>sched</th> <th valign="bottom">inference<br/>time<br/>(s/im)</th> <th valign="bottom">box<br/>AP</th> <th valign="bottom">mask<br/>AP</th> <th valign="bottom">PQ</th> <th valign="bottom">download</th> <!-- TABLE BODY --> <!-- ROW: panoptic_fpn_R_50_3x --> <tr><td align="left">R-50-FPN</td> <td align="center">3x</td> <td align="center">0.063</td> <td align="center">40.0</td> <td align="center">36.5</td> <td align="center">41.5</td> <td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x/139514569/model_final_c10459.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x/139514569/metrics.json">metrics</a></td> </tr> <!-- ROW: panoptic_fpn_V_39_3x --> <tr><td align="left">V2-39-FPN</td> <td align="center">3x</td> <td align="center">0.063</td> <td align="center">42.8</td> <td align="center">38.5</td> <td align="center">43.4</td> <td align="center"><a href="https://www.dropbox.com/s/fnr9r4arv0cbfbf/panoptic_V_39_eSE_3x.pth?dl=1">model</a>&nbsp;|&nbsp;<a href="https://dl.dropbox.com/s/vftfukrjuu7w1ao/panoptic_V_39_eSE_3x_metrics.json">metrics</a></td> </tr> <!-- ROW: panoptic_fpn_R_101_3x --> <tr><td align="left">R-101-FPN</td> <td align="center">3x</td> <td align="center">0.078</td> <td align="center">42.4</td> <td align="center">38.5</td> <td align="center">43.0</td> <td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/model_final_cafdb1.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/metrics.json">metrics</a></td> </tr> <!-- ROW: panoptic_fpn_V_57_3x --> <tr><td align="left">V2-57-FPN</td> <td align="center">3x</td> <td align="center">0.070</td> <td align="center">43.4</td> <td align="center">39.2</td> <td align="center">44.3</td> <td align="center"><a href="https://www.dropbox.com/s/zhoqx5rvc0jj0oa/panoptic_V_57_eSE_3x.pth?dl=1">model</a>&nbsp;|&nbsp;<a href="https://dl.dropbox.com/s/20hwrmru15dilre/panoptic_V_57_eSE_3x_metrics.json">metrics</a></td> </tr> </tbody></table>

The inference times above are measured using this command with `--num-gpus 1`:

```bash
python /path/to/vovnet-detectron2/train_net.py --config-file /path/to/vovnet-detectron2/configs/<config.yaml> --eval-only --num-gpus 1 MODEL.WEIGHTS <model.pth>
```

Installation

Since vovnet-detectron2 is implemented as an extension of detectron2 (in the style of detectron2/projects), you only need to install detectron2 by following INSTALL.md.
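For reference, a typical setup might look like the following; the detectron2 install command depends on your CUDA and PyTorch versions, so take the exact command from INSTALL.md:

```bash
# Example only: pick the detectron2 build matching your CUDA/PyTorch (see INSTALL.md),
# then clone this project alongside it.
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
git clone https://github.com/youngwanLEE/vovnet-detectron2.git
```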

Then prepare the COCO dataset following this instruction.
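Detectron2 looks for builtin datasets under the directory given by the DETECTRON2_DATASETS environment variable (./datasets by default), so the expected COCO layout is roughly:

```
datasets/
  coco/
    annotations/
      instances_train2017.json
      instances_val2017.json
    train2017/
    val2017/
```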

Training

ImageNet Pretrained Models

We provide backbone weights pretrained on the ImageNet-1k dataset.
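A config then points MODEL.WEIGHTS at the pretrained backbone. The sketch below follows detectron2 config conventions; the base-config name and the keys under MODEL.VOVNET are assumptions for illustration, not a verbatim copy of this project's files:

```yaml
# Sketch only: the VOVNET keys and base-config name below are assumed.
_BASE_: "Base-VoVNet.yaml"
MODEL:
  WEIGHTS: "path/to/vovnet39_ese_pretrained.pth"  # ImageNet-1k pretrained backbone
  VOVNET:
    CONV_BODY: "V-39-eSE"  # selects the VoVNetV2-39 backbone
```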

To train a model, run

```bash
python /path/to/vovnet-detectron2/train_net.py --config-file /path/to/vovnet-detectron2/configs/<config.yaml>
```

For example, to launch end-to-end Faster R-CNN training with VoVNetV2-39 backbone on 8 GPUs, one should execute:

```bash
python /path/to/vovnet-detectron2/train_net.py --config-file /path/to/vovnet-detectron2/configs/faster_rcnn_V_39_FPN_3x.yaml --num-gpus 8
```
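Detectron2's 3x schedules typically assume 8 GPUs with a total batch size of 16 and base learning rate 0.02. If you train with fewer GPUs, the usual linear scaling rule is to shrink the batch size and learning rate together; the values below are illustrative for 2 GPUs (scale from this project's config values if they differ):

```bash
# Illustrative: 2 GPUs -> 1/4 of the assumed 8-GPU batch size and learning rate
python /path/to/vovnet-detectron2/train_net.py --config-file /path/to/vovnet-detectron2/configs/faster_rcnn_V_39_FPN_3x.yaml --num-gpus 2 SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.005
```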

Evaluation

Model evaluation can be done similarly:

```bash
python /path/to/vovnet-detectron2/train_net.py --config-file /path/to/vovnet-detectron2/configs/faster_rcnn_V_39_FPN_3x.yaml --eval-only MODEL.WEIGHTS <model.pth>
```
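You can also run inference from Python with detectron2's DefaultPredictor. Note that `add_vovnet_config` below is an assumed name for the hook that registers this project's VoVNet config keys; check train_net.py for the actual import:

```python
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from vovnet import add_vovnet_config  # assumed import; see train_net.py for the real hook

cfg = get_cfg()
add_vovnet_config(cfg)  # register MODEL.VOVNET keys before merging the config file
cfg.merge_from_file("/path/to/vovnet-detectron2/configs/faster_rcnn_V_39_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = "/path/to/model.pth"

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # returns a dict with an "instances" field
print(outputs["instances"].pred_boxes)
```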

TODO

<a name="CitingVoVNet"></a>Citing VoVNet

If you use VoVNet, please use the following BibTeX entries.

```bibtex
@inproceedings{lee2019energy,
  title = {An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
  author = {Lee, Youngwan and Hwang, Joong-won and Lee, Sangrok and Bae, Yuseok and Park, Jongyoul},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  year = {2019}
}

@inproceedings{lee2019centermask,
  title = {CenterMask: Real-Time Anchor-Free Instance Segmentation},
  author = {Lee, Youngwan and Park, Jongyoul},
  booktitle = {CVPR},
  year = {2020}
}
```