
<div align="center"> <img src="assets/banner.gif"> <br> <br> Tianheng Cheng, <a href="https://xwcv.github.io/">Xinggang Wang</a><sup><span>&#8224;</span></sup>, Shaoyu Chen, Wenqiang Zhang, <a href="https://scholar.google.com/citations?user=pCY-bikAAAAJ&hl=zh-CN">Qian Zhang</a>, <a href="https://scholar.google.com/citations?user=IyyEKyIAAAAJ&hl=zh-CN">Chang Huang</a>, <a href="https://zhaoxiangzhang.net/">Zhaoxiang Zhang</a>, <a href="http://eic.hust.edu.cn/professor/liuwenyu/"> Wenyu Liu</a> </br> (<span>&#8224;</span>: corresponding author) <!-- <div><a href="">[Project Page]</a>(comming soon)</div> --> <div> <a href="https://arxiv.org/abs/2203.12827">[arXiv paper]</a> <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Cheng_Sparse_Instance_Activation_for_Real-Time_Instance_Segmentation_CVPR_2022_paper.pdf">[CVPR paper]</a> <a href="https://drive.google.com/file/d/1xhqQvQ0YVCHd8XQxnCVqef75Hey7kI-d/view?usp=sharing">[slides]</a> </div> </div>

Highlights

<div align="center"> <img src="assets/animate.gif"> <br> <br> <div>

PWC

</div> </div>

Updates

This project is under active development; please stay tuned!

Overview

SparseInst is a conceptually novel, efficient, and fully convolutional framework for real-time instance segmentation. In contrast to region boxes or anchors (centers), SparseInst adopts a sparse set of instance activation maps as the object representation, highlighting informative regions for each foreground object. It then obtains instance-level features by aggregating features according to the highlighted regions for recognition and segmentation. Bipartite matching compels the instance activation maps to predict objects in a one-to-one style, thus avoiding non-maximum suppression (NMS) in post-processing. Owing to this simple yet effective design based on instance activation maps, SparseInst has extremely fast inference speed, achieving 40 FPS and 37.9 AP on COCO (NVIDIA 2080Ti) and significantly outperforming its counterparts in terms of speed and accuracy.

<center> <img src="./assets/sparseinst.png"> </center>

Models

We provide two versions of SparseInst, i.e., the basic IAM (3x3 convolution) and the Group IAM (G-IAM for short), with different backbones. All models are trained on MS-COCO train2017.
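The difference between the two variants can be sketched roughly as follows (illustrative only; the group count and the exact layer configuration here are assumptions, not the settings of the released models): the basic IAM uses a plain 3x3 convolution, while G-IAM uses a grouped 3x3 convolution so each channel group predicts its own set of activation maps.

```python
import torch
import torch.nn as nn

in_channels, num_instances, groups = 256, 100, 4
x = torch.randn(1, in_channels, 80, 80)

# basic IAM: plain 3x3 conv predicting one activation map per instance
basic_iam = nn.Conv2d(in_channels, num_instances, kernel_size=3, padding=1)
# G-IAM: grouped 3x3 conv, each channel group predicts its own set of maps
group_iam = nn.Conv2d(in_channels, num_instances * groups, kernel_size=3,
                      padding=1, groups=groups)

print(basic_iam(x).shape)  # torch.Size([1, 100, 80, 80])
print(group_iam(x).shape)  # torch.Size([1, 400, 80, 80])
```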

Fast models

| model | backbone | input | aug | AP<sup>val</sup> | AP | FPS | weights |
| :---- | :------- | :---: | :-: | :--------------: | :--: | :--: | :-----: |
| SparseInst | R-50 | 640 |  | 32.8 | 33.2 | 44.3 | model |
| SparseInst | R-50-vd | 640 |  | 34.1 | 34.5 | 42.6 | model |
| SparseInst (G-IAM) | R-50 | 608 |  | 33.4 | 34.0 | 44.6 | model |
| SparseInst (G-IAM, Softmax) | R-50 | 608 |  | 33.6 | - | 44.6 | model |
| SparseInst (G-IAM) | R-50 | 608 |  | 34.2 | 34.7 | 44.6 | model |
| SparseInst (G-IAM) | R-50-DCN | 608 |  | 36.4 | 36.8 | 41.6 | model |
| SparseInst (G-IAM) | R-50-vd | 608 |  | 35.6 | 36.1 | 42.8 | model |
| SparseInst (G-IAM) | R-50-vd-DCN | 608 |  | 37.4 | 37.9 | 40.0 | model |
| SparseInst (G-IAM) | R-50-vd-DCN | 640 |  | 37.7 | 38.1 | 39.3 | model |

SparseInst with other backbones

| model | backbone | input | AP<sup>val</sup> | AP | FPS | weights |
| :---- | :------- | :---: | :--------------: | :--: | :--: | :-----: |
| SparseInst (G-IAM) | CSPDarkNet | 640 | 35.1 | - | - | model |

Larger models

| model | backbone | input | aug | AP<sup>val</sup> | AP | FPS | weights |
| :---- | :------- | :---: | :-: | :--------------: | :--: | :--: | :-----: |
| SparseInst (G-IAM) | R-101 | 640 |  | 34.9 | 35.5 | - | model |
| SparseInst (G-IAM) | R-101-DCN | 640 |  | 36.4 | 36.9 | - | model |

SparseInst with Vision Transformers

| model | backbone | input | aug | AP<sup>val</sup> | AP | FPS | weights |
| :---- | :------- | :---: | :-: | :--------------: | :--: | :--: | :-----: |
| SparseInst (G-IAM) | PVTv2-B1 | 640 |  | 35.3 | 36.0 | 33.5 (48.9<sup>*</sup>) | model |
| SparseInst (G-IAM) | PVTv2-B2-li | 640 |  | 37.2 | 38.2 | 26.5 | model |

<sup>*</sup>: measured on RTX 3090.


Installation and Prerequisites

This project is built upon the excellent detectron2 framework, so you should install detectron2 first; please check the official installation guide for more details.

Updates: SparseInst works well on detectron2-v0.6.

Note: previously, we mainly used v0.3 of detectron2 for experiments and evaluations; we have also tested our code on the newer v0.6. If you find bugs or incompatibility problems with higher versions of detectron2, please feel free to raise an issue!

Install the detectron2:

git clone https://github.com/facebookresearch/detectron2.git
cd detectron2
# if you want to switch to a specific version, e.g., v0.3 (recommended) or v0.6
git checkout tags/v0.6
# build detectron2
python setup.py build develop
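After building, a quick sanity check of the environment can be done from Python, for example:

```python
import detectron2
import torch

# verify that detectron2 imports and which versions are in use
print("detectron2:", detectron2.__version__)
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```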

Getting Started

🔥 SparseInst with FP16

SparseInst with FP16 achieves about 30% faster inference and saves considerable training memory. The table below compares memory usage, training speed, and inference speed.

| FP16 | train mem. (log) | train mem. (nvidia-smi) | train speed | infer. speed |
| :--: | :--------------: | :---------------------: | :---------: | :----------: |
| ✗ | 6.0G | 10.5G | 0.8690 s/iter | 52.17 FPS |
| ✓ | 3.9G | 6.8G | 0.6949 s/iter | 67.57 FPS |

Note: statistics are measured on an NVIDIA RTX 3090. With FP16, training is faster and you can also increase the batch size for better performance.

python tools/train_net.py --config-file configs/sparse_inst_r50_giam_fp16.yaml --num-gpus 8 SOLVER.AMP.ENABLED True
python tools/test_net.py --config-file configs/sparse_inst_r50_giam_fp16.yaml --fp16 MODEL.WEIGHTS model_final.pth 
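The FP16 options above rely on PyTorch automatic mixed precision (AMP). As a rough sketch of what AMP does in a training step (this toy loop is not the project's training code; the model and loss are placeholders):

```python
import torch

# Minimal sketch of a mixed-precision training step with torch.cuda.amp
# (illustrative only; the real loop lives inside detectron2's trainer).
model = torch.nn.Linear(1024, 80).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

images = torch.randn(16, 1024, device="cuda")
targets = torch.randint(0, 80, (16,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():                 # run the forward pass in FP16 where safe
    loss = torch.nn.functional.cross_entropy(model(images), targets)
scaler.scale(loss).backward()                   # scale the loss to avoid FP16 underflow
scaler.step(optimizer)
scaler.update()
```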

Testing SparseInst

Before testing, you should specify the config file <CONFIG> and the model weights <MODEL-PATH>. In addition, you can change the input size by setting INPUT.MIN_SIZE_TEST in either the config file or on the command line.

python tools/train_net.py --config-file <CONFIG> --num-gpus <GPUS> --eval MODEL.WEIGHTS <MODEL-PATH>
# example:
python tools/train_net.py --config-file configs/sparse_inst_r50_giam.yaml --num-gpus 8 --eval MODEL.WEIGHTS sparse_inst_r50_giam_aug_2b7d68.pth
python tools/test_net.py --config-file <CONFIG> MODEL.WEIGHTS <MODEL-PATH> INPUT.MIN_SIZE_TEST 512
# example:
python tools/test_net.py --config-file configs/sparse_inst_r50_giam.yaml MODEL.WEIGHTS sparse_inst_r50_giam_aug_2b7d68.pth INPUT.MIN_SIZE_TEST 512
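The same override can be applied programmatically through detectron2's config system; a minimal sketch is below. Note that SparseInst configs add custom keys, so merging a SparseInst YAML into a bare detectron2 config requires the repo's own config helper first (check sparseinst/config.py for the exact entry point; the snippet only shows the generic override pattern).

```python
from detectron2.config import get_cfg

# Generic detectron2 config override, equivalent to passing
# "INPUT.MIN_SIZE_TEST 512" on the command line.
cfg = get_cfg()
cfg.merge_from_list(["INPUT.MIN_SIZE_TEST", 512])
print(cfg.INPUT.MIN_SIZE_TEST)  # 512
```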


FLOPs and Parameters

The get_flops.py script is built on top of detectron2 and fvcore.

python tools/get_flops.py --config-file <CONFIG> --tasks parameter flop
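Under the hood, flop and parameter counting relies on fvcore. A stand-alone sketch with a toy model (a stand-in, not SparseInst itself) shows the kind of analysis the script performs:

```python
import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis, parameter_count_table

# toy model as a stand-in; get_flops.py runs the same analysis on SparseInst
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
)
inputs = torch.randn(1, 3, 224, 224)

flops = FlopCountAnalysis(model, inputs)
print(f"GFLOPs: {flops.total() / 1e9:.2f}")
print(parameter_count_table(model))
```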

Visualizing Images with SparseInst

To run inference or visualize the segmentation results on your own images, you can run:

python demo.py --config-file <CONFIG> --input <IMAGE-PATH> --output results --opts MODEL.WEIGHTS <MODEL-PATH>
# example
python demo.py --config-file configs/sparse_inst_r50_giam.yaml --input datasets/coco/val2017/* --output results --opts MODEL.WEIGHTS sparse_inst_r50_giam_aug_2b7d68.pth INPUT.MIN_SIZE_TEST 512
<div> <table align="center"> <td><img src="assets/figures/000000006471.jpg" height=200></td> <td><img src="assets/figures/000000014439.jpg" height=200></td> </table> <span><p align="center">Visualization results (SparseInst-R50-GIAM)</p></span> </div>

Training SparseInst

To train SparseInst on the COCO dataset, 8 GPUs are required by default. If you only have 4 GPUs or GPU memory is limited, that is fine: you can reduce the batch size through SOLVER.IMS_PER_BATCH or reduce the input size. If you adjust the batch size, the learning rate and schedule should be adjusted according to the linear scaling rule, for example as in the quick calculation below.
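A quick linear-scaling calculation (the reference batch size, learning rate, and iteration count below are assumptions for illustration; check the values in the config you actually use):

```python
# Linear scaling rule: the learning rate scales with the total batch size,
# and the iteration count scales inversely.
base_batch, base_lr, base_iters = 64, 5e-5, 270000   # assumed reference values

new_batch = 32                                       # e.g. 4 GPUs with 8 images each
scale = new_batch / base_batch
print("scaled lr:   ", base_lr * scale)              # 2.5e-05
print("scaled iters:", int(base_iters / scale))      # 540000
```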

python tools/train_net.py --config-file <CONFIG> --num-gpus 8 
# example
python tools/train_net.py --config-file configs/sparse_inst_r50vd_dcn_giam_aug.yaml --num-gpus 8

Custom Training of SparseInst

  1. We suggest converting your custom dataset into the COCO format, which enables the use of the default dataset mappers and loaders (see the registration sketch after this list). You may find more details in the official guide of detectron2.
  2. Check whether NUM_CLASSES and NUM_MASKS need to be changed for your scenario or task.
  3. Change the configurations accordingly.
  4. After finishing the above steps, you can easily train SparseInst with train_net.py.
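For step 1, a COCO-format dataset can be registered directly with detectron2; a minimal sketch is below (the dataset names and paths are placeholders):

```python
from detectron2.data.datasets import register_coco_instances

# Register a custom COCO-format dataset (names and paths are placeholders).
register_coco_instances(
    "my_dataset_train", {},
    "datasets/my_dataset/annotations/train.json", "datasets/my_dataset/train",
)
register_coco_instances(
    "my_dataset_val", {},
    "datasets/my_dataset/annotations/val.json", "datasets/my_dataset/val",
)
# Then set DATASETS.TRAIN / DATASETS.TEST in your config to these names and
# adjust NUM_CLASSES as noted in step 2.
```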

Acknowledgements

SparseInst is based on detectron2, OneNet, DETR, and timm, and we sincerely thank them for their code and contributions to the community!

Citing SparseInst

If you find SparseInst useful in your research or applications, please consider giving us a star 🌟 and citing SparseInst with the following BibTeX entry.

@inproceedings{Cheng2022SparseInst,
  title     =   {Sparse Instance Activation for Real-Time Instance Segmentation},
  author    =   {Cheng, Tianheng and Wang, Xinggang and Chen, Shaoyu and Zhang, Wenqiang and Zhang, Qian and Huang, Chang and Zhang, Zhaoxiang and Liu, Wenyu},
  booktitle =   {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      =   {2022}
}

License

SparseInst is released under the MIT License.