Swin Transformer for Semantic Segmentation

This repo contains the supported code and configuration files to reproduce the semantic segmentation results of Swin Transformer. It is based on mmsegmentation.

Updates

05/11/2021 Models for MoBY are released

04/12/2021 Initial commits

Results and Models

ADE20K

Backbone | Method | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs | config | log | model
-------- | ------ | --------- | ------- | ---- | -------------- | ------- | ----- | ------ | --- | -----
Swin-T | UPerNet | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G | config | github/baidu | github/baidu
Swin-S | UPerNet | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G | config | github/baidu | github/baidu
Swin-B | UPerNet | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G | config | github/baidu | github/baidu

Notes:

Results of MoBY with Swin Transformer

ADE20K

Backbone | Method | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs | config | log | model
-------- | ------ | --------- | ------- | ---- | -------------- | ------- | ----- | ------ | --- | -----
Swin-T | UPerNet | 512x512 | 160K | 44.06 | 45.58 | 60M | 945G | config | github/baidu | github/baidu

Notes:

Usage

Installation

Please refer to get_started.md for installation and dataset preparation.
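For orientation, a typical environment setup follows the usual mmsegmentation pattern sketched below. This is illustrative only; the package versions and environment name are assumptions, so defer to get_started.md for the authoritative steps.

# illustrative setup sketch; versions are assumptions, see get_started.md
conda create -n swin-seg python=3.7 -y
conda activate swin-seg
pip install torch torchvision  # choose builds matching your CUDA version
pip install mmcv-full          # core dependency of mmsegmentation
pip install -e .               # install this repo (a fork of mmsegmentation)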

Inference

# single-gpu testing
python tools/test.py <CONFIG_FILE> <SEG_CHECKPOINT_FILE> --eval mIoU

# multi-gpu testing
tools/dist_test.sh <CONFIG_FILE> <SEG_CHECKPOINT_FILE> <GPU_NUM> --eval mIoU

# multi-gpu, multi-scale testing
tools/dist_test.sh <CONFIG_FILE> <SEG_CHECKPOINT_FILE> <GPU_NUM> --aug-test --eval mIoU
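As a concrete instance, evaluating the Swin-T UPerNet entry from the ADE20K table above on a single GPU could look as follows; the .pth filename is a placeholder for whatever checkpoint file you downloaded from the model links.

# example: single-gpu evaluation of Swin-T UPerNet on ADE20K;
# the checkpoint filename below is a placeholder
python tools/test.py configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k.py upernet_swin_tiny_ade20k.pth --eval mIoU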

Training

To train with pre-trained models, run:

# single-gpu training
python tools/train.py <CONFIG_FILE> --options model.pretrained=<PRETRAIN_MODEL> [model.backbone.use_checkpoint=True] [other optional arguments]

# multi-gpu training
tools/dist_train.sh <CONFIG_FILE> <GPU_NUM> --options model.pretrained=<PRETRAIN_MODEL> [model.backbone.use_checkpoint=True] [other optional arguments] 

For example, to train a UPerNet model with a Swin-T backbone and 8 GPUs, run:

tools/dist_train.sh configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k.py 8 --options model.pretrained=<PRETRAIN_MODEL> 
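If GPU memory is tight, the optional model.backbone.use_checkpoint=True flag shown above enables activation checkpointing in the backbone, which recomputes intermediate activations during the backward pass instead of storing them, trading extra compute for lower memory use:

# same run as above, with activation checkpointing enabled in the backbone
tools/dist_train.sh configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k.py 8 --options model.pretrained=<PRETRAIN_MODEL> model.backbone.use_checkpoint=True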

Notes:

Citing Swin Transformer

@article{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}

Other Links

Image Classification: See Swin Transformer for Image Classification.

Object Detection: See Swin Transformer for Object Detection.

Self-Supervised Learning: See MoBY with Swin Transformer.

Video Recognition: See Video Swin Transformer.