
Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

<div align="center"> <img src="https://github.com/zhiqi-li/Panoptic-SegFormer/raw/master/figs/arch.png" width="100%" height="100%"/> </div><br/>

Panoptic SegFormer has been accepted by CVPR 2022, and the latest version of the paper is available on arXiv.

Results

Results on COCO val

| Backbone | Method | Lr Schd | PQ | Config | Download |
|----------|--------|---------|----|--------|----------|
| R-50 | Panoptic-SegFormer | 1x | 48.0 | config | model |
| R-50 | Panoptic-SegFormer | 2x | 49.6 | config | model |
| R-101 | Panoptic-SegFormer | 2x | 50.6 | config | model |
| PVTv2-B5 (much lighter) | Panoptic-SegFormer | 2x | 55.6 | config | model |
| Swin-L (window size 7) | Panoptic-SegFormer | 2x | 55.8 | config | model |
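
To try one of the released checkpoints locally, the standard mmdet 2.x inference API should work once Panoptic SegFormer is installed. The sketch below is only an assumption-laden example: the checkpoint filename is a placeholder for whichever model file you download, and the exact structure of the returned result depends on the panoptic head defined in the config.

```python
# Minimal inference sketch using the standard mmdet 2.x high-level API.
# The checkpoint path is a hypothetical placeholder for a downloaded model file.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint_file = 'checkpoints/panoptic_segformer_r50_2x.pth'  # placeholder

# Build the model from the config and load the downloaded weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image; the result layout follows the panoptic
# head defined in the config, so inspect it before post-processing.
result = inference_detector(model, 'demo.jpg')
print(type(result))
```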

Install

Prerequisites

Note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9 (see); you can easily patch it by comparing the two versions and applying the difference.
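
If you are not sure which PyTorch release is installed, a quick check like the sketch below will tell you whether the patch is needed (the 1.8 prefix test is only a heuristic for the adamw.py issue mentioned above):

```python
# Warn if the installed PyTorch is a 1.8.x release, which ships the adamw.py
# bug mentioned above (fixed in PyTorch 1.9).
import torch

if torch.__version__.startswith('1.8'):
    print(f'PyTorch {torch.__version__}: patch torch/optim/adamw.py by '
          'diffing it against the 1.9 version.')
else:
    print(f'PyTorch {torch.__version__}: no adamw.py patch needed.')
```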

Install Panoptic SegFormer

python setup.py install 
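
After installation, a quick import is enough to confirm the package is on the Python path; the sketch below assumes the project installs as the easymd module shown in the repository layout later in this README.

```python
# Confirm that the install step registered the package.  `easymd` is the module
# directory shown in the repository layout below; the assumption here is that
# it is the importable name created by `python setup.py install`.
import easymd  # noqa: F401

print('easymd imported successfully')
```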

Datasets

When this project began, mmdet did not officially support panoptic segmentation, so for convenience we convert the dataset from the panoptic segmentation format to the instance segmentation format.

1. Prepare data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
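
Before converting, you can quickly confirm that the panoptic annotations unzipped into the expected place. The sketch below relies only on the standard COCO panoptic JSON keys (images, annotations, categories) and the paths shown in the tree above.

```python
# Quick sanity check that the panoptic annotations landed where the tree
# above expects them.  The keys used here are part of the standard COCO
# panoptic annotation format.
import json

with open('datasets/annotations/panoptic_val2017.json') as f:
    panoptic = json.load(f)

print(f"{len(panoptic['images'])} images, "
      f"{len(panoptic['annotations'])} annotations, "
      f"{len(panoptic['categories'])} categories")
```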

2. Convert the panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
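
Since the converted files follow the standard COCO instance-annotation format, pycocotools can index them directly, which makes for a convenient sanity check (a minimal sketch; paths assume the tree above):

```python
# Sanity-check the converted annotations: the *_detection_format.json files
# follow the standard COCO instance-annotation layout, so pycocotools can
# index them directly (paths assume the tree shown above).
from pycocotools.coco import COCO

coco = COCO('datasets/annotations/panoptic_val2017_detection_format.json')
print(f'{len(coco.getImgIds())} images, '
      f'{len(coco.getAnnIds())} annotations, '
      f'{len(coco.getCatIds())} categories')
```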

Run (panoptic segmentation)

Train

Single machine with 8 GPUs:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8
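
All training hyper-parameters live in the config file passed to the script above. If you want to inspect them before launching a run, mmcv can parse the config directly; the field names printed below follow the usual mmdet config conventions and are assumptions about this particular config.

```python
# Inspect the training config before launching a run.  The optimizer/schedule
# field names follow common mmdet config conventions and are assumed to be
# present in this particular config.
from mmcv import Config

cfg = Config.fromfile('configs/panformer/panformer_r50_24e_coco_panoptic.py')
print(cfg.get('optimizer'))
print(cfg.get('lr_config'))
print(cfg.get('runner'))
```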

Test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
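
If you save panoptic predictions to disk, they can also be scored offline with the official panopticapi, independently of the test script above. The prediction paths in the sketch below are hypothetical placeholders; only the ground-truth paths come from the dataset layout described earlier.

```python
# Offline PQ evaluation with the official panopticapi.  The ground-truth paths
# follow the dataset layout above; the prediction JSON/folder are hypothetical
# placeholders for wherever you dump the model's panoptic outputs.
from panopticapi.evaluation import pq_compute

results = pq_compute(
    gt_json_file='datasets/annotations/panoptic_val2017.json',
    pred_json_file='work_dirs/panoptic_val2017_predictions.json',  # placeholder
    gt_folder='datasets/annotations/panoptic_val2017',
    pred_folder='work_dirs/panoptic_val2017_predictions',          # placeholder
)
print(results)
```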

<a name="Citing"></a>Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@misc{li2021panoptic,
      title={Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers}, 
      author={Zhiqi Li and Wenhai Wang and Enze Xie and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Tong Lu and Ping Luo},
      year={2021},
      eprint={2109.03814},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Many thanks to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst.