Squeeze-enhanced axial Transformer

Paper

SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation,
Qiang Wan, Zilong Huang, Jiachen Lu, Gang Yu, Li Zhang
ICLR 2023

This repository contains the official implementation of SeaFormer.

SeaFormer achieves a superior trade-off between performance and latency.

<div align="center"> <img width="1200" src="./latency.png"> </div>

The overall architecture of SeaFormer

<div align="center"> <img width="1200" src="./seaformer.png"> </div>

The schematic illustration of the SeaFormer layer

<div align="center"> <img width="1200" src="./sea_attention.png"> </div>
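The core idea of the squeeze-enhanced axial attention above can be sketched as follows. This is not the repository's implementation (the paper's SeaFormer layer also includes a detail-enhancement convolution branch and multi-head attention); it is a minimal NumPy illustration of the squeeze-then-broadcast pattern, with hypothetical projection weights `wq`, `wk`, `wv`:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squeeze_axial_attention(x, wq, wk, wv):
    # x: (H, W, C) feature map. Squeezing each spatial axis to a 1-D
    # sequence before attending reduces the attention cost from
    # O((H*W)^2) to roughly O(H^2 + W^2).
    h_seq = x.mean(axis=1)  # squeeze along width  -> (H, C)
    w_seq = x.mean(axis=0)  # squeeze along height -> (W, C)

    def attend(seq):
        # Standard scaled dot-product attention on a 1-D sequence.
        q, k, v = seq @ wq, seq @ wk, seq @ wv
        scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
        return scores @ v  # (L, C)

    # Broadcast each axial result back over the other spatial axis.
    h_out = attend(h_seq)[:, None, :]  # (H, 1, C)
    w_out = attend(w_seq)[None, :, :]  # (1, W, C)
    return h_out + w_out               # (H, W, C)
```

The squeeze step is what distinguishes this from plain axial attention: instead of running attention along every row and every column, each axis is pooled to a single sequence first, which is what makes the layer cheap enough for mobile backbones.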

Model Zoo

Image Classification

For classification configs & weights, see >>>here<<<.

| Model | Size | Acc@1 | #Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- |
| SeaFormer-Tiny | 224 | 68.1 | 1.8 | 0.1 |
| SeaFormer-Small | 224 | 73.4 | 4.1 | 0.2 |
| SeaFormer-Base | 224 | 76.4 | 8.7 | 0.3 |
| SeaFormer-Large | 224 | 79.9 | 14.0 | 1.2 |

Semantic Segmentation

For segmentation configs & weights, see >>>here<<<.

| Method | Backbone | Pretrain | Iters | mIoU(ss) |
| --- | --- | --- | --- | --- |
| Light Head | SeaFormer-Tiny | ImageNet-1K | 160K | 36.5 |
| Light Head | SeaFormer-Small | ImageNet-1K | 160K | 39.4 |
| Light Head | SeaFormer-Base | ImageNet-1K | 160K | 41.9 |
| Light Head | SeaFormer-Large | ImageNet-1K | 160K | 43.8 |

| Method | Backbone | FLOPs | mIoU |
| --- | --- | --- | --- |
| Light Head(h) | SeaFormer-Small | 2.0G | 71.1 |
| Light Head(f) | SeaFormer-Small | 8.0G | 76.4 |
| Light Head(h) | SeaFormer-Base | 3.4G | 72.2 |
| Light Head(f) | SeaFormer-Base | 13.7G | 77.7 |

BibTeX

@inproceedings{wan2023seaformer,
  title     = {SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation},
  author    = {Wan, Qiang and Huang, Zilong and Lu, Jiachen and Yu, Gang and Zhang, Li},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2023}
}

Acknowledgment

Thanks to the following open-source repositories:
TopFormer
mmsegmentation
pytorch-image-models