CAT: Cross Attention in Vision Transformer

This is the official implementation of "CAT: Cross Attention in Vision Transformer".

Abstract

Since Transformer has found widespread use in NLP, its potential in CV has been realized and has inspired many new approaches. However, replacing word tokens with image patches for Transformer after tokenizing the image requires a vast amount of computation (e.g., ViT), which bottlenecks model training and inference. In this paper, we propose a new attention mechanism in Transformer termed Cross Attention, which alternates attention within each image patch instead of over the whole image to capture local information, with attention between image patches, divided from single-channel feature maps, to capture global information. Both operations require less computation than standard self-attention in Transformer. By alternately applying attention within patches and between patches, we implement cross attention to maintain performance at a lower computational cost and build a hierarchical network called Cross Attention Transformer (CAT) for other vision tasks. Our base model achieves state-of-the-art results on ImageNet-1K and improves the performance of other methods on COCO and ADE20K, illustrating that our network has the potential to serve as a general backbone.
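The sketch below is a minimal, simplified illustration of the alternating scheme described above, written in plain PyTorch: attention within each local patch for local information, then attention between the patches of each single-channel feature map for global information. The function names (`ipsa`, `cpsa`), tensor shapes, and the `patch_size` value are assumptions for illustration only and do not reflect the repository's actual modules, which use learned projections, multi-head attention, and further details described in the paper.

```python
# Illustrative sketch only: un-projected, single-head scaled dot-product attention.
import torch
import torch.nn.functional as F


def ipsa(x, patch_size):
    """Inner-patch attention: attention among pixels inside each local patch.
    x: (B, H, W, C); H and W are assumed divisible by patch_size."""
    B, H, W, C = x.shape
    p = patch_size
    # Group pixels into non-overlapping p x p patches -> (B * num_patches, p*p, C)
    patches = (x.view(B, H // p, p, W // p, p, C)
                 .permute(0, 1, 3, 2, 4, 5)
                 .reshape(-1, p * p, C))
    attn = F.softmax(patches @ patches.transpose(-2, -1) / C ** 0.5, dim=-1)
    out = attn @ patches
    # Fold patches back to (B, H, W, C)
    return (out.view(B, H // p, W // p, p, p, C)
               .permute(0, 1, 3, 2, 4, 5)
               .reshape(B, H, W, C))


def cpsa(x, patch_size):
    """Cross-patch attention: each single-channel map is split into patches,
    and attention is applied between those patches."""
    B, H, W, C = x.shape
    p = patch_size
    n = (H // p) * (W // p)  # number of patches per channel
    # Every patch of a single channel becomes one token -> (B, C, n, p*p)
    tokens = (x.permute(0, 3, 1, 2)
                .reshape(B, C, H // p, p, W // p, p)
                .permute(0, 1, 2, 4, 3, 5)
                .reshape(B, C, n, p * p))
    attn = F.softmax(tokens @ tokens.transpose(-2, -1) / (p * p) ** 0.5, dim=-1)
    out = attn @ tokens  # (B, C, n, p*p)
    # Fold back to (B, H, W, C)
    return (out.view(B, C, H // p, W // p, p, p)
               .permute(0, 2, 4, 3, 5, 1)
               .reshape(B, H, W, C))


# One cross-attention block alternates the two: local mixing, then global mixing.
x = torch.randn(2, 56, 56, 96)
x = ipsa(x, patch_size=7)   # local information within each 7x7 patch
x = cpsa(x, patch_size=7)   # global information across patches, per channel
x = ipsa(x, patch_size=7)
print(x.shape)  # torch.Size([2, 56, 56, 96])
```

Because attention is restricted either to the pixels inside a patch or to the set of patches of a single channel, each of the two operations attends over far fewer tokens than full self-attention over the whole image, which is the source of the computational savings claimed above.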

CAT achieves strong performance on COCO object detection (implemented with mmdetection) and ADE20K semantic segmentation (implemented with mmsegmentation).

Figure: overall architecture of CAT.

NOTE: All pretrained models and logs are available on the releases page.

Pretrained Models and Results on ImageNet-1K

| name | resolution | acc@1 | acc@5 | #params | FLOPs | Throughput | model | log |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CAT-T<sup>*</sup> | 224x224 | 80.3 | 95.0 | 17M | 2.8G | 857 imgs/s | - | - |
| CAT-S<sup>*</sup> | 224x224 | 81.8 | 95.6 | 37M | 5.9G | 525 imgs/s | - | - |
| CAT-B<sup>*</sup> | 224x224 | 82.8 | 96.1 | 52M | 8.9G | 384 imgs/s | - | - |
| CAT-T-v2 | 224x224 | 81.7 | 95.5 | 36M | 3.9G | Coming | Coming | Coming |

Note: <sup>*</sup> indicates the new version of the model and log. Throughput is evaluated on a V100 GPU.

Models and Results on Object Detection (COCO 2017 val)

| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs | model | log |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CAT-S | Mask R-CNN<sup>+</sup> | ImageNet-1K | 1x | 41.6 | 38.6 | 57M | 295G | - | - |
| CAT-B | Mask R-CNN<sup>+</sup> | ImageNet-1K | 1x | 41.8 | 38.7 | 71M | 356G | - | - |
| CAT-S | FCOS | ImageNet-1K | 1x | 40.0 | - | 45M | 245G | - | - |
| CAT-B | FCOS | ImageNet-1K | 1x | 41.0 | - | 59M | 303G | - | - |
| CAT-S | ATSS | ImageNet-1K | 1x | 42.0 | - | 45M | 243G | - | - |
| CAT-B | ATSS | ImageNet-1K | 1x | 42.5 | - | 59M | 303G | - | - |
| CAT-S | RetinaNet | ImageNet-1K | 1x | 40.1 | - | 47M | 276G | - | - |
| CAT-B | RetinaNet | ImageNet-1K | 1x | 41.4 | - | 62M | 337G | - | - |
| CAT-S | Cascade R-CNN | ImageNet-1K | 1x | 44.1 | - | 82M | 270G | - | - |
| CAT-B | Cascade R-CNN | ImageNet-1K | 1x | 44.8 | - | 96M | 330G | - | - |
| CAT-S | Cascade R-CNN<sup>+</sup> | ImageNet-1K | 1x | 45.2 | - | 82M | 270G | - | - |
| CAT-B | Cascade R-CNN<sup>+</sup> | ImageNet-1K | 1x | 46.3 | - | 96M | 330G | - | - |

Note: <sup>+</sup> indicates multi-scale training.

Models and Results on Semantic Segmentation (ADE20K val)

| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs | model | log |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CAT-S | Semantic FPN | ImageNet-1K | 512x512 | 80K | 40.6 | 42.1 | 41M | 214G | - | - |
| CAT-B | Semantic FPN | ImageNet-1K | 512x512 | 80K | 42.2 | 43.6 | 55M | 276G | - | - |
| CAT-S | Semantic FPN | ImageNet-1K | 512x512 | 160K | 42.2 | 42.8 | 41M | 214G | - | - |
| CAT-B | Semantic FPN | ImageNet-1K | 512x512 | 160K | 43.2 | 44.9 | 55M | 276G | - | - |

Citing CAT

You can cite the paper as:

@article{lin2021cat,
  title={CAT: Cross Attention in Vision Transformer},
  author={Hezheng Lin and Xing Cheng and Xiangyu Wu and Fan Yang and Dong Shen and Zhongyuan Wang and Qing Song and Wei Yuan},
  journal={arXiv preprint arXiv:2106.05786},
  year={2021}
}

Getting Started

Please refer to get_started.

Acknowledgement

Our implementation is mainly based on Swin.