<h1 align="center">[TPAMI 2023] Vision Transformer with Quadrangle Attention <a href="https://arxiv.org/abs/2303.15105"><img src="https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg"></a></h1>

<h4 align="center">This is the official repository of the paper <a href="https://arxiv.org/abs/2303.15105">Vision Transformer with Quadrangle Attention</a>.</h4>

<h5 align="center"><em>Qiming Zhang, Jing Zhang, Yufei Xu, and Dacheng Tao</em></h5>

<p align="center">
  <a href="#news">News</a> |
  <a href="#abstract">Abstract</a> |
  <a href="#method">Method</a> |
  <a href="#usage">Usage</a> |
  <a href="#results">Results</a> |
  <a href="#statement">Statement</a>
</p>

Current applications

Classification: hierarchical models have been released; plain models will be released soon.

Object Detection: will be released soon.

Semantic Segmentation: will be released soon.

Human Pose Estimation: will be released soon.

News

24/01/2024

30/12/2023

27/03/2023

Abstract

<p align="left">This repository contains the code, models, and test results for the paper <a href="https://arxiv.org/abs/2303.15105">Vision Transformer with Quadrangle Attention</a>, which is a substantial extension of our ECCV 2022 paper <a href="https://arxiv.org/pdf/2204.08446.pdf">VSA</a>. We extend window-based attention to a general quadrangle formulation and propose a novel quadrangle attention. Specifically, we employ an end-to-end learnable quadrangle regression module that predicts a transformation matrix to transform default windows into target quadrangles for token sampling and attention calculation, enabling the network to model targets with various shapes and orientations and to capture rich context information. With minor code modifications and negligible extra computational cost, our QFormer outperforms existing representative (hierarchical and plain) vision transformers on various vision tasks, including classification, object detection, semantic segmentation, and pose estimation.</p>
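To make the idea concrete, here is a minimal PyTorch sketch, assuming window-partitioned features and using a plain affine transform for brevity (the paper's formulation composes scaling, shear, rotation, translation, and projection terms to produce general quadrangles). It is an illustration only, not the code in this repository, and all names in it are made up for the example.

```python
# Illustrative sketch only -- not the repository's implementation of quadrangle attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuadrangleSampling(nn.Module):
    """Warp each default window with a predicted transform and resample its tokens."""
    def __init__(self, dim):
        super().__init__()
        # Predict 6 offsets per window from pooled window features; zero offsets keep
        # the identity transform, i.e. the default hand-crafted window.
        self.pred = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, 6, 1))

    def forward(self, x):
        # x: (num_windows * batch, C, window_size, window_size), already window-partitioned
        n = x.shape[0]
        identity = x.new_tensor([[1., 0., 0.], [0., 1., 0.]]).expand(n, -1, -1)
        theta = identity + self.pred(x).view(n, 2, 3)                    # per-window transform
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)  # warped sampling grid
        # Keys and values are gathered at the warped (quadrangle) locations; attention is
        # then computed inside each window exactly as in standard window attention.
        return F.grid_sample(x, grid, mode='bilinear', align_corners=False)

if __name__ == "__main__":
    windows = torch.randn(8, 96, 7, 7)   # 8 windows of 7x7 tokens with 96 channels
    print(QuadrangleSampling(dim=96)(windows).shape)  # torch.Size([8, 96, 7, 7])
```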

Method

<figure>
  <img src="figs/opening.jpg">
  <figcaption align="center"><b>Fig.1 - Comparison of the current hand-crafted window design and quadrangle attention.</b></figcaption>
</figure>
<figure>
  <img src="figs/pipeline-QA.jpg">
  <figcaption align="center"><b>Fig.2 - The pipeline of our proposed quadrangle attention (QA).</b></figcaption>
</figure>
<figure>
  <img src="figs/transformation.jpg">
  <figcaption align="center"><b>Fig.3 - The transformation process in quadrangle attention.</b></figcaption>
</figure>
<figure>
  <img src="figs/model.jpg">
  <figcaption align="center"><b>Fig.4 - The architecture of our plain QFormer<sub>p</sub> (a) and hierarchical QFormer<sub>h</sub> (b).</b></figcaption>
</figure>

Usage

Requirements

Apex is optional and is only needed for faster training.

```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

Other Requirements

```bash
pip install opencv-python==4.4.0.46 termcolor==1.1.0 yacs==0.1.8 timm==0.4.9
pip install einops
```
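
Optionally, the following quick check (a convenience sketch, not part of the repository) confirms that the pinned packages import correctly and reports whether the optional Apex path is available:

```python
# Optional environment check; nothing here is required by the training scripts.
import cv2, termcolor, yacs, timm, einops  # the packages pinned above
print("opencv:", cv2.__version__, "| timm:", timm.__version__)
try:
    from apex import amp  # noqa: F401 -- only used for the faster Apex training path
    print("Apex AMP is available")
except ImportError:
    print("Apex is not installed; training still works, just without the Apex speed-up")
```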

Train & Eval

To train from scratch for classification on ImageNet-1K with multiple GPUs, run:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch \
  --nnodes ${NNODES} \
  --node_rank ${SLURM_NODEID} \
  --master_addr ${MHOST} \
  --master_port 25901 \
  --nproc_per_node 8 \
  ./main.py \
  --cfg configs/swin/qformer_tiny_patch4_window7_224.yaml \
  --data-path ${IMAGE_PATH} \
  --batch-size 128 \
  --tag 1024-dpr20-coords_lambda1e-1 \
  --distributed \
  --coords_lambda 1e-1 \
  --drop_path_rate 0.2
```

For single-GPU training, run:

```bash
python ./main.py \
  --cfg configs/swin/qformer_tiny_patch4_window7_224.yaml \
  --data-path ${IMAGE_PATH} \
  --batch-size 128 \
  --tag 1024-dpr20-coords_lambda1e-1 \
  --coords_lambda 1e-1 \
  --drop_path_rate 0.2
```

To evaluate with multiple GPUs, run:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch \
  --nnodes ${NNODES} \
  --node_rank ${SLURM_NODEID} \
  --master_addr ${MHOST} \
  --master_port 25901 \
  --nproc_per_node 8 \
  ./main.py \
  --cfg configs/swin/qformer_tiny_patch4_window7_224.yaml \
  --data-path ${IMAGE_PATH} \
  --batch-size 128 \
  --tag eval \
  --distributed \
  --resume ${MODEL_PATH} \
  --eval
```

For single-GPU evaluation, run:

```bash
python ./main.py \
  --cfg configs/swin/qformer_tiny_patch4_window7_224.yaml \
  --data-path ${IMAGE_PATH} \
  --batch-size 128 \
  --tag eval \
  --resume ${MODEL_PATH} \
  --eval
```

Results

Results on plain models

Classification results on ImageNet-1K with MAE pretrained models

| model | resolution | acc@1 | Weights & Logs |
| :--- | :---: | :---: | :---: |
| ViT-B + Window attn | 224x224 | 81.2 | \ |
| ViT-B + Shifted window | 224x224 | 82.0 | \ |
| QFormer<sub>p</sub>-B | 224x224 | 82.9 | Coming soon |

Detection results on COCO with MAE pretrained models and the Mask R-CNN detector, following <a href="https://arxiv.org/abs/2203.16527">ViTDet</a>

| model | box mAP | mask mAP | Params | Weights & Logs |
| :--- | :---: | :---: | :---: | :---: |
| ViTDet-B | 51.6 | 45.9 | 111M | \ |
| QFormer<sub>p</sub>-B | 52.3 | 46.6 | 111M | Coming soon |

Semantic segmentation results on ADE20k with MAE pretrained models and the UPerNet segmentor

| model | image size | mIoU | mIoU* | Weights & Logs |
| :--- | :---: | :---: | :---: | :---: |
| ViT-B + window attn | 512x512 | 39.7 | 41.8 | \ |
| ViT-B + shifted window attn | 512x512 | 41.6 | 43.6 | \ |
| QFormer<sub>p</sub>-B | 512x512 | 43.6 | 45.0 | Coming soon |
| ViT-B + window attn | 640x640 | 40.2 | 41.5 | \ |
| ViT-B + shifted window attn | 640x640 | 42.3 | 43.5 | \ |
| QFormer<sub>p</sub>-B | 640x640 | 44.9 | 46.0 | Coming soon |

Human pose estimation results on COCO with MAE pretrained models, following <a href="https://arxiv.org/abs/2204.12484">ViTPose</a>

| attention | model | AP | AP<sub>50</sub> | AR | AR<sub>50</sub> | Weights & Logs |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| Window | ViT-B | 66.4 | 87.7 | 72.9 | 91.9 | \ |
| Shifted window | ViT-B | 76.4 | 90.9 | 81.6 | 94.5 | \ |
| Quadrangle | ViT-B | 77.0 | 90.9 | 82.0 | 94.7 | Coming soon |
| Window + Full | ViT-B | 76.9 | 90.8 | 82.1 | 94.7 | \ |
| Shifted window + Full | ViT-B | 77.2 | 90.9 | 82.2 | 94.7 | \ |
| Quadrangle + Full | ViT-B | 77.4 | 91.0 | 82.4 | 94.9 | Coming soon |

Results on hierarchical models

Main Results on ImageNet-1K

| name | resolution | acc@1 | acc@5 | acc@RealTop-1 | Weights & Logs |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Swin-T | 224x224 | 81.2 | \ | \ | \ |
| DW-T | 224x224 | 82.0 | \ | \ | \ |
| Focal-T | 224x224 | 82.2 | 95.9 | \ | \ |
| QFormer<sub>h</sub>-T | 224x224 | 82.5 | 96.2 | 87.5 | model & logs |
| Swin-S | 224x224 | 83.2 | 96.2 | \ | \ |
| Focal-S | 224x224 | 83.5 | 96.2 | \ | \ |
| QFormer<sub>h</sub>-S | 224x224 | 84.0 | 96.8 | 88.6 | model & logs |
| Swin-B | 224x224 | 83.4 | 96.5 | \ | \ |
| DW-B | 224x224 | 83.4 | \ | \ | \ |
| Focal-B | 224x224 | 83.8 | 96.5 | \ | \ |
| QFormer<sub>h</sub>-B | 224x224 | 84.1 | 96.8 | 88.7 | model & logs |

Object Detection Results

Mask R-CNN

| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | config | log | model |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | ImageNet-1K | 1x | 43.7 | 39.8 | 48M | \ | \ | \ |
| DAT-T | ImageNet-1K | 1x | 44.4 | 40.4 | 48M | \ | \ | \ |
| Focal-T | ImageNet-1K | 1x | 44.8 | 41.0 | 49M | \ | \ | \ |
| QFormer<sub>h</sub>-T | ImageNet-1K | 1x | 45.9 | 41.5 | 49M | config | log | onedrive |
| Swin-T | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | \ | \ | \ |
| DW-T | ImageNet-1K | 3x | 46.7 | 42.4 | 49M | \ | \ | \ |
| DAT-T | ImageNet-1K | 3x | 47.1 | 42.4 | 48M | \ | \ | \ |
| QFormer<sub>h</sub>-T | ImageNet-1K | 3x | 47.5 | 42.7 | 49M | config | log | onedrive |
| Swin-S | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | \ | \ | \ |
| Focal-S | ImageNet-1K | 3x | 48.8 | 43.8 | 71M | \ | \ | \ |
| DAT-S | ImageNet-1K | 3x | 49.0 | 44.0 | 69M | \ | \ | \ |
| QFormer<sub>h</sub>-S | ImageNet-1K | 3x | 49.5 | 44.2 | 70M | config | log | onedrive |

Cascade Mask R-CNN

| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | config | log | model |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | ImageNet-1K | 1x | 48.1 | 41.7 | 86M | \ | \ | \ |
| DAT-T | ImageNet-1K | 1x | 49.1 | 42.5 | 86M | \ | \ | \ |
| QFormer<sub>h</sub>-T | ImageNet-1K | 1x | 49.8 | 43.0 | 87M | config | log | onedrive |
| Swin-T | ImageNet-1K | 3x | 50.2 | 43.7 | 86M | \ | \ | \ |
| QFormer<sub>h</sub>-T | ImageNet-1K | 3x | 51.4 | 44.7 | 87M | config | log | onedrive |
| Swin-S | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | \ | \ | \ |
| QFormer<sub>h</sub>-S | ImageNet-1K | 3x | 52.8 | 45.7 | 108M | config | log | onedrive |

Semantic Segmentation Results for ADE20k

UperNet

| Backbone | Pretrain | Lr Schd | mIoU | mIoU* | #params | config | log | model |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | ImageNet-1k | 160k | 44.5 | 45.8 | 60M | \ | \ | \ |
| DAT-T | ImageNet-1k | 160k | 45.5 | 46.4 | 60M | \ | \ | \ |
| DW-T | ImageNet-1k | 160k | 45.7 | 46.9 | 61M | \ | \ | \ |
| Focal-T | ImageNet-1k | 160k | 45.8 | 47.0 | 62M | \ | \ | \ |
| QFormer<sub>h</sub>-T | ImageNet-1k | 160k | 46.9 | 48.1 | 61M | Coming soon | Coming soon | Coming soon |
| Swin-S | ImageNet-1k | 160k | 47.6 | 49.5 | 81M | \ | \ | \ |
| DAT-S | ImageNet-1k | 160k | 48.3 | 49.8 | 81M | \ | \ | \ |
| Focal-S | ImageNet-1k | 160k | 48.0 | 50.0 | 61M | \ | \ | \ |
| QFormer<sub>h</sub>-S | ImageNet-1k | 160k | 48.9 | 50.3 | 82M | Coming soon | Coming soon | Coming soon |
| Swin-B | ImageNet-1k | 160k | 48.1 | 49.7 | 121M | \ | \ | \ |
| DW-B | ImageNet-1k | 160k | 48.7 | 50.3 | 125M | \ | \ | \ |
| Focal-B | ImageNet-1k | 160k | 49.0 | 50.5 | 126M | \ | \ | \ |
| QFormer<sub>h</sub>-B | ImageNet-1k | 160k | 49.5 | 50.6 | 123M | Coming soon | Coming soon | Coming soon |

Statement

This project is for research purposes only. For any other questions, please contact qmzhangzz at hotmail.com.

The code base is adapted from Swin.

Citing QFormer, VSA and ViTAE

```bibtex
@article{zhang2023vision,
  title={Vision Transformer with Quadrangle Attention},
  author={Zhang, Qiming and Zhang, Jing and Xu, Yufei and Tao, Dacheng},
  journal={arXiv preprint arXiv:2303.15105},
  year={2023}
}

@inproceedings{zhang2022vsa,
  title={VSA: Learning Varied-Size Window Attention in Vision Transformers},
  author={Zhang, Qiming and Xu, Yufei and Zhang, Jing and Tao, Dacheng},
  booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XXV},
  pages={466--483},
  year={2022},
  organization={Springer}
}

@article{zhang2023vitaev2,
  title={ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond},
  author={Zhang, Qiming and Xu, Yufei and Zhang, Jing and Tao, Dacheng},
  journal={International Journal of Computer Vision},
  pages={1--22},
  year={2023},
  publisher={Springer}
}

@article{xu2021vitae,
  title={ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias},
  author={Xu, Yufei and Zhang, Qiming and Zhang, Jing and Tao, Dacheng},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
```

Our other Transformer works

ViTPose: Please see <a href="https://github.com/ViTAE-Transformer/ViTPose">Baseline model ViTPose for human pose estimation</a>;

VSA: Please see <a href="https://github.com/ViTAE-Transformer/ViTAE-VSA">ViTAE-Transformer for Image Classification and Object Detection</a>;

ViTAE & ViTAEv2: Please see <a href="https://github.com/ViTAE-Transformer/ViTAE-Transformer">ViTAE-Transformer for Image Classification, Object Detection, and Semantic Segmentation</a>;

Matting: Please see <a href="https://github.com/ViTAE-Transformer/ViTAE-Transformer-Matting">ViTAE-Transformer for matting</a>;

Remote Sensing: Please see <a href="https://github.com/ViTAE-Transformer/ViTAE-Transformer-Remote-Sensing">ViTAE-Transformer for Remote Sensing</a> and <a href="https://github.com/ViTAE-Transformer/Remote-Sensing-RVSA">Advancing Plain Vision Transformer Towards Remote Sensing Foundation Model</a>.