
Less is More: Pay Less Attention in Vision Transformers

<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>

This is the official PyTorch implementation of AAAI 2022 paper: Less is More: Pay Less Attention in Vision Transformers.

By Zizheng Pan, Bohan Zhuang, Haoyu He, Jing Liu and Jianfei Cai.

In our paper, we present a novel Less attention vIsion Transformer (LIT), building on the observation that the early self-attention layers in recent hierarchical vision Transformers still focus on local patterns and bring only minor benefits. LIT uses pure multi-layer perceptrons (MLPs) to encode rich local patterns in the early stages, while applying self-attention modules to capture longer-range dependencies in the deeper layers. Moreover, we propose a learned deformable token merging module that adaptively fuses informative patches in a non-uniform manner.
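As a conceptual sketch only (plain NumPy, not our official implementation), the two block types differ in how tokens interact: an early-stage MLP block mixes channels independently per token, while a later-stage block computes full self-attention across tokens. The dimensions and ReLU activation below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def mlp_block(x, w1, w2):
    # Early-stage LIT block: a plain per-token MLP, no token mixing via attention.
    h = np.maximum(x @ w1, 0.0)  # ReLU here for brevity (the paper uses GELU)
    return h @ w2

def attention_block(x, wq, wk, wv):
    # Later-stage block: standard single-head self-attention over all tokens.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax rows sum to 1
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))            # 16 tokens, 64 channels
w1, w2 = rng.standard_normal((64, 128)), rng.standard_normal((128, 64))
wq = wk = wv = rng.standard_normal((64, 64))
print(mlp_block(x, w1, w2).shape, attention_block(x, wq, wk, wv).shape)
```

Both blocks map the token sequence to the same shape, which is what allows the early attention layers to be swapped out for MLPs without changing the rest of the architecture.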

If you use this code for a paper, please cite:

```bibtex
@inproceedings{pan2022litv1,
  title={Less is More: Pay Less Attention in Vision Transformers},
  author={Pan, Zizheng and Zhuang, Bohan and He, Haoyu and Liu, Jing and Cai, Jianfei},
  booktitle={AAAI},
  year={2022}
}
```

Updates

Usage

First, clone this repository.

```bash
git clone git@github.com:ziplab/LIT.git
```

Next, create a conda virtual environment.

```bash
# Make sure you have an NVIDIA GPU.
cd LIT/classification
bash setup_env.sh [conda_install_path] [env_name]

# For example
bash setup_env.sh /home/anaconda3 lit
```

Note: We use PyTorch 1.7.1 with CUDA 10.1 for all experiments. The `setup_env.sh` script lists all dependencies used in our experiments; you can edit it to install a different version of PyTorch or any other packages.

Image Classification on ImageNet

We provide baseline LIT models pretrained on ImageNet-1K. For training and evaluation code, please refer to classification.

| Name | Params (M) | FLOPs (G) | Top-1 Acc. (%) | Model | Log |
| --- | --- | --- | --- | --- | --- |
| LIT-Ti | 19 | 3.6 | 81.1 | google drive/github | log |
| LIT-S | 27 | 4.1 | 81.5 | google drive/github | log |
| LIT-M | 48 | 8.6 | 83.0 | google drive/github | log |
| LIT-B | 86 | 15.0 | 83.4 | google drive/github | log |

Object Detection on COCO

For training and evaluation code, please refer to detection.

RetinaNet

| Backbone | Params (M) | Lr schd | box mAP | Config | Model | Log |
| --- | --- | --- | --- | --- | --- | --- |
| LIT-Ti | 30 | 1x | 41.6 | config | github | log |
| LIT-S | 39 | 1x | 41.6 | config | github | log |

Mask R-CNN

| Backbone | Params (M) | Lr schd | box mAP | mask mAP | Config | Model | Log |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LIT-Ti | 40 | 1x | 42.0 | 39.1 | config | github | log |
| LIT-S | 48 | 1x | 42.9 | 39.6 | config | github | log |

Semantic Segmentation on ADE20K

For training and evaluation code, please refer to segmentation.

Semantic FPN

| Backbone | Params (M) | Iters | mIoU | Config | Model | Log |
| --- | --- | --- | --- | --- | --- | --- |
| LIT-Ti | 24 | 80K | 41.3 | config | github | log |
| LIT-S | 32 | 80K | 41.7 | config | github | log |

Offsets Visualisation

dpm_vis

We provide a script for visualising the learned offsets by the proposed deformable token merging modules (DTM). For example,

```bash
# activate your virtual env
conda activate lit
cd classification/code_for_lit_ti

# visualise
python visualize_offset.py --model lit_ti --resume [path/to/lit_ti.pth] --vis_image visualization/demo.JPEG
```

The plots will be automatically saved under visualization/, with a folder named by the name of the example image.
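Conceptually, DTM replaces the fixed patch-merging grid with sampling locations shifted by learned offsets, so each merged token is gathered from a data-dependent position via bilinear interpolation. The sketch below (NumPy, not our implementation; the offset value is made up for illustration) shows that sampling step for a single grid point:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at fractional coordinates (y, x)."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    dy, dx = y - y0, x - x0
    top = feat[y0, x0] * (1 - dx) + feat[y0, x1] * dx
    bot = feat[y1, x0] * (1 - dx) + feat[y1, x1] * dx
    return top * (1 - dy) + bot * dy

# A regular downsampling grid point at (2, 2), shifted by a learned offset
# (in DTM the offset would come from an offset-prediction layer):
feat = np.arange(16, dtype=float).reshape(4, 4)
offset = (0.5, -0.25)  # hypothetical value for illustration
val = bilinear_sample(feat, 2 + offset[0], 2 + offset[1])
print(val)
```

Because the sampling is differentiable with respect to the offsets, the offset-prediction layer can be trained end-to-end, which is what the visualisation script above inspects.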

Attention Map Visualisation

We provide our method for visualising the attention maps in Figure 3. To save time, we also provide a pretrained PVT model with standard MSA in all stages.

| Name | Params (M) | FLOPs (G) | Top-1 Acc. (%) | Model | Log |
| --- | --- | --- | --- | --- | --- |
| PVT w/ MSA | 20 | 8.4 | 80.9 | github | log |

```bash
conda activate lit
cd classification/code_for_lit_ti

# visualise
# by default, we save the results under 'classification/code_for_lit_ti/attn_results'
python generate_attention_maps.py --data-path [/path/to/imagenet] --resume [/path/to/pvt_full_msa.pth]
```

The resulting folder contains the following items,

```
.
├── attention_map
│   ├── stage-0
│   │   ├── block0
│   │   │   └── pixel-1260-block-0-head-0.png
│   │   ├── block1
│   │   │   └── pixel-1260-block-1-head-0.png
│   │   └── block2
│   │       └── pixel-1260-block-2-head-0.png
│   ├── stage-1
│   ├── stage-2
│   └── stage-3
└── full_msa_eval_maps.npy
```

where full_msa_eval_maps.npy contains the saved attention maps in each block and each stage. The folder attention_map contains the visualisation results.
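For plotting, each saved attention row for a query pixel can be reshaped back onto the spatial token grid. The sketch below uses random data in place of the `.npy` file, and the 56×56 grid size is an assumption (a 224×224 input with 4×4 patches), not something the script guarantees:

```python
import numpy as np

# Stand-in for one attention row (e.g. query pixel 1260) loaded from
# full_msa_eval_maps.npy; here we fabricate it with random numbers.
grid = 56  # assumed stage-0 token grid for a 224x224 input, 4x4 patches
attn_row = np.random.default_rng(0).random(grid * grid)
attn_row /= attn_row.sum()            # attention rows are softmax-normalised
heatmap = attn_row.reshape(grid, grid)  # tokens back onto the spatial grid
print(heatmap.shape, round(float(heatmap.sum()), 6))
```

The resulting 2-D heatmap is what gets rendered into the per-head PNG files listed above.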

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Acknowledgement

This repository adopts code from DeiT, PVT and Swin. We thank the authors for their open-sourced code.