
Fast Vision Transformers with HiLo Attention 👋 (NeurIPS 2022 Spotlight)

<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>

This is the official PyTorch implementation of Fast Vision Transformers with HiLo Attention.

By Zizheng Pan, Jianfei Cai, and Bohan Zhuang.

News

A Gentle Introduction

*(Figure: LITv2 overview)*

We introduce LITv2, a simple and effective ViT that performs favourably against existing state-of-the-art methods across a spectrum of model sizes while running faster.

*(Figure: the HiLo attention mechanism)*

The core of LITv2 is HiLo attention. HiLo is inspired by the insight that high frequencies in an image capture local fine details while low frequencies capture global structure, whereas a standard multi-head self-attention layer ignores this distinction. We therefore disentangle the high- and low-frequency patterns within an attention layer by splitting the heads into two groups: one group (Hi-Fi) encodes high frequencies via self-attention within each local window, and the other group (Lo-Fi) models global relationships by attending each query position in the input feature map to the average-pooled low-frequency keys from each window.
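As a rough illustration of the two ingredients above, the following sketch computes the head split and the Lo-Fi window pooling. This is our own minimal sketch, not the official implementation: the assumption that α gives the fraction of Lo-Fi heads (rounded down) is ours, taken from the α table below.

```python
import torch
import torch.nn.functional as F

# Hypothetical settings matching the demo in this README: 12 heads, 2x2 windows.
# Assumption (not the official code): alpha is the fraction of Lo-Fi heads.
dim, num_heads, window_size, alpha = 384, 12, 2, 0.5

lo_fi_heads = int(num_heads * alpha)   # attend to pooled low-frequency keys
hi_fi_heads = num_heads - lo_fi_heads  # self-attention within local windows

# Lo-Fi branch: average-pool each non-overlapping window so that keys and
# values carry only the low-frequency content of the feature map.
B, H, W = 64, 14, 14
x = torch.randn(B, H * W, dim)                  # token sequence
feat = x.transpose(1, 2).reshape(B, dim, H, W)  # back to a 2-D feature map
low_freq = F.avg_pool2d(feat, kernel_size=window_size, stride=window_size)

print(lo_fi_heads, hi_fi_heads)  # 6 6
print(low_freq.shape)            # torch.Size([64, 384, 7, 7])
```

Hi-Fi attention then runs independently inside each 2×2 window, while Lo-Fi attention lets each of the 196 query positions attend to the 49 pooled keys.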

A Simple Demo

To quickly understand HiLo attention, you only need to install PyTorch and try the following code in the root directory of this repo.

```python
from hilo import HiLo
import torch

model = HiLo(dim=384, num_heads=12, window_size=2, alpha=0.5)

x = torch.randn(64, 196, 384)  # batch_size x num_tokens x hidden_dimension
out = model(x, 14, 14)         # 14 x 14 = 196 tokens
print(out.shape)
print(model.flops(14, 14))     # the number of FLOPs
```

Output:

```
torch.Size([64, 196, 384])
83467776
```

Installation

Requirements

Conda environment setup

Note: You can reuse the same environment to debug LITv1. Otherwise, you can create a new conda environment with the following script.

```bash
conda create -n lit python=3.7
conda activate lit

# Install PyTorch and TorchVision
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

pip install timm
pip install ninja
pip install tensorboard

# Install NVIDIA apex
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd ../
rm -rf apex/

# Build Deformable Convolution
cd mm_modules/DCN
python setup.py build install

pip install opencv-python==4.4.0.46 termcolor==1.1.0 yacs==0.1.8
```

Getting Started

For image classification on ImageNet, please refer to classification.

For object detection on COCO 2017, please refer to detection.

For semantic segmentation on ADE20K, please refer to segmentation.

Results and Model Zoo

Note: For your convenience, you can find all models and logs on Google Drive (4.8G in total). Alternatively, we also provide download links on GitHub.

Image Classification on ImageNet-1K

All models are trained with 300 epochs with a total batch size of 1024 on 8 V100 GPUs.

| Model | Resolution | Params (M) | FLOPs (G) | Throughput (imgs/s) | Train Mem (GB) | Test Mem (GB) | Top-1 (%) | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LITv2-S | 224 | 28 | 3.7 | 1,471 | 5.1 | 1.2 | 82.0 | model & log |
| LITv2-M | 224 | 49 | 7.5 | 812 | 8.8 | 1.4 | 83.3 | model & log |
| LITv2-B | 224 | 87 | 13.2 | 602 | 12.2 | 2.1 | 83.6 | model & log |
| LITv2-B | 384 | 87 | 39.7 | 198 | 35.8 | 4.6 | 84.7 | model |

By default, throughput and memory footprint are tested on one RTX 3090 with a batch size of 64. Memory is measured as the peak memory usage reported by torch.cuda.max_memory_allocated(). Throughput is averaged over 30 runs.
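A minimal sketch of such a measurement (our own illustration of the setup described above, not the script used to produce the table):

```python
import time
import torch

def benchmark(model, batch_size=64, resolution=224, runs=30, warmup=10):
    """Rough throughput (imgs/s) and peak-memory (GB) measurement sketch."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, resolution, resolution, device=device)
    with torch.no_grad():
        for _ in range(warmup):  # warm up so kernel selection is excluded
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
            torch.cuda.reset_peak_memory_stats()
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_gb = (torch.cuda.max_memory_allocated() / 2**30
               if device == "cuda" else float("nan"))
    return runs * batch_size / elapsed, peak_gb
```

For example, `benchmark(model)` on a LITv2 model returns the averaged images/second and the peak memory in GB; the CUDA synchronizations matter because GPU kernels launch asynchronously.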

Pretrained LITv2-S with Different Values of Alpha

| Alpha | Params (M) | Lo-Fi Heads | Hi-Fi Heads | FLOPs (G) | ImageNet Top-1 (%) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| 0.0 | 28 | 0 | 12 | 3.97 | 81.16 | github |
| 0.2 | 28 | 2 | 10 | 3.88 | 81.89 | github |
| 0.4 | 28 | 4 | 8 | 3.82 | 81.81 | github |
| 0.5 | 28 | 6 | 6 | 3.77 | 81.88 | github |
| 0.7 | 28 | 8 | 4 | 3.74 | 81.94 | github |
| 0.9 | 28 | 10 | 2 | 3.73 | 82.03 | github |
| 1.0 | 28 | 12 | 0 | 3.70 | 81.89 | github |

Pretrained weights from the experiments of Figure 4: Effect of α based on LITv2-S.
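Our reading of how α maps to the head split reported above: Lo-Fi heads = ⌊α · num_heads⌋, which reproduces every row of the table. This rounding rule is an assumption on our part; the official implementation may differ.

```python
# 12 heads, as in the HiLo demo earlier in this README.
num_heads = 12
for alpha in [0.0, 0.2, 0.4, 0.5, 0.7, 0.9, 1.0]:
    lo_fi = int(num_heads * alpha)   # assumed floor-based split
    hi_fi = num_heads - lo_fi
    print(f"alpha={alpha}: {lo_fi} Lo-Fi heads, {hi_fi} Hi-Fi heads")
```

Note that α = 0.9 yields 10 Lo-Fi heads (10.8 rounded down), matching the table.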

Object Detection on COCO 2017

All models are trained with 1x schedule (12 epochs) with a total batch size of 16 on 8 V100 GPUs.

RetinaNet

| Backbone | Window Size | Params (M) | FLOPs (G) | FPS | box AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LITv2-S | 2 | 38 | 242 | 18.7 | 44.0 | config | model & log |
| LITv2-S | 4 | 38 | 230 | 20.4 | 43.7 | config | model & log |
| LITv2-M | 2 | 59 | 348 | 12.2 | 46.0 | config | model & log |
| LITv2-M | 4 | 59 | 312 | 14.8 | 45.8 | config | model & log |
| LITv2-B | 2 | 97 | 481 | 9.5 | 46.7 | config | model & log |
| LITv2-B | 4 | 97 | 430 | 11.8 | 46.3 | config | model & log |

Mask R-CNN

| Backbone | Window Size | Params (M) | FLOPs (G) | FPS | box AP | mask AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LITv2-S | 2 | 47 | 261 | 18.7 | 44.9 | 40.8 | config | model & log |
| LITv2-S | 4 | 47 | 249 | 21.9 | 44.7 | 40.7 | config | model & log |
| LITv2-M | 2 | 68 | 367 | 12.6 | 46.8 | 42.3 | config | model & log |
| LITv2-M | 4 | 68 | 315 | 16.0 | 46.5 | 42.0 | config | model & log |
| LITv2-B | 2 | 106 | 500 | 9.3 | 47.3 | 42.6 | config | model & log |
| LITv2-B | 4 | 106 | 449 | 11.5 | 46.8 | 42.3 | config | model & log |

Semantic Segmentation on ADE20K

All models are trained with 80K iterations with a total batch size of 16 on 8 V100 GPUs.

| Backbone | Params (M) | FLOPs (G) | FPS | mIoU | Config | Download |
| --- | --- | --- | --- | --- | --- | --- |
| LITv2-S | 31 | 41 | 42.6 | 44.3 | config | model & log |
| LITv2-M | 52 | 63 | 28.5 | 45.7 | config | model & log |
| LITv2-B | 90 | 93 | 27.5 | 47.2 | config | model & log |

Benchmarking Throughput on More GPUs

| Model | Params (M) | FLOPs (G) | A100 | V100 | RTX 6000 | RTX 3090 | Top-1 (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | 26 | 4.1 | 1,424 | 1,123 | 877 | 1,279 | 80.4 |
| PVT-S | 25 | 3.8 | 1,460 | 798 | 548 | 1,007 | 79.8 |
| Twins-PCPVT-S | 24 | 3.8 | 1,455 | 792 | 529 | 998 | 81.2 |
| Swin-Ti | 28 | 4.5 | 1,564 | 1,039 | 710 | 961 | 81.3 |
| TNT-S | 24 | 5.2 | 802 | 431 | 298 | 534 | 81.3 |
| CvT-13 | 20 | 4.5 | 1,595 | 716 | 379 | 947 | 81.6 |
| CoAtNet-0 | 25 | 4.2 | 1,538 | 962 | 643 | 1,151 | 81.6 |
| CaiT-XS24 | 27 | 5.4 | 991 | 484 | 299 | 623 | 81.8 |
| PVTv2-B2 | 25 | 4.0 | 1,175 | 670 | 451 | 854 | 82.0 |
| XCiT-S12 | 26 | 4.8 | 1,727 | 761 | 504 | 1,068 | 82.0 |
| ConvNext-Ti | 28 | 4.5 | 1,654 | 762 | 571 | 1,079 | 82.1 |
| Focal-Tiny | 29 | 4.9 | 471 | 372 | 261 | 384 | 82.2 |
| LITv2-S | 28 | 3.7 | 1,874 | 1,304 | 928 | 1,471 | 82.0 |

Single Attention Layer Benchmark

For the code to reproduce the following visualization, please refer to vit-attention-benchmark.

*(Figure: single attention layer benchmark on CPU and GPU)*

Citation

If you use LITv2 in your research, please consider citing the following BibTeX entry and giving us a star 🌟.

```bibtex
@inproceedings{pan2022hilo,
  title={Fast Vision Transformers with HiLo Attention},
  author={Pan, Zizheng and Cai, Jianfei and Zhuang, Bohan},
  booktitle={NeurIPS},
  year={2022}
}
```

If you find the code useful, please also consider citing the following BibTeX entry.

```bibtex
@inproceedings{pan2022litv1,
  title={Less is More: Pay Less Attention in Vision Transformers},
  author={Pan, Zizheng and Zhuang, Bohan and He, Haoyu and Liu, Jing and Cai, Jianfei},
  booktitle={AAAI},
  year={2022}
}
```

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Acknowledgement

This repository is built upon DeiT, Swin and LIT. We thank the authors for their open-sourced code.