<div align="center">

GroupMixFormer: Advancing Vision Transformers with Group-Mix Attention

Chongjian Ge, Xiaohan Ding, Zhan Tong, Li Yuan, Jiangliu Wang, Yibing Song, Ping Luo <br>

</div>

Official PyTorch implementation of the paper *GroupMixFormer: Advancing Vision Transformers with Group-Mix Attention*.

<img src="./pics/teaser.png" alt="GroupMixFormer teaser figure">

🐱 Abstract

<b>TL;DR: </b>

<p style="text-align: justify;"> We introduce GroupMixFormer, which employs Group-Mix Attention (GMA) as an advanced substitute for conventional self-attention. GMA is designed to concurrently capture correlations between individual tokens as well as between different groups of tokens, accommodating diverse group sizes. </p> <details><summary><b>Full abstract</b></summary> <p style="text-align: justify;"> Vision Transformers (ViTs) have been shown to enhance visual recognition by modeling long-range dependencies with multi-head self-attention (MHSA), which is typically formulated as a Query-Key-Value computation. However, the attention map generated from the Query and Key captures only token-to-token correlations at a single granularity. In this paper, we argue that self-attention should have a more comprehensive mechanism to capture correlations among tokens and groups (i.e., multiple adjacent tokens) for higher representational capacity. We therefore propose Group-Mix Attention (GMA) as an advanced replacement for traditional self-attention, which can simultaneously capture token-to-token, token-to-group, and group-to-group correlations with various group sizes. To this end, GMA uniformly splits the Query, Key, and Value into segments and performs different group aggregations to generate group proxies. The attention map is computed based on the mixtures of tokens and group proxies and is used to re-combine the tokens and groups in the Value. Based on GMA, we introduce a powerful backbone, namely GroupMixFormer, which achieves state-of-the-art performance in image classification, object detection, and semantic segmentation with fewer parameters than existing models. For instance, GroupMixFormer-L (with 70.3M parameters and 384^2 input) attains 86.2% Top-1 accuracy on ImageNet-1K without external data, while GroupMixFormer-B (with 45.8M parameters) attains 51.2% mIoU on ADE20K. </p> </details>
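For intuition, below is a minimal PyTorch sketch of the Group-Mix idea, not the official implementation (see `groupmixformer.py` for that). It assumes depth-wise convolutions with different kernel sizes as the group aggregators, which is only one plausible instantiation: Q/K/V are split into segments, all but the first segment are aggregated into group proxies, and standard scaled dot-product attention is computed over the resulting mixture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupMixAttentionSketch(nn.Module):
    """Illustrative sketch of Group-Mix Attention (not the official code).

    Q/K/V are split into equal channel segments; segment 0 is kept as raw
    tokens, and the remaining segments are aggregated into "group proxies"
    with depth-wise convolutions of different kernel sizes (an assumed
    choice of aggregator). Attention is then computed over the mixture.
    """

    def __init__(self, dim, num_segments=4, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert dim % num_segments == 0
        self.seg_dim = dim // num_segments
        self.qkv = nn.Linear(dim, dim * 3)
        # One aggregator per non-identity segment (group sizes 3, 5, 7 here).
        self.aggregators = nn.ModuleList([
            nn.Conv2d(self.seg_dim, self.seg_dim, k, padding=k // 2, groups=self.seg_dim)
            for k in kernel_sizes
        ])
        self.proj = nn.Linear(dim, dim)

    def _mix(self, x, H, W):
        # x: (B, N, C). Keep segment 0 as tokens; turn the other segments
        # into group proxies via local spatial aggregation.
        segs = x.split(self.seg_dim, dim=-1)
        mixed = [segs[0]]
        for seg, agg in zip(segs[1:], self.aggregators):
            B, N, C = seg.shape
            seg_2d = seg.transpose(1, 2).reshape(B, C, H, W)
            mixed.append(agg(seg_2d).flatten(2).transpose(1, 2))
        return torch.cat(mixed, dim=-1)

    def forward(self, x, H, W):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = self._mix(q, H, W), self._mix(k, H, W), self._mix(v, H, W)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.proj(attn @ v)

# Tiny smoke test on a 14x14 token grid.
x = torch.randn(2, 14 * 14, 64)
print(GroupMixAttentionSketch(64)(x, 14, 14).shape)  # torch.Size([2, 196, 64])
```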

🚩 Updates

New features

Catalog

⚙️ Usage

1 - Installation

conda create -n groupmixformer python=3.8 -y
conda activate groupmixformer
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/AILab-CVC/GroupMixFormer.git
cd GroupMixFormer
pip install timm==0.4.12 tensorboardX six tensorboard ipdb yacs tqdm fvcore
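After installation, the following snippet (not part of the official scripts) can be used to confirm the environment is set up as expected:

```python
# Quick environment check (illustrative; not part of the official scripts).
import torch, torchvision, timm

print("torch:", torch.__version__)              # expected 1.8.0+cu111
print("torchvision:", torchvision.__version__)  # expected 0.9.0+cu111
print("timm:", timm.__version__)                # expected 0.4.12
print("CUDA available:", torch.cuda.is_available())
```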

2 - Data Preparation

Download and extract ImageNet train and val images from http://image-net.org/. The directory structure is:

│path/to/imagenet/
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
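This layout is the standard `torchvision` ImageFolder format, so it can be sanity-checked before launching training. The snippet below is only an illustration; the official data pipeline in `train.py` may be configured differently:

```python
# Illustrative check that the ImageNet folders are laid out as expected.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("path/to/imagenet/train", transform=transform)
val_set = datasets.ImageFolder("path/to/imagenet/val", transform=transform)
print(len(train_set), "training images,", len(val_set), "validation images")
print("num classes:", len(train_set.classes))  # 1000 for ImageNet-1k
```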

3 - Training Scripts

To train GroupMixFormer-Small on ImageNet-1k for 300 epochs on a single node with 8 GPUs, please run:

python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --use_env train.py \
  --data-path <Your data path> \
  --batch-size 64 \
  --output <Your target output path> \
  --cfg ./configs/groupmixformer_small.yaml \
  --model-type groupmixformer \
  --model-file groupmixformer.py \
  --tag groupmixformer_small

or you can simply run the following script:

bash launch_scripts/run_train.sh

For multi-node training, please refer to the code: multi_machine_start.py
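For reference, `torch.distributed.launch` with `--use_env` starts one process per GPU and exposes the process index via the `LOCAL_RANK` environment variable. The sketch below shows the typical initialization such a launched script performs; it is illustrative and not the actual contents of `train.py`:

```python
# Minimal sketch of single-node DDP initialization (not the actual train.py).
# Run under the launcher, e.g.:
#   python -m torch.distributed.launch --nproc_per_node 8 --use_env this_script.py
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by the launcher with --use_env
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")  # MASTER_ADDR/PORT, RANK, WORLD_SIZE come from the launcher

model = torch.nn.Linear(10, 10).cuda(local_rank)  # placeholder for the GroupMixFormer model
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```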

4 - Inference Scripts

To evaluate GroupMixFormer-Small on ImageNet-1k on a single node, please specify the path to the pretrained weights and run:

CUDA_VISIBLE_DEVICES=1 OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node 1 --nnodes 1 --use_env test.py \
  --data-path <Your data path> \
  --batch-size 64 \
  --output <Your target output path> \
  --cfg ./configs/groupmixformer_small.yaml \
  --model-type groupmixformer \
  --model-file groupmixformer.py \
  --tag groupmixformer_small

or you can simply run the following script:

bash launch_scripts/run_eval.sh

This should give:

* Acc@1 83.400 Acc@5 96.464
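Here Acc@1 and Acc@5 are the standard top-1 and top-5 accuracies, i.e. the fraction of validation images whose ground-truth label is the top prediction or appears among the top five predictions. A minimal illustration of how they are computed:

```python
# How Acc@1 / Acc@5 are computed (illustrative, on random data).
import torch

logits = torch.randn(8, 1000)           # model outputs for a batch
targets = torch.randint(0, 1000, (8,))  # ground-truth labels

top5 = logits.topk(5, dim=1).indices                        # (8, 5), sorted by score
acc1 = (top5[:, 0] == targets).float().mean()
acc5 = (top5 == targets.unsqueeze(1)).any(dim=1).float().mean()
print(f"Acc@1 {acc1:.3f}  Acc@5 {acc5:.3f}")
```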

⏬ Model Zoo

We provide GroupMixFormer models pretrained on ImageNet-1K (2012). You can download the corresponding pretrained weights and move them to the ./pretrained folder (see the loading sketch after the table).

| name | resolution | acc@1 | #params | FLOPs | model - configs |
| --- | --- | --- | --- | --- | --- |
| GroupMixFormer-M | 224x224 | 79.6 | 5.7M | 1.4G | model - configs |
| GroupMixFormer-T | 224x224 | 82.6 | 11.0M | 3.7G | model - configs |
| GroupMixFormer-S | 224x224 | 83.4 | 22.4M | 5.2G | model - configs |
| GroupMixFormer-B | 224x224 | 84.7 | 45.8M | 17.6G | model - configs |
| GroupMixFormer-L | 224x224 | 85.0 | 70.3M | 36.1G | model - configs |
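A downloaded checkpoint can be loaded before evaluation as sketched below. The file name and the `'model'` wrapper key are assumptions based on common timm/Swin-style checkpoints, not confirmed details of this repository:

```python
# Illustrative checkpoint loading (file name and 'model' key are assumptions).
import torch

ckpt = torch.load("./pretrained/groupmixformer_small.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt  # unwrap if wrapped
print(f"{len(state_dict)} parameter tensors in checkpoint")
# model.load_state_dict(state_dict)  # `model` built from groupmixformer.py with the matching config
```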

🤗 Acknowledgement

This repository is built on the timm library and the DeiT and Swin Transformer repositories.

🗜️ License

This project is released under the MIT license. Please see the LICENSE file for more information.

📖 Citation

If you find this repository helpful, please consider citing:

@Article{xxx
}