BiFormer: Vision Transformer with Bi-Level Routing Attention

Official PyTorch implementation of BiFormer, from the following paper:

BiFormer: Vision Transformer with Bi-Level Routing Attention. CVPR 2023.
Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, and Rynson Lau


<p align="left"> <img src="assets/teaser.png" width=60% height=60% class="center"> </p>

News

Results and Pre-trained Models

ImageNet-1K trained models

| name | resolution | acc@1 | #params | FLOPs | model | log | tensorboard log<sup>*</sup> |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| BiFormer-T | 224x224 | 81.4 | 13.1 M | 2.2 G | model | log | - |
| BiFormer-S | 224x224 | 83.8 | 25.5 M | 4.5 G | model | log | tensorboard.dev |
| BiFormer-B | 224x224 | 84.3 | 56.8 M | 9.8 G | model | log | - |
| BiFormer-STL | 224x224 | 82.7 | 28.4 M | 4.6 G | model | log | - |
| BiFormer-STL-nchw | 224x224 | 82.7 | 28.4 M | 4.6 G | model | log | tensorboard.dev |

<font size=1>* : reproduced after the acceptance of our paper.</font>

Here the BiFormer-STL (Swin-Tiny-Layout) model is the one used in our ablation study. We hope it provides a good starting point for developing your own awesome attention mechanisms.
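The bi-level routing idea behind BiFormer can be summarized in two steps: first route each query region to its top-k most relevant key/value regions via a coarse region-to-region affinity, then apply ordinary token-to-token attention only over the tokens gathered from those routed regions. Below is a minimal single-head NumPy sketch of that idea (identity q/k/v projections, no multi-head, no LCE branch) — an illustration of the mechanism, not the official implementation:

```python
import numpy as np

def bi_level_routing_attention(x, region_size, topk):
    """Minimal single-head sketch of bi-level routing attention.

    x: (H, W, C) feature map; H and W must be divisible by region_size.
    Step 1 (coarse): compute region-to-region affinity on mean-pooled
    queries/keys and keep the top-k regions per query region.
    Step 2 (fine): each query token attends only to tokens gathered
    from its routed regions.
    """
    H, W, C = x.shape
    s = region_size
    nh, nw = H // s, W // s
    n_reg = nh * nw

    # Split into non-overlapping regions: (n_reg, s*s, C).
    regions = (x.reshape(nh, s, nw, s, C)
                .transpose(0, 2, 1, 3, 4)
                .reshape(n_reg, s * s, C))

    # For simplicity, use identity projections: q = k = v = tokens.
    q, k, v = regions, regions, regions

    # Region-level (coarse) routing via pooled region descriptors.
    q_r = q.mean(axis=1)                              # (n_reg, C)
    k_r = k.mean(axis=1)                              # (n_reg, C)
    affinity = q_r @ k_r.T                            # (n_reg, n_reg)
    routed = np.argsort(-affinity, axis=1)[:, :topk]  # top-k regions per region

    out = np.empty_like(regions)
    for i in range(n_reg):
        # Gather key/value tokens from the routed regions only.
        kg = k[routed[i]].reshape(-1, C)              # (topk*s*s, C)
        vg = v[routed[i]].reshape(-1, C)
        attn = q[i] @ kg.T / np.sqrt(C)               # (s*s, topk*s*s)
        attn = np.exp(attn - attn.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
        out[i] = attn @ vg

    # Merge regions back into an (H, W, C) feature map.
    return (out.reshape(nh, nw, s, s, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(H, W, C))
```

Setting `topk = n_reg` recovers full global attention, while small `topk` keeps the fine-grained attention sparse, which is the source of the FLOPs savings.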

All files can be accessed from OneDrive.

Installation

Please check INSTALL.md for installation instructions.

Evaluation

We evaluated on a Slurm cluster using the command below:

python hydra_main.py \
    data_path=./data/in1k input_size=224  batch_size=128 dist_eval=true \
    +slurm=${CLUSTER_ID} slurm.nodes=1 slurm.ngpus=8 \
    eval=true load_release=true model='biformer_small'

To test on a local machine, you may try

python -m torch.distributed.launch --nproc_per_node=8 main.py \
  --data_path ./data/in1k --input_size 224 --batch_size 128 --dist_eval \
  --eval --load_release --model biformer_small

This should give

* Acc@1 83.754 Acc@5 96.638 loss 0.869
Accuracy of the network on the 50000 test images: 83.8%

Note: by setting load_release=true, the released checkpoint is downloaded automatically, so you do not need to download it manually in advance.

Training

To launch training on a slurm cluster, use the command below:

python hydra_main.py \
    data_path=./data/in1k input_size=224  batch_size=128 dist_eval=true \
    +slurm=${CLUSTER_ID} slurm.nodes=1 slurm.ngpus=8 \
    model='biformer_small'  drop_path=0.15 lr=5e-4

Note: our codebase automatically generates an output directory for experiment logs and checkpoints according to the arguments passed. For example, the command above produces an output directory like

$ tree -L 3 outputs/ 
outputs/
└── cls
    └── batch_size.128-drop_path.0.15-input_size.224-lr.5e-4-model.biformer_small-slurm.ngpus.8-slurm.nodes.2
        └── 20230307-21:33:26
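The directory name is essentially the sorted list of overrides joined with hyphens, plus a timestamp. The scheme can be sketched as follows (a hypothetical helper for illustration, not the repo's actual code):

```python
import time

def make_output_dir(task, overrides):
    """Illustrative sketch of deriving an output path from Hydra-style
    overrides: sort key/value pairs, join with '-', append a timestamp.
    (Hypothetical helper; the repo's real logic may differ.)"""
    tag = "-".join(f"{k}.{v}" for k, v in sorted(overrides.items()))
    stamp = time.strftime("%Y%m%d-%H:%M:%S")
    return f"outputs/{task}/{tag}/{stamp}"
```

Because the tag is a deterministic function of the arguments, re-running the same configuration groups its logs under the same experiment directory, with one timestamped subdirectory per run.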

Acknowledgement

This repository is built using the timm library and the ConvNeXt and UniFormer repositories.

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Citation

If you find this repository helpful, please consider citing:

@inproceedings{zhu2023biformer,
  author    = {Lei Zhu and Xinjiang Wang and Zhanghan Ke and Wayne Zhang and Rynson Lau},
  title     = {BiFormer: Vision Transformer with Bi-Level Routing Attention},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023},
}

TODOs