Vision Transformers with Hierarchical Attention

This work was originally titled "Transformer in Convolutional Neural Networks".

Installation

This repository exactly follows the code and training settings of PVT.
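
Since the repository mirrors PVT, evaluation presumably uses standard ImageNet-1K preprocessing at the 224 x 224 input size listed in the table below. Here is a minimal sketch of that preprocessing; the resize/crop sizes and normalization constants follow common PVT/DeiT practice and are assumptions, not stated in this README:

```python
from PIL import Image
from torchvision import transforms

# Standard ImageNet-1K evaluation pipeline at 224 x 224: resize the
# short side to 256, center-crop to 224, and normalize with the usual
# ImageNet mean/std. These values follow common PVT/DeiT practice;
# this README does not state them explicitly.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # any test image
batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, 224, 224)
```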

Image classification on the ImageNet-1K dataset

| Methods | Size | #Params | #FLOPs | Acc@1 (%) | Pretrained Models |
| :--- | :--- | :--- | :--- | :--- | :--- |
| HAT-Net-Tiny | 224 x 224 | 12.7M | 2.0G | 79.8 | Google / Github |
| HAT-Net-Small | 224 x 224 | 25.7M | 4.3G | 82.6 | Google / Github |
| HAT-Net-Medium | 224 x 224 | 42.9M | 8.3G | 84.0 | Google / Github |
| HAT-Net-Large | 224 x 224 | 63.1M | 11.5G | 84.2 | Google / Github |
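
A downloaded checkpoint could plausibly be loaded as follows. This is a sketch, not the repository's documented API: the import path, the builder name `hat_net_small`, and the checkpoint filename are hypothetical, and the `"model"`-key unwrapping follows common DeiT/PVT checkpoint layouts:

```python
import torch

# Hypothetical import: assumes the repository exposes PVT-style model
# builders (e.g., a hat_net_small() constructor); check the code for
# the actual module and function names.
from hat_net import hat_net_small  # assumption, not confirmed by this README

model = hat_net_small()
checkpoint = torch.load("HAT-Net-Small.pth", map_location="cpu")  # hypothetical filename
# Some releases wrap the weights under a "model" key (DeiT/PVT style);
# fall back to the raw dict otherwise.
state_dict = checkpoint.get("model", checkpoint)
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    logits = model(batch)        # `batch` from the preprocessing sketch above
    top1 = logits.argmax(dim=1)  # predicted ImageNet-1K class index
print(top1.item())
```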

Citation

If you use the code or models provided here in a publication, please consider citing:

@article{liu2024vision,
  title={Vision Transformers with Hierarchical Attention},
  author={Liu, Yun and Wu, Yu-Huan and Sun, Guolei and Zhang, Le and Chhatkuli, Ajad and Van Gool, Luc},
  journal={Machine Intelligence Research},
  volume={21},
  pages={670--683},
  year={2024},
  publisher={Springer}
}

@article{liu2021transformer,
  title={Transformer in Convolutional Neural Networks},
  author={Liu, Yun and Sun, Guolei and Qiu, Yu and Zhang, Le and Chhatkuli, Ajad and Van Gool, Luc},
  journal={arXiv preprint arXiv:2106.03180},
  year={2021}
}