English | 简体中文

PaddlePaddle Vision Transformers


<p align="center"> <img src="./PaddleViT.png" width="100%"/> </p>

State-of-the-art Visual Transformer and MLP Models for PaddlePaddle

:robot: PaddlePaddle Visual Transformers (PaddleViT or PPViT) is a collection of vision models beyond convolution. Most of the models are based on Visual Transformers, Visual Attention, and MLPs. PaddleViT also integrates popular layers, utilities, optimizers, schedulers, data augmentations, and training/validation scripts for PaddlePaddle 2.1+. The aim is to reproduce a wide variety of state-of-the-art ViT and MLP models with full training/validation procedures. We are passionate about making cutting-edge CV techniques easier to use for everyone.

:robot: PaddleViT provides models and tools for multiple vision tasks, such as classification, object detection, semantic segmentation, GAN, and more. Each model architecture is defined in a standalone Python module and can be modified to enable quick research experiments. At the same time, pretrained weights can be downloaded and used to finetune on your own datasets. PaddleViT also integrates popular tools and modules for customized datasets, data preprocessing, performance metrics, DDP, and more.

:robot: PaddleViT is backed by the popular deep learning framework PaddlePaddle. We also provide tutorials and projects on Paddle AI Studio, so it is intuitive and straightforward for new users to get started.

Quick Links

PaddleViT implements model architectures and tools for multiple vision tasks; see the following links for detailed information.

We also provide tutorials:

Features

  1. State-of-the-art

    • State-of-the-art transformer models for multiple CV tasks
    • State-of-the-art data processing and training methods
    • We keep pushing it forward.
  2. Easy-to-use tools

    • Easy configs for model variants
    • Modular design for utility functions and tools
    • Low barrier for educators and practitioners
    • Unified framework for all the models
  3. Easily customizable to your needs

    • Examples for each model to reproduce the results
    • Model implementations are exposed for you to customize
    • Model files can be used independently for quick experiments
  4. High Performance

    • DDP (multiprocess training/validation where each process runs on a single GPU).

    • Mixed-precision support (AMP); see the sketch below

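The DDP and AMP features above follow standard PaddlePaddle 2.x usage. As a hedged illustration only (not PaddleViT's exact training loop), mixed-precision training typically wraps the forward pass in paddle.amp.auto_cast and scales the loss with paddle.amp.GradScaler; the model, optimizer, and dummy data below are placeholders:

    import paddle

    # Placeholder model/optimizer/criterion, not PaddleViT's own builders.
    model = paddle.vision.models.resnet50()
    optimizer = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())
    criterion = paddle.nn.CrossEntropyLoss()
    scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

    images = paddle.randn([8, 3, 224, 224])   # dummy batch
    labels = paddle.randint(0, 1000, [8])     # dummy labels

    with paddle.amp.auto_cast():              # run the forward pass in fp16 where safe
        loss = criterion(model(images), labels)

    scaled = scaler.scale(loss)               # scale the loss to avoid fp16 underflow
    scaled.backward()
    scaler.minimize(optimizer, scaled)        # unscale gradients and update parameters
    optimizer.clear_grad()
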
Model architectures

Image Classification (Transformers)

  1. ViT (from Google), released with paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
  2. DeiT (from Facebook and Sorbonne), released with paper Training data-efficient image transformers & distillation through attention, by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
  3. Swin Transformer (from Microsoft), released with paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
  4. VOLO (from Sea AI Lab and NUS), released with paper VOLO: Vision Outlooker for Visual Recognition, by Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan.
  5. CSwin Transformer (from USTC and Microsoft), released with paper CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, by Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo.
  6. CaiT (from Facebook and Sorbonne), released with paper Going deeper with Image Transformers, by Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou.
  7. PVTv2 (from NJU/HKU/NJUST/IIAI/SenseTime), released with paper PVTv2: Improved Baselines with Pyramid Vision Transformer, by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
  8. Shuffle Transformer (from Tencent), released with paper Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer, by Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu.
  9. T2T-ViT (from NUS and YITU), released with paper Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet, by Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan.
  10. CrossViT (from IBM), released with paper CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification, by Chun-Fu Chen, Quanfu Fan, Rameswar Panda.
  11. BEiT (from Microsoft Research), released with paper BEiT: BERT Pre-Training of Image Transformers, by Hangbo Bao, Li Dong, Furu Wei.
  12. Focal Transformer (from Microsoft), released with paper Focal Self-attention for Local-Global Interactions in Vision Transformers, by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.
  13. Mobile-ViT (from Apple), released with paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, by Sachin Mehta, Mohammad Rastegari.
  14. ViP (from National University of Singapore), released with paper Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition, by Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, Jiashi Feng.
  15. XCiT (from Facebook/Inria/Sorbonne), released with paper XCiT: Cross-Covariance Image Transformers, by Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jegou.
  16. PiT (from NAVER/Sogang University), released with paper Rethinking Spatial Dimensions of Vision Transformers, by Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh.
  17. HaloNet (from Google), released with paper Scaling Local Self-Attention for Parameter Efficient Visual Backbones, by Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, Jonathon Shlens.
  18. PoolFormer (from Sea AI Lab/NUS), released with paper MetaFormer is Actually What You Need for Vision, by Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan.
  19. BoTNet (from UC Berkeley/Google), released with paper Bottleneck Transformers for Visual Recognition, by Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani.
  20. CvT (from McGill/Microsoft), released with paper CvT: Introducing Convolutions to Vision Transformers, by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
  21. HvT (from Monash University), released with paper Scalable Vision Transformers with Hierarchical Pooling, by Zizheng Pan, Bohan Zhuang, Jing Liu, Haoyu He, Jianfei Cai.

Image Classification (MLP & others)

  1. MLP-Mixer (from Google), released with paper MLP-Mixer: An all-MLP Architecture for Vision, by Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
  2. ResMLP (from Facebook/Sorbonne/Inria/Valeo), released with paper ResMLP: Feedforward networks for image classification with data-efficient training, by Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou.
  3. gMLP (from Google), released with paper Pay Attention to MLPs, by Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le.
  4. FF Only (from Oxford), released with paper Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet, by Luke Melas-Kyriazi.
  5. RepMLP (from BNRist/Tsinghua/MEGVII/Aberystwyth), released with paper RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition, by Xiaohan Ding, Chunlong Xia, Xiangyu Zhang, Xiaojie Chu, Jungong Han, Guiguang Ding.
  6. CycleMLP (from HKU/SenseTime), released with paper CycleMLP: A MLP-like Architecture for Dense Prediction, by Shoufa Chen, Enze Xie, Chongjian Ge, Ding Liang, Ping Luo.
  7. ConvMixer (from Anonymous), released with paper Patches Are All You Need?, by Anonymous.
  8. ConvMLP (from UO/UIUC/PAIR), released with paper ConvMLP: Hierarchical Convolutional MLPs for Vision, by Jiachen Li, Ali Hassani, Steven Walton, Humphrey Shi.

Coming Soon:

  1. DynamicViT (from Tsinghua/UCLA/UW), released with paper DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification, by Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh.

Detection

  1. DETR (from Facebook), released with paper End-to-End Object Detection with Transformers, by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
  2. Swin Transformer (from Microsoft), released with paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
  3. PVTv2 (from NJU/HKU/NJUST/IIAI/SenseTime), released with paper PVTv2: Improved Baselines with Pyramid Vision Transformer, by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.

Coming Soon:

  1. Focal Transformer (from Microsoft), released with paper Focal Self-attention for Local-Global Interactions in Vision Transformers, by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.
  2. UP-DETR (from Tencent), released with paper UP-DETR: Unsupervised Pre-training for Object Detection with Transformers, by Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen.

Semantic Segmentation

Now:

  1. SETR (from Fudan/Oxford/Surrey/Tencent/Facebook), released with paper Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers, by Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, Li Zhang.
  2. DPT (from Intel), released with paper Vision Transformers for Dense Prediction, by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
  3. Swin Transformer (from Microsoft), released with paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
  4. Segmenter (from Inria), released with paper Segmenter: Transformer for Semantic Segmentation, by Robin Strudel, Ricardo Garcia, Ivan Laptev, Cordelia Schmid.
  5. Trans2seg (from HKU/Sensetime/NJU), released with paper Segmenting Transparent Object in the Wild with Transformer, by Enze Xie, Wenjia Wang, Wenhai Wang, Peize Sun, Hang Xu, Ding Liang, Ping Luo.
  6. SegFormer (from HKU/NJU/NVIDIA/Caltech), released with paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers, by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
  7. CSwin Transformer (from USTC and Microsoft), released with paper CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, by Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo.

Coming Soon:

  1. FTN (from Baidu), released with paper Fully Transformer Networks for Semantic Image Segmentation, by Sitong Wu, Tianyi Wu, Fangjian Lin, Shengwei Tian, Guodong Guo.
  2. Shuffle Transformer (from Tencent), released with paper Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer, by Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu.
  3. Focal Transformer (from Microsoft), released with paper Focal Self-attention for Local-Global Interactions in Vision Transformers, by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.

GAN

  1. TransGAN (from UT Austin and MIT-IBM Watson AI Lab), released with paper TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up, by Yifan Jiang, Shiyu Chang, Zhangyang Wang.
  2. Styleformer (from Seoul National University), released with paper Styleformer: Transformer based Generative Adversarial Networks with Style Vector, by Jeeseung Park, Younggeun Kim.

Coming Soon:

  1. ViTGAN (from UCSD/Google), released with paper ViTGAN: Training GANs with Vision Transformers, by Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu.

Installation

Prerequisites

Note: It is recommended to install the latest version of PaddlePaddle to avoid some CUDA errors for PaddleViT training. For PaddlePaddle, please refer to this link for stable version installation and this link for develop version installation.

Installation

  1. Create a conda virtual environment and activate it.

    conda create -n paddlevit python=3.7 -y
    conda activate paddlevit
    
  2. Install PaddlePaddle following the official instructions, e.g.,

    conda install paddlepaddle-gpu==2.1.2 cudatoolkit=10.2 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/
    

    Note: please change the paddlepaddle and CUDA versions according to your environment. A quick way to verify the installation is sketched after this list.

  3. Install dependency packages

    • General dependencies:
      pip install yacs pyyaml
      
    • Packages for Segmentation:
      pip install cityscapesScripts
      
      Install detail package:
      git clone https://github.com/ccvl/detail-api
      cd detail-api/PythonAPI
      make
      make install
      
    • Packages for GAN:
      pip install lmdb
      
  4. Clone project from GitHub

    git clone https://github.com/BR-IDL/PaddleViT.git 
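
After installation, you can optionally verify that PaddlePaddle is installed correctly (the quick check mentioned in the step-2 note above; paddle.utils.run_check is a standard Paddle utility):

    python -c "import paddle; print(paddle.__version__); paddle.utils.run_check()"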
    

Results (Model Zoo)

Image Classification

| Model | Acc@1 | Acc@5 | #Params | FLOPs | Image Size | Crop pct | Interp | Link |
|---|---|---|---|---|---|---|---|---|
| vit_base_patch32_224 | 80.68 | 95.61 | 88.2M | 4.4G | 224 | 0.875 | bicubic | google/baidu(ubyr) |
| vit_base_patch32_384 | 83.35 | 96.84 | 88.2M | 12.7G | 384 | 1.0 | bicubic | google/baidu(3c2f) |
| vit_base_patch16_224 | 84.58 | 97.30 | 86.4M | 17.0G | 224 | 0.875 | bicubic | google/baidu(qv4n) |
| vit_base_patch16_384 | 85.99 | 98.00 | 86.4M | 49.8G | 384 | 1.0 | bicubic | google/baidu(wsum) |
| vit_large_patch16_224 | 85.81 | 97.82 | 304.1M | 59.9G | 224 | 0.875 | bicubic | google/baidu(1bgk) |
| vit_large_patch16_384 | 87.08 | 98.30 | 304.1M | 175.9G | 384 | 1.0 | bicubic | google/baidu(5t91) |
| vit_large_patch32_384 | 81.51 | 96.09 | 306.5M | 44.4G | 384 | 1.0 | bicubic | google/baidu(ieg3) |
| swin_t_224 | 81.37 | 95.54 | 28.3M | 4.4G | 224 | 0.9 | bicubic | google/baidu(h2ac) |
| swin_s_224 | 83.21 | 96.32 | 49.6M | 8.6G | 224 | 0.9 | bicubic | google/baidu(ydyx) |
| swin_b_224 | 83.60 | 96.46 | 87.7M | 15.3G | 224 | 0.9 | bicubic | google/baidu(h4y6) |
| swin_b_384 | 84.48 | 96.89 | 87.7M | 45.5G | 384 | 1.0 | bicubic | google/baidu(7nym) |
| swin_b_224_22kto1k | 85.27 | 97.56 | 87.7M | 15.3G | 224 | 0.9 | bicubic | google/baidu(6ur8) |
| swin_b_384_22kto1k | 86.43 | 98.07 | 87.7M | 45.5G | 384 | 1.0 | bicubic | google/baidu(9squ) |
| swin_l_224_22kto1k | 86.32 | 97.90 | 196.4M | 34.3G | 224 | 0.9 | bicubic | google/baidu(nd2f) |
| swin_l_384_22kto1k | 87.14 | 98.23 | 196.4M | 100.9G | 384 | 1.0 | bicubic | google/baidu(5g5e) |
| deit_tiny_distilled_224 | 74.52 | 91.90 | 5.9M | 1.1G | 224 | 0.875 | bicubic | google/baidu(rhda) |
| deit_small_distilled_224 | 81.17 | 95.41 | 22.4M | 4.3G | 224 | 0.875 | bicubic | google/baidu(pv28) |
| deit_base_distilled_224 | 83.32 | 96.49 | 87.2M | 17.0G | 224 | 0.875 | bicubic | google/baidu(5f2g) |
| deit_base_distilled_384 | 85.43 | 97.33 | 87.2M | 49.9G | 384 | 1.0 | bicubic | google/baidu(qgj2) |
| volo_d1_224 | 84.12 | 96.78 | 26.6M | 6.6G | 224 | 1.0 | bicubic | google/baidu(xaim) |
| volo_d1_384 | 85.24 | 97.21 | 26.6M | 19.5G | 384 | 1.0 | bicubic | google/baidu(rr7p) |
| volo_d2_224 | 85.11 | 97.19 | 58.6M | 13.7G | 224 | 1.0 | bicubic | google/baidu(d82f) |
| volo_d2_384 | 86.04 | 97.57 | 58.6M | 40.7G | 384 | 1.0 | bicubic | google/baidu(9cf3) |
| volo_d3_224 | 85.41 | 97.26 | 86.2M | 19.8G | 224 | 1.0 | bicubic | google/baidu(a5a4) |
| volo_d3_448 | 86.50 | 97.71 | 86.2M | 80.3G | 448 | 1.0 | bicubic | google/baidu(uudu) |
| volo_d4_224 | 85.89 | 97.54 | 192.8M | 42.9G | 224 | 1.0 | bicubic | google/baidu(vcf2) |
| volo_d4_448 | 86.70 | 97.85 | 192.8M | 172.5G | 448 | 1.0 | bicubic | google/baidu(nd4n) |
| volo_d5_224 | 86.08 | 97.58 | 295.3M | 70.6G | 224 | 1.0 | bicubic | google/baidu(ymdg) |
| volo_d5_448 | 86.92 | 97.88 | 295.3M | 283.8G | 448 | 1.0 | bicubic | google/baidu(qfcc) |
| volo_d5_512 | 87.05 | 97.97 | 295.3M | 371.3G | 512 | 1.15 | bicubic | google/baidu(353h) |
| cswin_tiny_224 | 82.81 | 96.30 | 22.3M | 4.2G | 224 | 0.9 | bicubic | google/baidu(4q3h) |
| cswin_small_224 | 83.60 | 96.58 | 34.6M | 6.5G | 224 | 0.9 | bicubic | google/baidu(gt1a) |
| cswin_base_224 | 84.23 | 96.91 | 77.4M | 14.6G | 224 | 0.9 | bicubic | google/baidu(wj8p) |
| cswin_base_384 | 85.51 | 97.48 | 77.4M | 43.1G | 384 | 1.0 | bicubic | google/baidu(rkf5) |
| cswin_large_224 | 86.52 | 97.99 | 173.3M | 32.5G | 224 | 0.9 | bicubic | google/baidu(b5fs) |
| cswin_large_384 | 87.49 | 98.35 | 173.3M | 96.1G | 384 | 1.0 | bicubic | google/baidu(6235) |
| cait_xxs24_224 | 78.38 | 94.32 | 11.9M | 2.2G | 224 | 1.0 | bicubic | google/baidu(j9m8) |
| cait_xxs36_224 | 79.75 | 94.88 | 17.2M | 33.1G | 224 | 1.0 | bicubic | google/baidu(nebg) |
| cait_xxs24_384 | 80.97 | 95.64 | 11.9M | 6.8G | 384 | 1.0 | bicubic | google/baidu(2j95) |
| cait_xxs36_384 | 82.20 | 96.15 | 17.2M | 10.1G | 384 | 1.0 | bicubic | google/baidu(wx5d) |
| cait_s24_224 | 83.45 | 96.57 | 46.8M | 8.7G | 224 | 1.0 | bicubic | google/baidu(m4pn) |
| cait_xs24_384 | 84.06 | 96.89 | 26.5M | 15.1G | 384 | 1.0 | bicubic | google/baidu(scsv) |
| cait_s24_384 | 85.05 | 97.34 | 46.8M | 26.5G | 384 | 1.0 | bicubic | google/baidu(dnp7) |
| cait_s36_384 | 85.45 | 97.48 | 68.1M | 39.5G | 384 | 1.0 | bicubic | google/baidu(e3ui) |
| cait_m36_384 | 86.06 | 97.73 | 270.7M | 156.2G | 384 | 1.0 | bicubic | google/baidu(r4hu) |
| cait_m48_448 | 86.49 | 97.75 | 355.8M | 287.3G | 448 | 1.0 | bicubic | google/baidu(imk5) |
| pvtv2_b0 | 70.47 | 90.16 | 3.7M | 0.6G | 224 | 0.875 | bicubic | google/baidu(dxgb) |
| pvtv2_b1 | 78.70 | 94.49 | 14.0M | 2.1G | 224 | 0.875 | bicubic | google/baidu(2e5m) |
| pvtv2_b2 | 82.02 | 95.99 | 25.4M | 4.0G | 224 | 0.875 | bicubic | google/baidu(are2) |
| pvtv2_b2_linear | 82.06 | 96.04 | 22.6M | 3.9G | 224 | 0.875 | bicubic | google/baidu(a4c8) |
| pvtv2_b3 | 83.14 | 96.47 | 45.2M | 6.8G | 224 | 0.875 | bicubic | google/baidu(nc21) |
| pvtv2_b4 | 83.61 | 96.69 | 62.6M | 10.0G | 224 | 0.875 | bicubic | google/baidu(tthf) |
| pvtv2_b5 | 83.77 | 96.61 | 82.0M | 11.5G | 224 | 0.875 | bicubic | google/baidu(9v6n) |
| shuffle_vit_tiny | 82.39 | 96.05 | 28.5M | 4.6G | 224 | 0.875 | bicubic | google/baidu(8a1i) |
| shuffle_vit_small | 83.53 | 96.57 | 50.1M | 8.8G | 224 | 0.875 | bicubic | google/baidu(xwh3) |
| shuffle_vit_base | 83.95 | 96.91 | 88.4M | 15.5G | 224 | 0.875 | bicubic | google/baidu(1gsr) |
| t2t_vit_7 | 71.68 | 90.89 | 4.3M | 1.0G | 224 | 0.9 | bicubic | google/baidu(1hpa) |
| t2t_vit_10 | 75.15 | 92.80 | 5.8M | 1.3G | 224 | 0.9 | bicubic | google/baidu(ixug) |
| t2t_vit_12 | 76.48 | 93.49 | 6.9M | 1.5G | 224 | 0.9 | bicubic | google/baidu(qpbb) |
| t2t_vit_14 | 81.50 | 95.67 | 21.5M | 4.4G | 224 | 0.9 | bicubic | google/baidu(c2u8) |
| t2t_vit_19 | 81.93 | 95.74 | 39.1M | 7.8G | 224 | 0.9 | bicubic | google/baidu(4in3) |
| t2t_vit_24 | 82.28 | 95.89 | 64.0M | 12.8G | 224 | 0.9 | bicubic | google/baidu(4in3) |
| t2t_vit_t_14 | 81.69 | 95.85 | 21.5M | 4.4G | 224 | 0.9 | bicubic | google/baidu(4in3) |
| t2t_vit_t_19 | 82.44 | 96.08 | 39.1M | 7.9G | 224 | 0.9 | bicubic | google/baidu(mier) |
| t2t_vit_t_24 | 82.55 | 96.07 | 64.0M | 12.9G | 224 | 0.9 | bicubic | google/baidu(6vxc) |
| t2t_vit_14_384 | 83.34 | 96.50 | 21.5M | 13.0G | 384 | 1.0 | bicubic | google/baidu(r685) |
| cross_vit_tiny_224 | 73.20 | 91.90 | 6.9M | 1.3G | 224 | 0.875 | bicubic | google/baidu(scvb) |
| cross_vit_small_224 | 81.01 | 95.33 | 26.7M | 5.2G | 224 | 0.875 | bicubic | google/baidu(32us) |
| cross_vit_base_224 | 82.12 | 95.87 | 104.7M | 20.2G | 224 | 0.875 | bicubic | google/baidu(jj2q) |
| cross_vit_9_224 | 73.78 | 91.93 | 8.5M | 1.6G | 224 | 0.875 | bicubic | google/baidu(mjcb) |
| cross_vit_15_224 | 81.51 | 95.72 | 27.4M | 5.2G | 224 | 0.875 | bicubic | google/baidu(n55b) |
| cross_vit_18_224 | 82.29 | 96.00 | 43.1M | 8.3G | 224 | 0.875 | bicubic | google/baidu(xese) |
| cross_vit_9_dagger_224 | 76.92 | 93.61 | 8.7M | 1.7G | 224 | 0.875 | bicubic | google/baidu(58ah) |
| cross_vit_15_dagger_224 | 82.23 | 95.93 | 28.1M | 5.6G | 224 | 0.875 | bicubic | google/baidu(qwup) |
| cross_vit_18_dagger_224 | 82.51 | 96.03 | 44.1M | 8.7G | 224 | 0.875 | bicubic | google/baidu(qtw4) |
| cross_vit_15_dagger_384 | 83.75 | 96.75 | 28.1M | 16.4G | 384 | 1.0 | bicubic | google/baidu(w71e) |
| cross_vit_18_dagger_384 | 84.17 | 96.82 | 44.1M | 25.8G | 384 | 1.0 | bicubic | google/baidu(99b6) |
| beit_base_patch16_224_pt22k | 85.21 | 97.66 | 87M | 12.7G | 224 | 0.9 | bicubic | google/baidu(fshn) |
| beit_base_patch16_384_pt22k | 86.81 | 98.14 | 87M | 37.3G | 384 | 1.0 | bicubic | google/baidu(arvc) |
| beit_large_patch16_224_pt22k | 87.48 | 98.30 | 304M | 45.0G | 224 | 0.9 | bicubic | google/baidu(2ya2) |
| beit_large_patch16_384_pt22k | 88.40 | 98.60 | 304M | 131.7G | 384 | 1.0 | bicubic | google/baidu(qtrn) |
| beit_large_patch16_512_pt22k | 88.60 | 98.66 | 304M | 234.0G | 512 | 1.0 | bicubic | google/baidu(567v) |
| Focal-T | 82.03 | 95.86 | 28.9M | 4.9G | 224 | 0.875 | bicubic | google/baidu(i8c2) |
| Focal-T (use conv) | 82.70 | 96.14 | 30.8M | 4.9G | 224 | 0.875 | bicubic | google/baidu(smrk) |
| Focal-S | 83.55 | 96.29 | 51.1M | 9.4G | 224 | 0.875 | bicubic | google/baidu(dwd8) |
| Focal-S (use conv) | 83.85 | 96.47 | 53.1M | 9.4G | 224 | 0.875 | bicubic | google/baidu(nr7n) |
| Focal-B | 83.98 | 96.48 | 89.8M | 16.4G | 224 | 0.875 | bicubic | google/baidu(8akn) |
| Focal-B (use conv) | 84.18 | 96.61 | 93.3M | 16.4G | 224 | 0.875 | bicubic | google/baidu(5nfi) |
| mobilevit_xxs | 70.31 | 89.68 | 1.32M | 0.44G | 256 | 1.0 | bicubic | google/baidu(axpc) |
| mobilevit_xs | 74.47 | 92.02 | 2.33M | 0.95G | 256 | 1.0 | bicubic | google/baidu(hfhm) |
| mobilevit_s | 76.74 | 93.08 | 5.59M | 1.88G | 256 | 1.0 | bicubic | google/baidu(34bg) |
| mobilevit_s $\dag$ | 77.83 | 93.83 | 5.59M | 1.88G | 256 | 1.0 | bicubic | google/baidu(92ic) |
| vip_s7 | 81.50 | 95.76 | 25.1M | 7.0G | 224 | 0.875 | bicubic | google/baidu(mh9b) |
| vip_m7 | 82.75 | 96.05 | 55.3M | 16.4G | 224 | 0.875 | bicubic | google/baidu(hvm8) |
| vip_l7 | 83.18 | 96.37 | 87.8M | 24.5G | 224 | 0.875 | bicubic | google/baidu(tjvh) |
| xcit_nano_12_p16_224_dist | 72.32 | 90.86 | 3.1M | 0.6G | 224 | 1.0 | bicubic | google/baidu(7qvz) |
| xcit_nano_12_p16_384_dist | 75.46 | 92.70 | 3.1M | 1.6G | 384 | 1.0 | bicubic | google/baidu(1y2j) |
| xcit_large_24_p16_224_dist | 84.92 | 97.13 | 189.1M | 35.9G | 224 | 1.0 | bicubic | google/baidu(kfv8) |
| xcit_large_24_p16_384_dist | 85.76 | 97.54 | 189.1M | 105.5G | 384 | 1.0 | bicubic | google/baidu(ffq3) |
| xcit_nano_12_p8_224_dist | 76.33 | 93.10 | 3.0M | 2.2G | 224 | 1.0 | bicubic | google/baidu(jjs7) |
| xcit_nano_12_p8_384_dist | 77.82 | 94.04 | 3.0M | 6.3G | 384 | 1.0 | bicubic | google/baidu(dmc1) |
| xcit_large_24_p8_224_dist | 85.40 | 97.40 | 188.9M | 141.4G | 224 | 1.0 | bicubic | google/baidu(y7gw) |
| xcit_large_24_p8_384_dist | 85.99 | 97.69 | 188.9M | 415.5G | 384 | 1.0 | bicubic | google/baidu(9xww) |
| pit_ti | 72.91 | 91.40 | 4.8M | 0.5G | 224 | 0.9 | bicubic | google/baidu(ydmi) |
| pit_ti_distill | 74.54 | 92.10 | 5.1M | 0.5G | 224 | 0.9 | bicubic | google/baidu(7k4s) |
| pit_xs | 78.18 | 94.16 | 10.5M | 1.1G | 224 | 0.9 | bicubic | google/baidu(gytu) |
| pit_xs_distill | 79.31 | 94.36 | 10.9M | 1.1G | 224 | 0.9 | bicubic | google/baidu(ie7s) |
| pit_s | 81.08 | 95.33 | 23.4M | 2.4G | 224 | 0.9 | bicubic | google/baidu(kt1n) |
| pit_s_distill | 81.99 | 95.79 | 24.0M | 2.5G | 224 | 0.9 | bicubic | google/baidu(hhyc) |
| pit_b | 82.44 | 95.71 | 73.5M | 10.6G | 224 | 0.9 | bicubic | google/baidu(uh2v) |
| pit_b_distill | 84.14 | 96.86 | 74.5M | 10.7G | 224 | 0.9 | bicubic | google/baidu(3e6g) |
| halonet26t | 79.10 | 94.31 | 12.5M | 3.2G | 256 | 0.95 | bicubic | google/baidu(ednv) |
| halonet50ts | 81.65 | 95.61 | 22.8M | 5.1G | 256 | 0.94 | bicubic | google/baidu(3j9e) |
| poolformer_s12 | 77.24 | 93.51 | 11.9M | 1.8G | 224 | 0.9 | bicubic | google/baidu(zcv4) |
| poolformer_s24 | 80.33 | 95.05 | 21.3M | 3.4G | 224 | 0.9 | bicubic | google/baidu(nedr) |
| poolformer_s36 | 81.43 | 95.45 | 30.8M | 5.0G | 224 | 0.9 | bicubic | google/baidu(fvpm) |
| poolformer_m36 | 82.11 | 95.69 | 56.1M | 8.9G | 224 | 0.95 | bicubic | google/baidu(whfp) |
| poolformer_m48 | 82.46 | 95.96 | 73.4M | 11.8G | 224 | 0.95 | bicubic | google/baidu(374f) |
| botnet50 | 77.38 | 93.56 | 20.9M | 5.3G | 224 | 0.875 | bicubic | google/baidu(wh13) |
| CvT-13-224 | 81.59 | 95.67 | 20M | 4.5G | 224 | 0.875 | bicubic | google/baidu(vev9) |
| CvT-21-224 | 82.46 | 96.00 | 32M | 7.1G | 224 | 0.875 | bicubic | google/baidu(t2rv) |
| CvT-13-384 | 83.00 | 96.36 | 20M | 16.3G | 384 | 1.0 | bicubic | google/baidu(wswt) |
| CvT-21-384 | 83.27 | 96.16 | 32M | 24.9G | 384 | 1.0 | bicubic | google/baidu(hcem) |
| CvT-13-384-22k | 83.26 | 97.09 | 20M | 16.3G | 384 | 1.0 | bicubic | google/baidu(c7m9) |
| CvT-21-384-22k | 84.91 | 97.62 | 32M | 24.9G | 384 | 1.0 | bicubic | google/baidu(9jxe) |
| CvT-w24-384-22k | 87.58 | 98.47 | 277M | 193.2G | 384 | 1.0 | bicubic | google/baidu(bbj2) |
| HVT-Ti-1 | 69.45 | 89.28 | 5.7M | 0.6G | 224 | 0.875 | bicubic | google/baidu(egds) |
| HVT-S-0 | 80.30 | 95.15 | 22.0M | 4.6G | 224 | 0.875 | bicubic | google/baidu(hj7a) |
| HVT-S-1 | 78.06 | 93.84 | 22.1M | 2.4G | 224 | 0.875 | bicubic | google/baidu(tva8) |
| HVT-S-2 | 77.41 | 93.48 | 22.1M | 1.9G | 224 | 0.875 | bicubic | google/baidu(bajp) |
| HVT-S-3 | 76.30 | 92.88 | 22.1M | 1.6G | 224 | 0.875 | bicubic | google/baidu(rjch) |
| HVT-S-4 | 75.21 | 92.34 | 22.1M | 1.6G | 224 | 0.875 | bicubic | google/baidu(ki4j) |
| mlp_mixer_b16_224 | 76.60 | 92.23 | 60.0M | 12.7G | 224 | 0.875 | bicubic | google/baidu(xh8x) |
| mlp_mixer_l16_224 | 72.06 | 87.67 | 208.2M | 44.9G | 224 | 0.875 | bicubic | google/baidu(8q7r) |
| resmlp_24_224 | 79.38 | 94.55 | 30.0M | 6.0G | 224 | 0.875 | bicubic | google/baidu(jdcx) |
| resmlp_36_224 | 79.77 | 94.89 | 44.7M | 9.0G | 224 | 0.875 | bicubic | google/baidu(33w3) |
| resmlp_big_24_224 | 81.04 | 95.02 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(r9kb) |
| resmlp_12_distilled_224 | 77.95 | 93.56 | 15.3M | 3.0G | 224 | 0.875 | bicubic | google/baidu(ghyp) |
| resmlp_24_distilled_224 | 80.76 | 95.22 | 30.0M | 6.0G | 224 | 0.875 | bicubic | google/baidu(sxnx) |
| resmlp_36_distilled_224 | 81.15 | 95.48 | 44.7M | 9.0G | 224 | 0.875 | bicubic | google/baidu(vt85) |
| resmlp_big_24_distilled_224 | 83.59 | 96.65 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(4jk5) |
| resmlp_big_24_22k_224 | 84.40 | 97.11 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(ve7i) |
| gmlp_s16_224 | 79.64 | 94.63 | 19.4M | 4.5G | 224 | 0.875 | bicubic | google/baidu(bcth) |
| ff_only_tiny (linear_tiny) | 61.28 | 84.06 | | | 224 | 0.875 | bicubic | google/baidu(mjgd) |
| ff_only_base (linear_base) | 74.82 | 91.71 | | | 224 | 0.875 | bicubic | google/baidu(m1jc) |
| repmlp_res50_light_224 | 77.01 | 93.46 | 87.1M | 3.3G | 224 | 0.875 | bicubic | google/baidu(b4fg) |
| cyclemlp_b1 | 78.85 | 94.60 | 15.1M | | 224 | 0.9 | bicubic | google/baidu(mnbr) |
| cyclemlp_b2 | 81.58 | 95.81 | 26.8M | | 224 | 0.9 | bicubic | google/baidu(jwj9) |
| cyclemlp_b3 | 82.42 | 96.07 | 38.3M | | 224 | 0.9 | bicubic | google/baidu(v2fy) |
| cyclemlp_b4 | 82.96 | 96.33 | 51.8M | | 224 | 0.875 | bicubic | google/baidu(fnqd) |
| cyclemlp_b5 | 83.25 | 96.44 | 75.7M | | 224 | 0.875 | bicubic | google/baidu(s55c) |
| convmixer_1024_20 | 76.94 | 93.35 | 24.5M | 9.5G | 224 | 0.96 | bicubic | google/baidu(qpn9) |
| convmixer_768_32 | 80.16 | 95.08 | 21.2M | 20.8G | 224 | 0.96 | bicubic | google/baidu(m5s5) |
| convmixer_1536_20 | 81.37 | 95.62 | 51.8M | 72.4G | 224 | 0.96 | bicubic | google/baidu(xqty) |
| convmlp_s | 76.76 | 93.40 | 9.0M | 2.4G | 224 | 0.875 | bicubic | google/baidu(3jz3) |
| convmlp_m | 79.03 | 94.53 | 17.4M | 4.0G | 224 | 0.875 | bicubic | google/baidu(vyp1) |
| convmlp_l | 80.15 | 95.00 | 42.7M | 10.0G | 224 | 0.875 | bicubic | google/baidu(ne5x) |

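The Image Size, Crop pct, and Interp columns describe the evaluation preprocessing. As a rough sketch (the authoritative values live in each model's config and transform code), they are usually combined like this with paddle.vision.transforms; the ImageNet mean/std values here are an assumption:

    from paddle.vision import transforms

    image_size = 224      # "Image Size" column
    crop_pct = 0.875      # "Crop pct" column
    interp = 'bicubic'    # "Interp" column

    eval_transforms = transforms.Compose([
        transforms.Resize(int(image_size / crop_pct), interpolation=interp),  # 256 for 224 / 0.875
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
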
Object Detection

| Model | Backbone | box_mAP | Link |
|---|---|---|---|
| DETR | ResNet50 | 42.0 | google/baidu(n5gk) |
| DETR | ResNet101 | 43.5 | google/baidu(bxz2) |
| Mask R-CNN | Swin-T 1x | 43.7 | google/baidu(qev7) |
| Mask R-CNN | Swin-T 3x | 46.0 | google/baidu(m8fg) |
| Mask R-CNN | Swin-S 3x | 48.4 | google/baidu(hdw5) |
| Mask R-CNN | pvtv2_b0 | 38.3 | google/baidu(3kqb) |
| Mask R-CNN | pvtv2_b1 | 41.8 | google/baidu(k5aq) |
| Mask R-CNN | pvtv2_b2 | 45.2 | google/baidu(jh8b) |
| Mask R-CNN | pvtv2_b2_linear | 44.1 | google/baidu(8ipt) |
| Mask R-CNN | pvtv2_b3 | 46.9 | google/baidu(je4y) |
| Mask R-CNN | pvtv2_b4 | 47.5 | google/baidu(n3ay) |
| Mask R-CNN | pvtv2_b5 | 47.4 | google/baidu(jzq1) |

Semantic Segmentation

Pascal Context

| Model | Backbone | Batch_size | mIoU (ss) | mIoU (ms+flip) | Backbone_checkpoint | Model_checkpoint | ConfigFile |
|---|---|---|---|---|---|---|---|
| SETR_Naive | ViT_large | 16 | 52.06 | 52.57 | google/baidu(owoj) | google/baidu(xdb8) | config |
| SETR_PUP | ViT_large | 16 | 53.90 | 54.53 | google/baidu(owoj) | google/baidu(6sji) | config |
| SETR_MLA | ViT_Large | 8 | 54.39 | 55.16 | google/baidu(owoj) | google/baidu(wora) | config |
| SETR_MLA | ViT_large | 16 | 55.01 | 55.87 | google/baidu(owoj) | google/baidu(76h2) | config |

Cityscapes

| Model | Backbone | Batch_size | Iteration | mIoU (ss) | mIoU (ms+flip) | Backbone_checkpoint | Model_checkpoint | ConfigFile |
|---|---|---|---|---|---|---|---|---|
| SETR_Naive | ViT_Large | 8 | 40k | 76.71 | 79.03 | google/baidu(owoj) | google/baidu(g7ro) | config |
| SETR_Naive | ViT_Large | 8 | 80k | 77.31 | 79.43 | google/baidu(owoj) | google/baidu(wn6q) | config |
| SETR_PUP | ViT_Large | 8 | 40k | 77.92 | 79.63 | google/baidu(owoj) | google/baidu(zmoi) | config |
| SETR_PUP | ViT_Large | 8 | 80k | 78.81 | 80.43 | google/baidu(owoj) | baidu(f793) | config |
| SETR_MLA | ViT_Large | 8 | 40k | 76.70 | 78.96 | google/baidu(owoj) | baidu(qaiw) | config |
| SETR_MLA | ViT_Large | 8 | 80k | 77.26 | 79.27 | google/baidu(owoj) | baidu(6bgj) | config |

ADE20K

| Model | Backbone | Batch_size | Iteration | mIoU (ss) | mIoU (ms+flip) | Backbone_checkpoint | Model_checkpoint | ConfigFile |
|---|---|---|---|---|---|---|---|---|
| SETR_Naive | ViT_Large | 16 | 160k | 47.57 | 48.12 | google/baidu(owoj) | baidu(lugq) | config |
| SETR_PUP | ViT_Large | 16 | 160k | 49.12 | 49.51 | google/baidu(owoj) | baidu(udgs) | config |
| SETR_MLA | ViT_Large | 8 | 160k | 47.80 | 49.34 | google/baidu(owoj) | baidu(mrrv) | config |
| DPT | ViT_Large | 16 | 160k | 47.21 | - | google/baidu(owoj) | baidu(ts7h) | config |
| Segmenter | ViT_Tiny | 16 | 160k | 38.45 | - | TODO | baidu(1k97) | config |
| Segmenter | ViT_Small | 16 | 160k | 46.07 | - | TODO | baidu(i8nv) | config |
| Segmenter | ViT_Base | 16 | 160k | 49.08 | - | TODO | baidu(hxrl) | config |
| Segmenter | ViT_Large | 16 | 160k | 51.82 | - | TODO | baidu(wdz6) | config |
| Segmenter_Linear | DeiT_Base | 16 | 160k | 47.34 | - | TODO | baidu(5dpv) | config |
| Segmenter | DeiT_Base | 16 | 160k | 49.27 | - | TODO | baidu(3kim) | config |
| Segformer | MIT-B0 | 16 | 160k | 38.37 | - | TODO | baidu(ges9) | config |
| Segformer | MIT-B1 | 16 | 160k | 42.20 | - | TODO | baidu(t4n4) | config |
| Segformer | MIT-B2 | 16 | 160k | 46.38 | - | TODO | baidu(h5ar) | config |
| Segformer | MIT-B3 | 16 | 160k | 48.35 | - | TODO | baidu(g9n4) | config |
| Segformer | MIT-B4 | 16 | 160k | 49.01 | - | TODO | baidu(e4xw) | config |
| Segformer | MIT-B5 | 16 | 160k | 49.73 | - | TODO | baidu(uczo) | config |
| UperNet | Swin_Tiny | 16 | 160k | 44.90 | 45.37 | - | baidu(lkhg) | config |
| UperNet | Swin_Small | 16 | 160k | 47.88 | 48.90 | - | baidu(vvy1) | config |
| UperNet | Swin_Base | 16 | 160k | 48.59 | 49.04 | - | baidu(y040) | config |
| UperNet | CSwin_Tiny | 16 | 160k | 49.46 | | baidu(l1cp) | baidu(y1eq) | config |
| UperNet | CSwin_Small | 16 | 160k | 50.88 | | baidu(6vwk) | baidu(fz2e) | config |
| UperNet | CSwin_Base | 16 | 160k | 50.64 | | baidu(0ys7) | baidu(83w3) | config |

Trans10kV2

| Model | Backbone | Batch_size | Iteration | mIoU (ss) | mIoU (ms+flip) | Backbone_checkpoint | Model_checkpoint | ConfigFile |
|---|---|---|---|---|---|---|---|---|
| Trans2seg_Medium | Resnet50c | 16 | 16k | 75.97 | - | google/baidu(4dd5) | google/baidu(w25r) | config |

GAN

| Model | FID | Image Size | Crop_pct | Interpolation | Link |
|---|---|---|---|---|---|
| styleformer_cifar10 | 2.73 | 32 | 1.0 | lanczos | google/baidu(ztky) |
| styleformer_stl10 | 15.65 | 48 | 1.0 | lanczos | google/baidu(i973) |
| styleformer_celeba | 3.32 | 64 | 1.0 | lanczos | google/baidu(fh5s) |
| styleformer_lsun | 9.68 | 128 | 1.0 | lanczos | google/baidu(158t) |

*The results are evaluated on the CIFAR-10, STL-10, CelebA, and LSUN-church datasets, using the fid50k_full metric.

Quick Demo for Image Classification

To use a model with pretrained weights, go to the specific subfolder, e.g., /image_classification/ViT/, then download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs.

Assuming the downloaded weight file is stored in ./vit_base_patch16_224.pdparams, you can use the vit_base_patch16_224 model in Python as follows:

import paddle
from config import get_config
from visual_transformer import build_vit as build_model
# config files are located in ./configs/
config = get_config('./configs/vit_base_patch16_224.yaml')
# build the model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./vit_base_patch16_224.pdparams')
model.set_dict(model_state_dict)

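Building on the snippet above, here is a minimal (and hedged) example of running inference on a single image; the image path is a placeholder, and the preprocessing assumes standard ImageNet settings rather than the exact values from the model config:

import paddle
import paddle.nn.functional as F
from PIL import Image
from paddle.vision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256, interpolation='bicubic'),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open('./example.jpg').convert('RGB')   # placeholder image path
x = preprocess(img).unsqueeze(0)                   # add batch dim: [1, 3, 224, 224]

model.eval()                                       # `model` comes from the snippet above
with paddle.no_grad():
    probs = F.softmax(model(x), axis=-1)
print(paddle.argmax(probs, axis=-1).item())        # predicted ImageNet class index
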
:robot: See the README file in each model folder for detailed usage.

Evaluation

To evaluate ViT model performance on ImageNet2012 with a single GPU, run the following script from the command line:

sh run_eval.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
    -cfg=./configs/vit_base_patch16_224.yaml \
    -dataset=imagenet2012 \
    -batch_size=16 \
    -data_path=/path/to/dataset/imagenet/val \
    -eval \
    -pretrained=/path/to/pretrained/model/vit_base_patch16_224  # .pdparams is NOT needed
<details> <summary> Run evaluation using multiple GPUs: </summary>
sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/vit_base_patch16_224.yaml \
    -dataset=imagenet2012 \
    -batch_size=16 \
    -data_path=/path/to/dataset/imagenet/val \
    -eval \
    -pretrained=/path/to/pretrained/model/vit_base_patch16_224   # .pdparams is NOT needed
</details>

Training

To train the ViT model on ImageNet2012 with a single GPU, run the following script from the command line:

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
  -cfg=./configs/vit_base_patch16_224.yaml \
  -dataset=imagenet2012 \
  -batch_size=32 \
  -data_path=/path/to/dataset/imagenet/train
<details> <summary> Run training using multiple GPUs: </summary>
sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/vit_base_patch16_224.yaml \
    -dataset=imagenet2012 \
    -batch_size=16 \
    -data_path=/path/to/dataset/imagenet/train
</details>

Contributing

Licenses

Contact