Vision Transformer with Progressive Sampling

This is the official implementation of the paper Vision Transformer with Progressive Sampling, ICCV 2021.

Installation Instructions

git clone git@github.com:yuexy/PS-ViT.git
cd PS-ViT
conda create -n ps_vit python=3.7 -y
conda activate ps_vit
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch
pip3 install timm==0.3.4 einops pyyaml
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
python setup.py build_ext --inplace
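
If the installation succeeded, the following one-liner (a quick sanity check, not part of the original instructions) should print the PyTorch and timm versions and report True for CUDA availability:

python -c "import torch, timm, apex; print(torch.__version__, timm.__version__, torch.cuda.is_available())"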

Results and Models

All models listed below are evaluated with an input size of 224x224.

Model          Top-1 Acc (%)   #Params   FLOPs   Download
PS-ViT-Ti/14   75.6            4.8M      1.6G    Coming Soon
PS-ViT-B/10    80.6            21.3M     3.1G    Coming Soon
PS-ViT-B/14    81.7            21.3M     5.4G    Google Drive
PS-ViT-B/18    82.3            21.3M     8.8G    Google Drive

Evaluation

To evaluate a pre-trained PS-ViT model on the ImageNet validation set, run:

python3 main.py <data-root> --model <model-name> -b <batch-size> --eval_checkpoint <path-to-checkpoint>
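
For example, assuming the ImageNet data lives under /path/to/imagenet and the PS-ViT-B/14 checkpoint has been saved as ps_vit_b_14.pth (both paths and the model name ps_vit_b_14 are placeholders; check the model definitions in this repository for the exact names), the call might look like:

python3 main.py /path/to/imagenet --model ps_vit_b_14 -b 128 --eval_checkpoint ps_vit_b_14.pth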

Training from scratch

To train a PS-ViT on ImageNet from scratch, run:

bash ./scripts/train_distributed.sh <job-name> <config-path> <num-gpus>
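
For example, a run on 8 GPUs might look like the following, where the job name is arbitrary and the config path is a placeholder for the corresponding file shipped with this repository:

bash ./scripts/train_distributed.sh ps_vit_b_14 configs/ps_vit_b_14.yaml 8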

Citing PS-ViT

@inproceedings{psvit,
  title={Vision Transformer with Progressive Sampling},
  author={Yue, Xiaoyu and Sun, Shuyang and Kuang, Zhanghui and Wei, Meng and Torr, Philip and Zhang, Wayne and Lin, Dahua},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Contact

If you have any questions, feel free to contact Xiaoyu Yue by email at yuexiaoyu002@gmail.com.