
Awesome VisionTransformer

This repository contains PyTorch evaluation code, training code and pretrained models.

We will also upload results for other model architectures.

Getting Started

cd ./deit

Before using it, make sure you have the pytorch-image-models (timm) package by Ross Wightman installed. Note that our work relies on the augmentations proposed in this library.
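As a quick sanity check (a minimal sketch; the required timm version is not pinned here, and the set of registered model names varies across versions), you can verify that timm is installed and that the DeiT model definitions used below are available:

```python
import timm

# Print the installed timm version and the DeiT variants it registers.
# The exact names listed depend on the installed timm version.
print(timm.__version__)
print(timm.list_models("deit*"))
```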

SWIN

We provide a SWIN model pretrained on ImageNet 2012 and finetune this checkpoint on segmentation datasets.

| name | acc@1 | #params | url |
|------|-------|---------|-----|
| SWIN-Large | 87.4 | 197M | model |

We finetune the checkpoint and obtain the following results.

| name | mIoU | mIoU (ms + flip) | #params | url |
|------|------|------------------|---------|-----|
| ADE20K | 83.0 | 54.4 | 234M | model, log |
| CityScapes | 82.9 | 83.9 | 234M | model, log |
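The linked files are assumed to be ordinary PyTorch checkpoints. As a minimal sketch under that assumption, you can inspect a downloaded checkpoint before plugging it into a segmentation pipeline (the filename `swin_large_ade20k.pth` is only a placeholder):

```python
import torch

# Placeholder path: substitute the checkpoint file downloaded from the table above.
ckpt = torch.load("swin_large_ade20k.pth", map_location="cpu")

# Checkpoints commonly wrap their weights under a key such as "model" or "state_dict";
# printing the top-level keys shows which convention this file uses.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
else:
    print(type(ckpt))
```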

For more details, please refer to the README.

DEIT

Model Zoo

We provide models pretrained on ImageNet 2012. More models will be uploaded.

| name | acc@1 | acc@5 | #params | url |
|------|-------|-------|---------|-----|
| VIT-B12 | 82.9 | 96.3 | 86M | model |
| VIT-B24 | 83.3 | 96.4 | 172M | model |
| VIT-B12-384 | 84.2 | 97.0 | 86M | model |

We also finetune the checkpoints released by VIT and obtain the following results.

| name | acc@1 | acc@5 | #params | url |
|------|-------|-------|---------|-----|
| VIT-L24 | 83.9 | 96.7 | 305M | model |
| VIT-L24-384 | 85.4 | 96.7 | 305M | model |
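The evaluation command below passes `--model deit_base_patch16_224`, which is also a model name registered in timm, so a downloaded checkpoint (e.g. the VIT-B12 entry above) can be loaded into a standalone model for quick experiments. This is only a sketch under that assumption: `model.pth` is a placeholder path, and the `"model"` key reflects the usual DeiT checkpoint layout rather than something guaranteed by these files.

```python
import torch
import timm

# Placeholder path for a checkpoint downloaded from the model zoo above.
ckpt_path = "model.pth"

# Instantiate the architecture via timm; the name matches the --model flag
# used by the evaluation command below.
model = timm.create_model("deit_base_patch16_224", pretrained=False)

ckpt = torch.load(ckpt_path, map_location="cpu")
# DeiT-style checkpoints usually store weights under the "model" key;
# fall back to the raw dict otherwise.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

model.eval()
```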

Evaluate

For Deit-B12, run:

python -m torch.distributed.launch --nproc_per_node=XX --master_port=XX --use_env main.py --model deit_base_patch16_224 --aa rand-m9-mstd0.5-inc1  --input-size 224 --batch-size 16 --num_workers 2 --data-path path --output_dir output_dir --resume model.pth --eval

which gives:

Acc@1 82.928 Acc@5 96.342 loss 0.721

Train

The training code is not yet fully available, and the results are currently not reproducible. Please wait for our updates.

For Deit-B12, run:

python -m torch.distributed.launch --nproc_per_node=XX --master_port=XX --use_env main.py --model deit_base_patch16_224 --aa rand-m9-mstd0.5-inc1 --input-size 224 --batch-size 72 --num_workers 4 --data-path path --output_dir output_dir --no-repeated-aug --epochs 300 --model-ema-decay 0.99996 --drop-path 0.5 --drop .0 --mixup .0 --mixup-switch-prob 0.0

and then further refine the model with:

python -m torch.distributed.launch --nproc_per_node=XX --master_port=XX --use_env main.py --model deit_base_patch16_224 --aa rand-m9-mstd0.5-inc1 --input-size 224 --batch-size 72 --num_workers 4 --data-path path --output_dir output_dir --no-repeated-aug --start_epoch 300 --epochs 400 --resume model.pth --model-ema-decay 0.99996 --drop-path 0.75 --drop .0 --mixup .0 --mixup-switch-prob 0.0

Finetune Models Trained on ImageNet-22k

python -m torch.distributed.launch --nproc_per_node=XX --master_port=XX --use_env main.py --model deit_large_patch16_224 --aa rand-n1-m1-mstd0.5-inc1 --input-size 224 --batch-size 16 --num_workers 1 --data-path path --output_dir output_dir --no-repeated-aug --smoothing 1e-6 --weight-decay 1e-8 --lr 5e-5 --start_epoch 0 --reprob 1e-6 --resume vit_checkpoint --epochs 40 --model-ema-decay 0.99996 --drop-path 0. --drop .0 --mixup .0 --mixup-switch-prob 0.0 --no-use-talk

To evaluate, run:

python -m torch.distributed.launch --nproc_per_node=XX --master_port=XX --use_env main.py --model deit_large_patch16_224 --aa rand-n1-m1-mstd0.5-inc1 --input-size 224 --batch-size 16 --num_workers 1 --data-path path --output_dir output_dir --no-repeated-aug --smoothing 1e-6 --weight-decay 1e-8 --lr 5e-5 --start_epoch 0 --reprob 1e-6 --resume vit_checkpoint --epochs 40 --model-ema-decay 0.99996 --drop-path 0. --drop .0 --mixup .0 --mixup-switch-prob 0.0 --no-use-talk --eval