LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition (AAAI 2024)

This is the PyTorch implementation of our paper "LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition" (AAAI 2024).

Pre-trained Models

| Backbone | Location Stage Size | Top-1 Accuracy | Checkpoint (Google Drive) | Checkpoint (Baidu) |
|----------|---------------------|----------------|---------------------------|--------------------|
| DeiT-S   | 7x7 | 80.8 (m=5, threshold=0.76) | Google Drive | Baidu Drive (code: v435) |
| DeiT-S   | 9x9 | 82.2 (m=8, threshold=0.75) | Google Drive | Baidu Drive (code: b69i) |
*.pth
├── model: state dictionary of the model
├── flops: a list of the GFLOPs for exiting at each stage
├── anytime_classification: Top-1 accuracy of each stage
└── budgeted_batch_classification: results of budgeted batch classification (a two-item list; entries [0] and [1] hold the two coordinate arrays of the curve)
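
For reference, a minimal sketch of reading these fields back, assuming the checkpoint is a plain dictionary written by torch.save and that flops and anytime_classification are lists of floats (the filename is a placeholder):

```python
import torch

# Load the checkpoint on CPU; it is assumed to be a dict saved with torch.save.
ckpt = torch.load("lf_deit_small.pth", map_location="cpu")
print(list(ckpt.keys()))  # model, flops, anytime_classification, budgeted_batch_classification

# Per-stage accuracy/cost, following the field layout described above.
for stage, (acc, gflops) in enumerate(zip(ckpt["anytime_classification"], ckpt["flops"])):
    print(f"stage {stage}: top-1 {acc:.1f}% at {gflops:.2f} GFLOPs")
```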

Requirements
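
The code base builds on DeiT (see Acknowledgment), so a recent PyTorch/torchvision environment plus timm should cover the core dependencies; choose versions compatible with your CUDA setup.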

Data Preparation

Download ImageNet and arrange it in the standard folder-per-class layout:

ImageNet
├── train
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
├── val
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
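
This is the standard folder-per-class layout expected by torchvision.datasets.ImageFolder, so the tree can be sanity-checked with a few lines (PATH_TO_IMAGENET is a placeholder):

```python
from torchvision import datasets, transforms

# Standard 224x224 ImageNet validation preprocessing.
val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder expects exactly the layout above: one subfolder per class.
val_set = datasets.ImageFolder("PATH_TO_IMAGENET/val", transform=val_transform)
print(len(val_set), "images across", len(val_set.classes), "classes")
```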

Evaluate Pre-trained Models
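
The four commands below differ only in --eval-mode (0-3). Mode 2 performs threshold-based dynamic inference and therefore also takes --threshold; set --location-stage-size to 7 or 9 to match the checkpoint being evaluated.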

CUDA_VISIBLE_DEVICES=0 python dynamic_inference.py --eval-mode 0 --data_url PATH_TO_IMAGENET  --batch_size 64 --model lf_deit_small --checkpoint_path PATH_TO_CHECKPOINT  --location-stage-size {7,9} 

CUDA_VISIBLE_DEVICES=0 python dynamic_inference.py --eval-mode 1 --data_url PATH_TO_IMAGENET  --batch_size 64 --model lf_deit_small --checkpoint_path PATH_TO_CHECKPOINT  --location-stage-size {7,9} 

CUDA_VISIBLE_DEVICES=0 python dynamic_inference.py --eval-mode 2 --data_url PATH_TO_IMAGENET  --batch_size 1024 --model lf_deit_small --checkpoint_path PATH_TO_CHECKPOINT  --location-stage-size {7,9} --threshold THRESHOLD

CUDA_VISIBLE_DEVICES=0 python dynamic_inference.py --eval-mode 3 --data_url PATH_TO_IMAGENET  --batch_size 64 --model lf_deit_small --checkpoint_path PATH_TO_CHECKPOINT  --location-stage-size {7,9} 
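
The budgeted_batch_classification entry stored in the checkpoint can be turned into a trade-off plot; a hedged sketch follows (the axis interpretation of the two coordinate arrays, and the filename, are assumptions):

```python
import torch
import matplotlib.pyplot as plt

ckpt = torch.load("lf_deit_small.pth", map_location="cpu")  # placeholder filename
xs, ys = ckpt["budgeted_batch_classification"]  # the two coordinate arrays of the curve

# Assumption: xs is the average compute budget (GFLOPs), ys the Top-1 accuracy.
plt.plot(xs, ys, marker="o")
plt.xlabel("GFLOPs (assumed)")
plt.ylabel("Top-1 accuracy (assumed)")
plt.savefig("budgeted_curve.png", dpi=150)
```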

Train

python -m torch.distributed.launch --use_env --nproc_per_node=4 main_deit.py  --model lf_deit_small --batch-size 256 --data-path PATH_TO_IMAGENET --location-stage-size {7,9} --dist-eval --output PATH_TO_LOG
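
Note: torch.distributed.launch is deprecated on recent PyTorch releases; running torchrun --nproc_per_node=4 main_deit.py with the same script arguments (dropping --use_env, which torchrun implies) should be an equivalent invocation.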

Visualization

python visualize.py --model lf_deit_small --resume PATH_TO_CHECKPOINT --output_dir PATH_TO_SAVE --data-path PATH_TO_IMAGENET --batch-size 64

Citation

@inproceedings{LFViT,
  title={LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition},
  author={Youbing Hu and Yun Cheng and Anqi Lu and Zhiqiang Cao and Dawei Wei and Jie Liu and Zhijun Li},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  pages={2274--2284},
  year={2024}
}

Acknowledgment

Our DeiT code is from here. The visualization code is modified from Evo-ViT, and the dynamic inference with early exiting is modified from DVT. We thank the authors of these projects.