<div align=center> <img src="overview.png" width="850px" /> </div>

Patch Similarity Aware Data-Free Quantization for Vision Transformers

This repository contains the official PyTorch implementation for the ECCV 2022 paper "Patch Similarity Aware Data-Free Quantization for Vision Transformers". To the best of our knowledge, this is the first work on data-free quantization for vision transformers. Below are instructions for reproducing the results.
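
For intuition, PSAQ-ViT synthesizes calibration images by optimizing random noise so that the pre-trained model's patch responses look statistically diverse, using the entropy of pairwise patch similarities as the objective. Below is a minimal PyTorch sketch of such an objective, not the repository's actual code; the function name, the Gaussian-kernel density estimate, and the hyperparameters (bandwidth, number of bins) are illustrative assumptions.

# Minimal sketch (not the repository's code) of a patch-similarity entropy
# objective: compute pairwise cosine similarity between patch tokens, estimate
# the similarity distribution with a differentiable Gaussian kernel, and return
# its entropy (to be maximized when synthesizing calibration images).
import torch
import torch.nn.functional as F

def patch_similarity_entropy(patch_tokens, bandwidth=0.05, num_bins=128):
    """patch_tokens: (B, N, D) patch embeddings from one transformer block."""
    # Pairwise cosine similarity between the N patch tokens of each image.
    tokens = F.normalize(patch_tokens, dim=-1)           # (B, N, D)
    sim = torch.matmul(tokens, tokens.transpose(1, 2))   # (B, N, N)

    # Keep only the upper triangle (distinct patch pairs).
    b, n, _ = sim.shape
    iu = torch.triu_indices(n, n, offset=1)
    sim = sim[:, iu[0], iu[1]]                            # (B, N*(N-1)/2)

    # Differentiable kernel density estimate of the similarity distribution.
    centers = torch.linspace(-1.0, 1.0, num_bins, device=sim.device)
    diff = sim.unsqueeze(-1) - centers                     # (B, P, num_bins)
    density = torch.exp(-0.5 * (diff / bandwidth) ** 2).mean(dim=1)
    density = density / (density.sum(dim=-1, keepdim=True) + 1e-8)

    # Entropy of the estimated distribution (higher = more diverse patches).
    entropy = -(density * torch.log(density + 1e-8)).sum(dim=-1)
    return entropy.mean()

Roughly speaking, the synthetic calibration images are then obtained by gradient ascent on this quantity with respect to the input pixels while the full-precision model stays frozen.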

Installation

git clone https://github.com/zkkli/PSAQ-ViT.git
cd PSAQ-ViT

Quantization

python test_quant.py [--model] [--dataset] [--w_bit] [--a_bit] [--mode]

optional arguments:
--model: Model architecture, choices are:
         deit_tiny, deit_small, deit_base, swin_tiny, and swin_small.
--dataset: Path to ImageNet dataset.
--w_bit: Bit-precision of weights, default=8.
--a_bit: Bit-precision of activations, default=8.
--mode: Mode of calibration data,
        0: Generated fake data (PSAQ-ViT)
        1: Gaussian noise
        2: Real data
For example, to evaluate DeiT-B with the default 8-bit weights and activations under each calibration mode:

python test_quant.py --model deit_base --dataset <YOUR_DATA_DIR> --mode 0
python test_quant.py --model deit_base --dataset <YOUR_DATA_DIR> --mode 1
python test_quant.py --model deit_base --dataset <YOUR_DATA_DIR> --mode 2
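
For readers unfamiliar with the W/A notation: --w_bit and --a_bit set the number of bits used for weights and activations, and the batch selected by --mode only serves to calibrate the quantization ranges. The snippet below is a minimal, illustrative PyTorch sketch of symmetric min-max uniform quantization, not the repository's quantizer; all names and the exact quantization scheme are assumptions for illustration.

# Illustrative sketch of symmetric min-max uniform "fake" quantization,
# showing how a bit-width (e.g. --w_bit 4, --a_bit 8) maps to a discrete grid.
import torch

def uniform_quantize(x, num_bits):
    """Symmetric uniform quantization of a tensor to num_bits."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return x_q * scale  # dequantized ("fake-quantized") tensor

# Example: 4-bit weights and 8-bit activations, mirroring the W4/A8 setting.
weight = torch.randn(768, 768)                 # hypothetical weight matrix
activation = torch.relu(torch.randn(1, 197, 768))  # DeiT-like token features
w_q = uniform_quantize(weight, num_bits=4)
a_q = uniform_quantize(activation, num_bits=8)
print((weight - w_q).abs().mean(), (activation - a_q).abs().mean())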

Results

Below are the experimental results of the proposed PSAQ-ViT on the ImageNet dataset (evaluated on an RTX 3090 GPU), which you should be able to reproduce with the commands above.

| Model (FP32 Top-1 %) | Prec.  | Top-1 (%) | Prec.  | Top-1 (%) |
|----------------------|--------|-----------|--------|-----------|
| DeiT-T (72.21)       | W4/A8  | 65.57     | W8/A8  | 71.56     |
| DeiT-S (79.85)       | W4/A8  | 73.23     | W8/A8  | 76.92     |
| DeiT-B (81.85)       | W4/A8  | 77.05     | W8/A8  | 79.10     |
| Swin-T (81.35)       | W4/A8  | 71.79     | W8/A8  | 75.35     |
| Swin-S (83.20)       | W4/A8  | 75.14     | W8/A8  | 76.64     |

Citation

If you find this implementation useful for your work, please cite the following paper:

@inproceedings{li2022psaqvit,
  title={Patch Similarity Aware Data-Free Quantization for Vision Transformers},
  author={Li, Zhikai and Ma, Liping and Chen, Mengjuan and Xiao, Junrui and Gu, Qingyi},
  booktitle={European Conference on Computer Vision},
  pages={154--170},
  year={2022}
}