FQ-ViT [arXiv] [Slide]

This repo contains the official implementation of "FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer".

Introduction

Transformer-based architectures have achieved competitive performance in various computer vision tasks. Compared to CNNs, Transformers usually have more parameters and higher computational costs, which makes them challenging to deploy on resource-constrained hardware.

Most existing quantization approaches are designed and tested on CNNs and lack proper handling of Transformer-specific modules. Previous work found significant accuracy degradation when quantizing the LayerNorm and Softmax of Transformer-based architectures, and therefore left these modules in floating point. We revisit these two modules of Vision Transformers and uncover the reasons for the degradation. In this work, we propose FQ-ViT, the first fully quantized Vision Transformer, which contains two dedicated modules: Power-of-Two Factor (PTF) and Log-Int-Softmax (LIS).

LayerNorm quantized with Power-of-Two Factor (PTF)

The two figures below show that inter-channel variation is much more serious in Vision Transformers than in CNNs, which leads to unacceptable quantization errors with layer-wise quantization.

<div align=center> <img src="./figures/inter-channel_variation.png" width="850px" /> </div>

Taking advantage of both layer-wise and channel-wise quantization, we propose PTF for LayerNorm quantization. The core idea of PTF is to equip different channels with different power-of-two factors rather than different quantization scales, so the whole layer can still share a single quantization scale.
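As a rough illustration only (not the code in this repository; the function name and layout below are our own), this sketch fake-quantizes an activation with one shared layer-wise scale plus a per-channel power-of-two factor:

```python
import torch

def ptf_fake_quantize(x, n_bits=8, k_max=3):
    """Toy sketch of Power-of-Two Factor (PTF) quantization.

    Channels sit on the last dimension. The layer keeps a single shared
    scale s; channel c only gets an integer factor alpha_c in [0, k_max],
    so its effective step size is s * 2**alpha_c.
    """
    qmax = 2 ** (n_bits - 1) - 1
    # Per-channel dynamic range.
    ch_max = x.abs().amax(dim=tuple(range(x.dim() - 1)))            # shape (C,)
    # Shared layer-wise scale chosen from the smallest channel range.
    s = ch_max.min() / qmax
    # Smallest power-of-two factor that covers each channel's range.
    alpha = torch.clamp(torch.ceil(torch.log2(ch_max / (s * qmax))), 0, k_max)
    step = s * 2.0 ** alpha                                         # per-channel step size
    q = torch.clamp(torch.round(x / step), -qmax - 1, qmax)         # integer codes
    return q * step                                                 # dequantized values

# Activations with strong inter-channel variation, as observed in ViT LayerNorm inputs.
x = torch.randn(4, 197, 768) * torch.logspace(-1, 1, 768)
print("mean abs quantization error:", (ptf_fake_quantize(x) - x).abs().mean().item())
```

Because the per-channel factors are powers of two, they can be realized with bit-shifts at inference time rather than extra floating-point multiplications.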

Softmax quantized with Log-Int-Softmax (LIS)

The storage and computation of the attention map is a known bottleneck of Transformer structures, so we want to quantize it to an extremely low bit-width (e.g., 4-bit). However, directly applying 4-bit uniform quantization causes severe accuracy degradation. We observe that the Softmax output is concentrated around fairly small values, with only a few outliers close to 1. As the visualization below shows, log2 quantization assigns more quantization bins than uniform quantization to the densely populated small-value interval.

<div align=center> <img src="./figures/distribution.png" width="400px" /> </div>
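The effect is easy to reproduce numerically. The snippet below (our own illustration, not repository code) quantizes softmax-like values to 4 bits with uniform and with log2 levels and compares how many bins each actually uses:

```python
import torch

torch.manual_seed(0)
# Softmax outputs over 197 tokens: most values are near 1/197, a few are larger.
attn = torch.softmax(torch.randn(1000, 197), dim=-1).flatten()

# 4-bit uniform quantization on [0, 1]: 16 evenly spaced levels.
uniform_q = torch.round(attn * 15) / 15

# 4-bit log2 quantization: levels are 2**0, 2**-1, ..., 2**-15.
log2_q = 2.0 ** (-torch.clamp(torch.round(-torch.log2(attn)), 0, 15))

for name, q in [("uniform", uniform_q), ("log2", log2_q)]:
    print(f"{name}: bins used = {q.unique().numel()}, "
          f"mean abs error = {(q - attn).abs().mean().item():.5f}")
```

In this toy setting, uniform quantization collapses nearly all values into its lowest bins, while log2 quantization spreads the same values over noticeably more levels.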

Combining log2 quantization with i-exp, the polynomial approximation of the exponential function introduced by I-BERT, we propose LIS: an integer-only, faster Softmax with lower memory consumption.

The whole process is visualized below.

<div align=center> <img src="./figures/log-int-softmax.png" width="400px" /> </div>
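To make the pipeline concrete, here is a simplified floating-point sketch of the idea (the actual LIS runs entirely on integers; the polynomial constants follow I-BERT's i-exp, while the function names and the rest of the code are our own simplification):

```python
import torch

def poly_exp(p):
    """I-BERT-style second-order fit of exp(p) on p in (-ln2, 0]."""
    a, b, c = 0.3585, 1.353, 0.344
    return a * (p + b) ** 2 + c

def log_int_softmax_sketch(logits, n_bits=4):
    """Floating-point sketch of Log-Int-Softmax (LIS).

    1) exp(x) is decomposed as 2**(-z) * exp(p) with p in (-ln2, 0], and exp(p)
       is replaced by the i-exp polynomial, leaving only integer-friendly ops.
    2) The softmax output is quantized to n_bits with log2 (power-of-two) levels.
    """
    ln2 = 0.6931471805599453
    x = logits - logits.amax(dim=-1, keepdim=True)       # x <= 0 for numerical safety
    z = torch.floor(-x / ln2)                            # non-negative integer part
    p = x + z * ln2                                      # remainder in (-ln2, 0]
    e = poly_exp(p) * 2.0 ** (-z)                        # approximate exp(x)
    probs = e / e.sum(dim=-1, keepdim=True)
    code = torch.clamp(torch.round(-torch.log2(probs)), 0, 2 ** n_bits - 1)
    return 2.0 ** (-code)                                # log2-quantized attention map

attn = log_int_softmax_sketch(torch.randn(2, 3, 4, 4))   # (batch, heads, tokens, tokens)
print(attn)
```

Because the stored codes are exponents of powers of two, the subsequent multiplication between the attention map and V can be carried out with bit-shifts instead of multiplications.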

Getting Started

Install

```bash
git clone https://github.com/megvii-research/FQ-ViT.git
cd FQ-ViT
conda create -n fq-vit python=3.7 -y
conda activate fq-vit
conda install pytorch=1.7.1 torchvision cudatoolkit=10.1 -c pytorch
```

Data preparation

Download the standard ImageNet dataset and organize it as follows:

```
├── imagenet
│   ├── train
│   ├── val
```

Run

Example: Evaluate quantized DeiT-S with MinMax quantizer and our proposed PTF and LIS.

```bash
python test_quant.py deit_small <YOUR_DATA_DIR> --quant --ptf --lis --quant-method minmax
```

Results on ImageNet

We compare our methods with several post-training quantization strategies, including MinMax, EMA, Percentile, OMSE, Bit-Split, and PTQ for ViT.

The following results are top-1 accuracy (%) on ImageNet.

| Method | W/A/Attn Bits | DeiT-T | DeiT-S | DeiT-B | ViT-B | ViT-L | Swin-T | Swin-S | Swin-B |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Full Precision | 32/32/32 | 72.21 | 79.85 | 81.85 | 84.53 | 85.81 | 81.35 | 83.20 | 83.60 |
| MinMax | 8/8/8 | 70.94 | 75.05 | 78.02 | 23.64 | 3.37 | 64.38 | 74.37 | 25.58 |
| EMA | 8/8/8 | 71.17 | 75.71 | 78.82 | 30.30 | 3.53 | 70.81 | 75.05 | 28.00 |
| Percentile | 8/8/8 | 71.47 | 76.57 | 78.37 | 46.69 | 5.85 | 78.78 | 78.12 | 40.93 |
| OMSE | 8/8/8 | 71.30 | 75.03 | 79.57 | 73.39 | 11.32 | 79.30 | 78.96 | 48.55 |
| Bit-Split | 8/8/8 | - | 77.06 | 79.42 | - | - | - | - | - |
| PTQ for ViT | 8/8/8 | - | 77.47 | 80.48 | - | - | - | - | - |
| Ours | 8/8/8 | 71.61 | 79.17 | 81.20 | 83.31 | 85.03 | 80.51 | 82.71 | 82.97 |
| Ours | 8/8/4 | 71.07 | 78.40 | 80.85 | 82.68 | 84.89 | 80.04 | 82.47 | 82.38 |

Citation

If you find this repo useful in your research, please consider citing the following paper:

```bibtex
@inproceedings{lin2022fqvit,
  title={FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer},
  author={Lin, Yang and Zhang, Tianyu and Sun, Peiqin and Li, Zheng and Zhou, Shuchang},
  booktitle={Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  pages={1173--1179},
  year={2022}
}
```

Join Us

You are welcome to join our team (as a full-time member or an intern) if you are interested in quantization, pruning, distillation, self-supervised learning, and model deployment.

Please send your resume to sunpeiqin@megvii.com.