<div align="center"> <img src="imgs/overview.png" width="500px" /> </div>

RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers

This repository contains the official PyTorch implementation of the ICCV 2023 paper "RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers". RepQ-ViT decouples the quantization and inference processes and applies scale reparameterization to address the extreme activation distributions in vision transformers, namely post-LayerNorm and post-Softmax activations.
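For post-LayerNorm activations, the idea is to first fit channel-wise quantization parameters, then reparameterize them into a single layer-wise pair by folding the per-channel factors into the LayerNorm affine parameters and the next linear layer. The sketch below is a minimal illustration of that algebra, not the repository's API; the function and variable names (`reparam_layernorm`, `s`, `z`, `W`, `b`) are our own, and we assume the layer-wise targets are the means of the channel-wise scales and zero-points.

```python
import torch

def reparam_layernorm(gamma, beta, s, z, W, b):
    """Illustrative sketch of scale reparameterization for post-LayerNorm
    activations (not the official implementation).

    gamma, beta : LayerNorm affine parameters, shape (C,)
    s, z        : channel-wise quantization scales and zero-points, shape (C,)
    W, b        : the next linear layer's weight (O, C) and bias (O,)

    Returns adjusted (gamma', beta', W', b') such that the transformed
    activation X' = x_norm * gamma' + beta' lands on the SAME integer grid
    under a single layer-wise (s~, z~) as X did under channel-wise (s, z),
    and the next layer's output is unchanged.
    """
    # Layer-wise targets: assumed here to be the channel-wise means.
    s_t = s.mean()
    z_t = z.float().mean()
    r1 = s / s_t                       # per-channel scale ratios
    r2 = z.float() - z_t               # per-channel zero-point shifts
    # Fold the ratios into LayerNorm's affine parameters:
    # X' = X / r1 + s_t * r2  implies  X'/s_t + z_t == X/s + z.
    gamma_new = gamma / r1
    beta_new = beta / r1 + s_t * r2
    # Compensate in the next linear layer so its output is preserved:
    # X = r1 * (X' - s_t * r2), hence scale W's input columns by r1
    # and shift the bias by W @ (s * r2).
    W_new = W * r1.unsqueeze(0)
    b_new = b - W @ (s * r2)
    return gamma_new, beta_new, W_new, b_new
```

Because only `gamma`, `beta`, `W`, and `b` change, the reparameterized model can be quantized with a simple layer-wise quantizer at inference time while preserving the accuracy of the channel-wise calibration.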

Installation

Quantization

Please see classification readme for instructions to reproduce classification results on ImageNet and see detection readme for instructions to reproduce detection results on COCO.

Citation

If you find this implementation useful for your work, please cite the following paper:

@inproceedings{li2023repq,
  title={{RepQ-ViT}: Scale Reparameterization for Post-Training Quantization of Vision Transformers},
  author={Li, Zhikai and Xiao, Junrui and Yang, Lianwei and Gu, Qingyi},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={17227--17236},
  year={2023}
}