# Q-DETR: An Efficient Low-Bit Quantized Detection Transformer
## Usage
First, clone the repository locally:
```
git clone https://github.com/facebookresearch/detr.git
```
Then, install PyTorch 1.5+ and torchvision 0.6+:
```
conda install -c pytorch pytorch torchvision
```
Install pycocotools (for evaluation on COCO) and scipy (for training):
```
conda install cython scipy
pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
That's it, should be good to train and evaluate detection models.
(optional) To work with panoptic segmentation, install panopticapi:
```
pip install git+https://github.com/cocodataset/panopticapi.git
```
## Data preparation
Download and extract COCO 2017 train and val images with annotations from http://cocodataset.org. We expect the directory structure to be the following:
```
path/to/coco/
  annotations/  # annotation json files
  train2017/    # train images
  val2017/      # val images
```
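As a quick sanity check before training or evaluation, you can verify that the expected layout exists. This is a minimal sketch; the helper name and the `path/to/coco` placeholder are illustrative, not part of the repository:

```python
import os

def check_coco_layout(root):
    """Return the expected COCO 2017 subdirectories that are missing under root."""
    expected = ["annotations", "train2017", "val2017"]
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

missing = check_coco_layout("path/to/coco")
if missing:
    print("Missing directories:", missing)
```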
## Evaluation
Modify the script `evaluate_coco.sh`:
- set `--coco_path` to your COCO dataset directory
- set `--n_bit` to 2, 3, or 4 (the quantization bit-width)

Then run:
```
bash evaluate_coco.sh
```
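For intuition on what the `--n_bit` setting controls, here is a generic uniform-quantizer sketch. This is illustrative only and is not Q-DETR's actual quantization scheme (which is described in the paper); it just shows how a bit-width of n limits values to 2^n representable levels:

```python
def uniform_quantize(x, n_bit, lo=-1.0, hi=1.0):
    """Map x in [lo, hi] onto the nearest of 2**n_bit evenly spaced levels."""
    levels = 2 ** n_bit
    x = min(max(x, lo), hi)          # clamp to the quantization range
    step = (hi - lo) / (levels - 1)  # spacing between adjacent levels
    return lo + round((x - lo) / step) * step

# A 2-bit quantizer has only 4 levels: -1, -1/3, 1/3, 1
print(uniform_quantize(0.2, 2))  # snaps to the level nearest 0.2
```

Lower bit-widths shrink model size and arithmetic cost at the price of accuracy, which is the trade-off visible in the Model Zoo table below (4-bit vs. 2-bit box AP).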
## Model Zoo
- Q-DETR based on SMCA-DETR on COCO
| Methods | Bit-width | Epochs | box AP | Model Link |
|---|---|---|---|---|
| Real-valued | 32-bit | 50 | 41.0 | - |
| Q-DETR | 4-bit | 50 | 38.5 | Model |
| Q-DETR | 2-bit | 50 | 32.4 | Model |
## Citation
If you find this repository useful, please consider citing our work:
```
@inproceedings{xu2023q,
  title={Q-DETR: An Efficient Low-Bit Quantized Detection Transformer},
  author={Xu, Sheng and Li, Yanjing and Lin, Mingbao and Gao, Peng and Guo, Guodong and L{\"u}, Jinhu and Zhang, Baochang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3842--3851},
  year={2023}
}
```
## Acknowledgement
This project borrows heavily from SMCA-DETR.