Vision Transformer with Deformable Attention
This repository contains the object detection and instance segmentation code for the papers Vision Transformer with Deformable Attention [arXiv] and DAT++: Spatially Dynamic Vision Transformer with Deformable Attention (extended version) [arXiv].
This code is based on mmdetection and Swin Detection. To get started, you can follow the instructions in Swin Transformer.
Dependencies
In addition to the dependencies of the classification codebase, the following packages are required:
- mmcv-full == 1.4.0
- mmdetection == 2.26.0
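To quickly confirm that the installed versions match, a minimal Python check such as the one below can be run (it only prints the installed versions and is not part of the codebase):

```python
# Optional sanity check: print the installed mmcv / mmdetection versions.
import mmcv
import mmdet

print("mmcv-full:", mmcv.__version__)      # expected: 1.4.0
print("mmdetection:", mmdet.__version__)   # expected: 2.26.0
```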
Evaluating Pretrained Models
RetinaNet
| Backbone | Schedule | bbox mAP | mask mAP | config | pretrained weights |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DAT-T++ | 1x | 46.8 | - | config | OneDrive / TsinghuaCloud |
| DAT-T++ | 3x | 49.2 | - | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 1x | 48.3 | - | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 3x | 50.2 | - | config | OneDrive / TsinghuaCloud |
Mask R-CNN
| Backbone | Schedule | bbox mAP | mask mAP | config | pretrained weights |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DAT-T++ | 1x | 48.7 | 43.7 | config | OneDrive / TsinghuaCloud |
| DAT-T++ | 3x | 50.5 | 45.1 | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 1x | 49.8 | 44.5 | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 3x | 51.2 | 45.7 | config | OneDrive / TsinghuaCloud |
Cascade Mask R-CNN
| Backbone | Schedule | bbox mAP | mask mAP | config | pretrained weights |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DAT-T++ | 1x | 52.2 | 45.0 | config | OneDrive / TsinghuaCloud |
| DAT-T++ | 3x | 53.0 | 46.0 | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 3x | 54.2 | 46.9 | config | OneDrive / TsinghuaCloud |
| DAT-B++ | 3x | 54.5 | 47.0 | config | OneDrive / TsinghuaCloud |
To evaluate a pretrained checkpoint, please download the pretrained weights to your local machine and run the mmdetection test scripts as follows:
```bash
# single-gpu testing
python tools/test.py <CONFIG_FILE> <DET_CHECKPOINT_FILE> --eval bbox segm

# multi-gpu testing
bash tools/dist_test.sh <CONFIG_FILE> <DET_CHECKPOINT_FILE> <GPU_NUM> --eval bbox segm
```
Please notice: Before training or evaluation, set the `data_root` variable in `configs/_base_/datasets/coco_detection.py` (RetinaNet) and `configs/_base_/datasets/coco_instance.py` (Mask R-CNN & Cascade Mask R-CNN) to the path where the MS-COCO data is stored. Since evaluation does not require pretrained backbone weights, you can set `pretrained = None` in `<CONFIG_FILE>`.
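For reference, the relevant settings look roughly like the sketch below (an illustrative excerpt rather than the exact contents of the config files; the data path is a placeholder):

```python
# configs/_base_/datasets/coco_instance.py -- illustrative excerpt
data_root = 'data/coco/'   # placeholder: set to where your MS-COCO data is stored

# <CONFIG_FILE> -- illustrative excerpt, for evaluation only
pretrained = None          # no pretrained backbone weights are needed to evaluate
```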
Training
To train a detector with pre-trained models, run:
```bash
# single-gpu training
python tools/train.py <CONFIG_FILE>

# multi-gpu training
bash tools/dist_train.sh <CONFIG_FILE> <GPU_NUM>
```
Please notice: Make sure the `pretrained` variable in `<CONFIG_FILE>` is correctly set to the path of the pretrained DAT model.
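For training, that line should instead point to the ImageNet-pretrained DAT checkpoint, e.g. (a hypothetical sketch; the filename is a placeholder):

```python
# <CONFIG_FILE> -- illustrative excerpt, for training
pretrained = '/path/to/dat_plus_plus_tiny_imagenet1k.pth'  # placeholder checkpoint path
```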
In our experiments, we typically use 4 nodes of NVIDIA A100 (40GB) GPUs to train the models, so the learning rates are scaled to 4 times the default values for each detector.
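As an illustration of this scaling rule only, and assuming an AdamW base learning rate of 1e-4 (a common default in Swin-based detection configs, not verified against every config in this repository), the scaled optimizer setting would look roughly like:

```python
# Illustrative sketch of the 4x learning-rate scaling used with 4 nodes.
# The 1e-4 base value is an assumption; check the optimizer section of <CONFIG_FILE>.
optimizer = dict(type='AdamW', lr=1e-4 * 4)  # other optimizer fields left at the codebase defaults
```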
Acknowledgements
This code is developed on top of Swin Transformer; we thank the authors for their efficient and neat codebase. The computational resources supporting this work are provided by Hangzhou High-Flyer AI Fundamental Research Co., Ltd.
Citation
If you find our work useful in your research, please consider citing:
```
@article{xia2023dat,
    title={DAT++: Spatially Dynamic Vision Transformer with Deformable Attention},
    author={Zhuofan Xia and Xuran Pan and Shiji Song and Li Erran Li and Gao Huang},
    year={2023},
    journal={arXiv preprint arXiv:2309.01430},
}

@InProceedings{Xia_2022_CVPR,
    author    = {Xia, Zhuofan and Pan, Xuran and Song, Shiji and Li, Li Erran and Huang, Gao},
    title     = {Vision Transformer With Deformable Attention},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {4794-4803}
}
```
Contact
If you have any questions or concerns, please send email to xzf23@mails.tsinghua.edu.cn.