<div align="center"> <h1>MIMDet &#127917;</h1> <h3>Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection</h3>

Yuxin Fang<sup>1</sup> *, Shusheng Yang<sup>1</sup> *, Shijie Wang<sup>1</sup> *, Yixiao Ge<sup>2</sup>, Ying Shan<sup>2</sup>, Xinggang Wang<sup>1 :email:</sup>

<sup>1</sup> School of EIC, HUST, <sup>2</sup> ARC Lab, Tencent PCG.

(*) equal contribution, (<sup>:email:</sup>) corresponding author.

ICCV 2023 [paper]

</div>

News

Introduction

<p align="center"> <img src="MIMDet.png" width=80%> </p>

This repo provides code and pretrained models for MIMDet (Masked Image Modeling for Detection).

Models and Main Results

Mask R-CNN

| Model | Sample Ratio | Schedule | Aug | Box AP | Mask AP | #params | config | model / log |
|---|---|---|---|---|---|---|---|---|
| MIMDet-ViT-B | 0.5 | 3x | [480-800, 1333] w/crop | 51.7 | 46.2 | 127.96M | config | model / log |
| MIMDet-ViT-L | 0.5 | 3x | [480-800, 1333] w/crop | 54.3 | 48.2 | 349.33M | config | model / log |
| Benchmarking-ViT-B | - | 25ep | [1024, 1024] LSJ(0.1-2) | 48.0 | 43.0 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 50ep | [1024, 1024] LSJ(0.1-2) | 50.2 | 44.9 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 100ep | [1024, 1024] LSJ(0.1-2) | 50.4 | 44.9 | 118.67M | config | model / log |

Notes:

Installation

Prerequisites

Prepare

```bash
git clone https://github.com/hustvl/MIMDet.git
cd MIMDet
conda create -n mimdet python=3.9
conda activate mimdet
```
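
As noted in the Acknowledgement, MIMDet builds on PyTorch, detectron2, and timm, so these need to be installed into the new environment. A minimal, unpinned install sketch (pick the PyTorch build matching your CUDA version):

```bash
# PyTorch: choose the wheel matching your CUDA setup (see pytorch.org for exact commands)
pip install torch torchvision

# detectron2, built from source (prebuilt wheels also exist; see the detectron2 install docs)
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

# timm, used for the ViT backbone components
pip install timm
```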

Dataset

MIMDet is built upon detectron2, so please organize the dataset directory in detectron2's manner; we refer users to detectron2 for detailed instructions. The overall hierarchical structure is illustrated below:

```
MIMDet
├── datasets
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
│   ├── ...
├── ...
```
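
If COCO already lives elsewhere on disk, detectron2 can also resolve the dataset root from the `DETECTRON2_DATASETS` environment variable (it defaults to `./datasets`), so an export or a symlink avoids copying the data. The paths below are illustrative:

```bash
# option 1: point detectron2 at an existing dataset root (must contain coco/ as above)
export DETECTRON2_DATASETS=/path/to/datasets

# option 2: symlink an existing COCO copy into the repo
ln -s /path/to/coco datasets/coco
```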

Training

Download the full MAE-pretrained ViT-B and ViT-L checkpoints (i.e., including the decoder); see MAE repo issue #8.

```bash
# single-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> mae_checkpoint.path=<MAE_MODEL_PATH>

# multi-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --num-machines <MACHINE_NUM> --master_addr <MASTER_ADDR> --master_port <MASTER_PORT> mae_checkpoint.path=<MAE_MODEL_PATH>
```
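
As a concrete illustration, a single-machine ViT-B run on 8 GPUs could look like the following; the config path and checkpoint filename here are hypothetical placeholders, so substitute the actual file from this repo's `configs/` directory and your downloaded MAE checkpoint:

```bash
# hypothetical paths for illustration only
python lazyconfig_train_net.py \
    --config-file configs/mimdet/mimdet_vit_base_mask_rcnn_coco_3x.py \
    --num-gpus 8 \
    mae_checkpoint.path=checkpoints/mae_pretrain_vit_base_full.pth
```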

Inference

```bash
# inference
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH>

# inference with 100% sample ratio (please refer to our paper for detailed analysis)
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH> model.backbone.bottom_up.sample_ratio=1.0
```
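
The trailing `key=value` pairs are detectron2 LazyConfig overrides, so any config field can be changed from the command line in the same dotted style. A filled-in evaluation example (paths hypothetical):

```bash
# evaluate a trained checkpoint on COCO val with the test-time sample ratio raised to 100%
python lazyconfig_train_net.py \
    --config-file configs/mimdet/mimdet_vit_base_mask_rcnn_coco_3x.py \
    --num-gpus 8 --eval-only \
    train.init_checkpoint=output/model_final.pth \
    model.backbone.bottom_up.sample_ratio=1.0
```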

Acknowledgement

This project is based on MAE, Detectron2, and timm. Thanks for their wonderful work.

License

MIMDet is released under the MIT License.

Citation

If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil: :)

```bibtex
@article{MIMDet,
  title={Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection},
  author={Fang, Yuxin and Yang, Shusheng and Wang, Shijie and Ge, Yixiao and Shan, Ying and Wang, Xinggang},
  journal={arXiv preprint arXiv:2204.02964},
  year={2022}
}
```