MViTv2: Improved Multiscale Vision Transformers for Classification and Detection

Official PyTorch implementation of MViTv2, from the following paper:

MViTv2: Improved Multiscale Vision Transformers for Classification and Detection. CVPR 2022.
Yanghao Li*, Chao-Yuan Wu*, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer*


MViT is a multiscale transformer that serves as a general vision backbone for different visual recognition tasks:

Image Classification: Included in this repo.

Object Detection and Instance Segmentation: See MViTv2 in Detectron2.

Video Action Recognition and Detection: See MViTv2 in PySlowFast.

Results and Pre-trained Models

ImageNet-1K trained models

| name | resolution | acc@1 | #params | FLOPs | 1k model |
| :--- | :---: | :---: | :---: | :---: | :---: |
| MViTv2-T | 224x224 | 82.3 | 24M | 4.7G | model |
| MViTv2-S | 224x224 | 83.6 | 35M | 7.0G | model |
| MViTv2-B | 224x224 | 84.4 | 52M | 10.2G | model |
| MViTv2-L | 224x224 | 85.3 | 218M | 42.1G | model |

ImageNet-21K trained models

| name | resolution | acc@1 | #params | FLOPs | 21k model | 1k model |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MViTv2-B | 224x224 | - | 52M | 10.2G | model | - |
| MViTv2-L | 224x224 | 87.5 | 218M | 42.1G | model | - |
| MViTv2-H | 224x224 | 88.0 | 667M | 120.6G | model | - |
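When choosing a model size, the accuracy/compute trade-off in the tables above can be encoded directly. A minimal sketch: `ZOO_1K` transcribes the ImageNet-1K table, and `smallest_model_for` is an illustrative helper (not part of the MViTv2 codebase) that picks the cheapest model meeting a top-1 accuracy target.

```python
from typing import Optional

# Transcribed from the ImageNet-1K table above.
ZOO_1K = {
    "MViTv2-T": {"acc1": 82.3, "params_m": 24, "gflops": 4.7},
    "MViTv2-S": {"acc1": 83.6, "params_m": 35, "gflops": 7.0},
    "MViTv2-B": {"acc1": 84.4, "params_m": 52, "gflops": 10.2},
    "MViTv2-L": {"acc1": 85.3, "params_m": 218, "gflops": 42.1},
}

def smallest_model_for(target_acc1: float) -> Optional[str]:
    """Return the lowest-FLOP 1K model whose top-1 accuracy meets the target,
    or None if no listed model reaches it."""
    hits = [(v["gflops"], name) for name, v in ZOO_1K.items() if v["acc1"] >= target_acc1]
    return min(hits)[1] if hits else None

# smallest_model_for(83.0) -> "MViTv2-S" (7.0 GFLOPs)
```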

Installation

Please check INSTALL.md for installation instructions.

Training

A standard MViTv2 model can be trained from scratch with:

python tools/main.py \
  --cfg configs/MViTv2_T.yaml \
  DATA.PATH_TO_DATA_DIR path_to_your_dataset \
  NUM_GPUS 8 \
  TRAIN.BATCH_SIZE 256
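If you override `TRAIN.BATCH_SIZE` away from the value a config was tuned for, the learning rate usually needs rescaling. A minimal sketch of the linear scaling rule, a common convention in recipes of this kind; the base values in the example are illustrative, not taken from this repo's configs:

```python
def linear_scaled_lr(base_lr: float, base_batch_size: int, batch_size: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion to the
    global batch size. `base_lr`/`base_batch_size` are whatever the chosen
    config was tuned for (illustrative numbers below)."""
    return base_lr * batch_size / base_batch_size

# e.g. a recipe tuned at batch size 256 with lr 1e-3, run at batch size 128:
halved = linear_scaled_lr(1e-3, 256, 128)  # 5e-4
```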

Evaluation

To evaluate a pretrained MViT model:

python tools/main.py \
  --cfg configs/test/MViTv2_T_test.yaml \
  DATA.PATH_TO_DATA_DIR path_to_your_dataset \
  NUM_GPUS 8 \
  TEST.BATCH_SIZE 256
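To evaluate a specific downloaded checkpoint rather than the one the test config points at, PySlowFast-style configs generally expose a `TEST.CHECKPOINT_FILE_PATH` override. Assuming this repo follows that convention (verify against its config defaults), a sketch with a placeholder checkpoint path:

```shell
# Sketch: evaluate a downloaded checkpoint. TEST.CHECKPOINT_FILE_PATH is the
# standard PySlowFast override and is assumed, not confirmed by this README.
python tools/main.py \
  --cfg configs/test/MViTv2_T_test.yaml \
  DATA.PATH_TO_DATA_DIR path_to_your_dataset \
  TEST.CHECKPOINT_FILE_PATH path_to_downloaded_checkpoint \
  NUM_GPUS 8 \
  TEST.BATCH_SIZE 256
```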

Acknowledgement

This repository is built on top of PySlowFast.

License

MViT is released under the Apache 2.0 license.

Citation

If you find this repository helpful, please consider citing:

@inproceedings{li2021improved,
  title={MViTv2: Improved multiscale vision transformers for classification and detection},
  author={Li, Yanghao and Wu, Chao-Yuan and Fan, Haoqi and Mangalam, Karttikeya and Xiong, Bo and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={CVPR},
  year={2022}
}

@inproceedings{fan2021multiscale,
  title={Multiscale vision transformers},
  author={Fan, Haoqi and Xiong, Bo and Mangalam, Karttikeya and Li, Yanghao and Yan, Zhicheng and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={ICCV},
  year={2021}
}