mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)

https://arxiv.org/abs/2302.00402

Introduction

We present mPLUG-2, a new unified paradigm with a modularized design for multi-modal pretraining, which benefits from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms that rely solely on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network that shares common universal modules for modality collaboration and disentangles modality-specific modules to deal with modality entanglement. This design makes it flexible to select different modules for different understanding and generation tasks across all modalities, including text, image, and video. mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal image-text and video-text understanding and generation, and uni-modal text-only, image-only, and video-only understanding.

<div align="center"> <img src="assets/mplug2_overview.jpg" width="80%"> </div> <div align="center"> <img src="assets/framework.jpg" width="80%"> </div>
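To make the modularized design concrete, here is a minimal sketch, assuming placeholder encoders, dimensions, and vocabulary size; none of the class or attribute names below come from the actual mPLUG-2 codebase:

<pre>
# Minimal sketch of the modularized design, NOT the actual mPLUG-2 code:
# every class, dimension, and vocabulary size here is an assumption made
# for illustration. Modality-specific modules stay disentangled, while a
# shared universal module enables modality collaboration.
import torch
import torch.nn as nn

class UniversalLayers(nn.Module):
    """Transformer layers shared by all modalities."""
    def __init__(self, dim=768, num_layers=2, num_heads=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):
        return self.encoder(x)

class MPlug2Sketch(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        # Disentangled modality modules (placeholders for the vision
        # backbone and text encoder listed in the table below).
        self.visual_proj = nn.Linear(1024, dim)     # stands in for ViT features
        self.text_embed = nn.Embedding(30522, dim)  # stands in for a text encoder
        self.universal = UniversalLayers(dim)       # shared across modalities

    def forward(self, image_feats=None, text_ids=None):
        # Task-dependent module selection: uni-modal tasks route one branch,
        # multi-modal tasks route both through the shared universal layers.
        out = {}
        if image_feats is not None:
            out["image"] = self.universal(self.visual_proj(image_feats))
        if text_ids is not None:
            out["text"] = self.universal(self.text_embed(text_ids))
        return out

# Example: a text-only task selects only the text branch plus shared layers.
model = MPlug2Sketch()
tokens = torch.randint(0, 30522, (1, 16))
text_out = model(text_ids=tokens)["text"]  # shape: (1, 16, 768)
</pre>

A uni-modal task thus selects one modality branch plus the shared layers, while a multi-modal task routes several branches through them.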

News

Models and Datasets

Pre-trained Models

| Model | Visual Backbone | Text Enc Layers | Universal Layers | Fusion Layers | Text Dec Layers | #params | Download |
| ------- | --------------- | --------------- | ---------------- | ------------- | --------------- | ------- | -------- |
| mPLUG-2 | ViT-L-14 | 24 | 2 | 6 | 12 | 0.9B | mPLUG-2 |

Pre-train Datasets

|       | COCO | VG   | SBU  | CC3M | CC13M | Webvid2M | WikiCorpus |
| ----- | ---- | ---- | ---- | ---- | ----- | -------- | ---------- |
| image | 113K | 100K | 860K | 3M   | 10M   | 2M       | 20G        |
| text  | 567K | 769K | 860K | 3M   | 10M   | 2M       | 350G       |

Downstream Models

VideoQA

| Model | Dataset | Accuracy | Download |
| ------- | --------- | -------- | -------- |
| mPLUG-2 | MSRVTT-QA | 48.0 | mPLUG-2 |
| mPLUG-2 | MSVD-QA | 58.1 | mPLUG-2 |

Video Caption

| Model | Dataset | CIDEr | Download |
| ------- | ------ | ----- | -------- |
| mPLUG-2 | MSRVTT | 80.3 | mPLUG-2 |
| mPLUG-2 | MSVD | 165.8 | mPLUG-2 |

Requirements

<pre>pip install -r requirements.txt</pre>

Pre-training

Coming soon.

Fine-tuning

Video Question Answering

  1. Download the MSRVTT-QA / MSVD-QA / TGIF datasets from their original websites.
  2. In configs_video/VideoQA_msrvtt_large.yaml, set the paths to the annotation JSON files and the video directory (see the sketch after these steps).
  3. To perform evaluation, run:
<pre>sh scripts/inference_videoqa.sh</pre>
  4. To perform fine-tuning, run:
<pre>sh scripts/run_videoqa.sh</pre>
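The exact option names in the released config may differ; the snippet below is only a hedged sketch of the kind of path entries step 2 refers to, with illustrative keys and file locations:

<pre>
# Illustrative edits to configs_video/VideoQA_msrvtt_large.yaml; the key
# names and paths are assumptions, so verify them against the shipped file.
train_file: ['data/msrvtt_qa/train.json']   # annotation JSON files
val_file: ['data/msrvtt_qa/val.json']
test_file: ['data/msrvtt_qa/test.json']
video_root: 'data/msrvtt/videos'            # directory holding the raw videos
</pre>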

Video Captioning

  1. Download the MSRVTT / MSVD datasets from their original websites.
  2. In configs_video/VideoCaption_msrvtt_large.yaml, set the paths to the annotation JSON files and the video directory (see the sketch after these steps).
  3. To perform evaluation, run:
<pre>sh scripts/inference_videocaption.sh</pre>
  4. To perform fine-tuning, run:
<pre>sh scripts/run_videocaption.sh</pre>
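As with VideoQA, the captioning config keys below are only an assumed sketch of what step 2 asks you to set, not the verified contents of the shipped file:

<pre>
# Illustrative edits to configs_video/VideoCaption_msrvtt_large.yaml; the
# key names and paths are assumptions, so verify them against the shipped file.
train_file: ['data/msrvtt_caption/train.json']
val_file: ['data/msrvtt_caption/val.json']
test_file: ['data/msrvtt_caption/test.json']
video_root: 'data/msrvtt/videos'
</pre>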

Citation

If you find this work useful, please consider giving this repository a star and citing our paper as follows:

<pre>
@article{Xu2023mPLUG2AM,
  title={mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video},
  author={Haiyang Xu and Qinghao Ye and Ming Yan and Yaya Shi and Jiabo Ye and Yuanhong Xu and Chenliang Li and Bin Bi and Qi Qian and Wei Wang and Guohai Xu and Ji Zhang and Songfang Huang and Fei Huang and Jingren Zhou},
  journal={ArXiv},
  year={2023},
  volume={abs/2302.00402}
}
</pre>

Acknowledgement

The implementation of mPLUG-2 relies on resources from ALBEF, BLIP, and timm. We thank the original authors for open-sourcing their work.