# Multimodal Prompting with Missing Modalities for Visual Recognition (CVPR 2023)

Official PyTorch implementation of the CVPR 2023 paper "Multimodal Prompting with Missing Modalities for Visual Recognition".
You can visit our project website here.
## Introduction
In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) missing modalities that occur during training or testing in real-world situations; and 2) limited computation resources that make it infeasible to finetune heavy transformer models. To this end, we propose to utilize prompt learning to mitigate both challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% learnable parameters compared to training the entire model.
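As a rough, hypothetical illustration of the idea (not the repository's actual implementation; all names and shapes below are made up for the sketch), modality-missing-aware prompting can be viewed as keeping one small bank of learnable prompt vectors per missing-modality case, selecting the bank that matches the current input, and prepending it to the frozen transformer's token sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, PROMPT_LEN = 768, 16  # hypothetical sizes for illustration

# One learnable prompt bank per missing-modality case; only these
# (and a classifier head) would be trained, the backbone stays frozen.
prompt_bank = {
    "complete": rng.standard_normal((PROMPT_LEN, EMBED_DIM)) * 0.02,
    "missing_text": rng.standard_normal((PROMPT_LEN, EMBED_DIM)) * 0.02,
    "missing_image": rng.standard_normal((PROMPT_LEN, EMBED_DIM)) * 0.02,
}

def prepend_prompts(tokens: np.ndarray, case: str) -> np.ndarray:
    """Prepend the prompts for the given missing-modality case to an
    input token sequence of shape (seq_len, embed_dim)."""
    return np.concatenate([prompt_bank[case], tokens], axis=0)

tokens = rng.standard_normal((40, EMBED_DIM))  # stand-in multimodal embeddings
out = prepend_prompts(tokens, "missing_text")
print(out.shape)  # (56, 768)
```

The transformer then processes the prompt-extended sequence as usual; because only the prompt banks are optimized, the trainable parameter count stays tiny relative to the full model.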
<div align="center"> <img src="fig/model.jpeg"/> </div>

## Usage
### Environment
#### Prerequisites

- Python = 3.7.13
- PyTorch = 1.10.0
- CUDA = 11.3
#### Other requirements

```bash
pip install -r requirements.txt
```
### Prepare Dataset
We use three vision-and-language datasets: MM-IMDb, UPMC Food-101, and Hateful Memes. Please download the datasets yourself. We use pyarrow to serialize the datasets; the conversion code is located in `vilt/utils/write_*.py`. Please see DATA.md for how to organize the datasets; otherwise you may need to revise the `write_*.py` files to match your dataset paths and files. Run the following script to create the pyarrow binary file:

```bash
python make_arrow.py --dataset [DATASET] --root [YOUR_DATASET_ROOT]
```
### Evaluation
```bash
python run.py with data_root=<ARROW_ROOT> \
    num_gpus=<NUM_GPUS> \
    num_nodes=<NUM_NODES> \
    per_gpu_batchsize=<BS_FITS_YOUR_GPU> \
    <task_finetune_mmimdb or task_finetune_food101 or task_finetune_hatememes> \
    load_path=<MODEL_PATH> \
    exp_name=<EXP_NAME> \
    prompt_type=<PROMPT_TYPE> \
    test_ratio=<TEST_RATIO> \
    test_type=<TEST_TYPE> \
    test_only=True
```
### Train

1. Download the pre-trained ViLT model weights from here.
2. Start to train:
```bash
python run.py with data_root=<ARROW_ROOT> \
    num_gpus=<NUM_GPUS> \
    num_nodes=<NUM_NODES> \
    per_gpu_batchsize=<BS_FITS_YOUR_GPU> \
    <task_finetune_mmimdb or task_finetune_food101 or task_finetune_hatememes> \
    load_path=<PRETRAINED_MODEL_PATH> \
    exp_name=<EXP_NAME>
```
## Citation
If you find this work useful for your research, please cite:
```bibtex
@inproceedings{lee2023cvpr,
  title     = {Multimodal Prompting with Missing Modalities for Visual Recognition},
  author    = {Yi-Lun Lee and Yi-Hsuan Tsai and Wei-Chen Chiu and Chen-Yu Lee},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}
```
## Acknowledgements
This code is based on ViLT.