# MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples
Source code for the TOMM 2024 paper "MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples" [arXiv preprint].
## Environment

The required dependencies are listed in `requirements.txt`.
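A typical way to set up the environment from `requirements.txt` (the environment name `mmict` and the Python version are assumptions, not specified by the repository):

```shell
# Create and activate an isolated environment (name "mmict" is an assumption)
conda create -n mmict python=3.8 -y
conda activate mmict

# Install the dependencies listed in the repository
pip install -r requirements.txt
```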
## Dataset Preparation

We train and test our model on:
## How to run

To train the model:

```shell
bash run.sh
```
## Acknowledgments

We thank the developers of LAVIS, BLIP-2, and CLIP for their public code releases.