# UniAdapter

[ICLR 2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by Haoyu Lu, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Wei Zhan, Masayoshi Tomizuka, Mingyu Ding.
<img src="UniAdapter.png" width="700">

## Getting Started
- Python 3, PyTorch >= 1.8.0, and torchvision >= 0.7.0 are required for the current codebase.
- To install the other dependencies, run `pip install -r requirements.txt`.
## Image-text Retrieval

- Download the COCO and Flickr30k datasets from the original websites, and set `image_root` in `configs/retrieval_{dataset}.yaml` accordingly.
- To parameter-efficiently finetune on MSCOCO/Flickr:
- To evaluate UniAdapter on MSCOCO/Flickr:
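For context on what "parameter-efficient" finetuning means here: only the lightweight adapter modules are updated, while the pretrained backbone stays frozen. A minimal PyTorch sketch of this freezing pattern (the `Adapter` bottleneck and the surrounding module names are illustrative assumptions, not the repo's actual code):

```python
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


class Block(nn.Module):
    """Hypothetical backbone layer with an adapter attached after it."""

    def __init__(self, dim=768):
        super().__init__()
        self.ffn = nn.Linear(dim, dim)  # stands in for a frozen backbone layer
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.ffn(x))


model = nn.Sequential(Block(), Block())

# Freeze everything, then keep gradients only for adapter parameters.
for name, p in model.named_parameters():
    p.requires_grad = "adapter" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```

With this setup the optimizer is built only over `filter(lambda p: p.requires_grad, model.parameters())`, so checkpoints and gradient state stay small relative to full finetuning.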
## Visual Question Answering

- Download the VQA v2 and Visual Genome datasets from the original websites, and set `vqa_root` and `vg_root` in `configs/vqa.yaml` accordingly.
- To parameter-efficiently finetune on VQA v2:
- To evaluate UniAdapter on VQA v2 (the result file needs to be submitted to the official evaluation server):
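For the evaluation step, the official VQA v2 server accepts a single JSON file containing one answer per question, as a list of `{"question_id", "answer"}` records. A minimal sketch of writing such a result file (the `predictions` dict is a placeholder standing in for the model's actual outputs):

```python
import json

# Placeholder predictions: question_id -> predicted answer string.
predictions = {262148000: "yes", 262148001: "2", 262148002: "blue"}

# The server expects a JSON list of {"question_id": int, "answer": str} records.
result = [{"question_id": qid, "answer": ans} for qid, ans in predictions.items()]

with open("vqa_result.json", "w") as f:
    json.dump(result, f)
```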
## Video-text Retrieval and VideoQA

- In progress.
## Acknowledgement

Our codebase builds on BLIP and timm. We thank the authors for their nicely organized code!