
UniAdapter

[ICLR2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by Haoyu Lu, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Wei Zhan, Masayoshi Tomizuka, Mingyu Ding.

<img src="UniAdapter.png" width="700">
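UniAdapter builds on the bottleneck-adapter paradigm for parameter-efficient transfer: small trainable modules are inserted into a frozen backbone. For readers new to adapters, here is a minimal sketch of a generic bottleneck adapter; it is an illustration of the general idea, not the UniAdapter architecture, and all dimensions and the GELU choice are assumptions.

```python
import numpy as np

class BottleneckAdapter:
    """Generic down-project -> nonlinearity -> up-project adapter with a
    residual connection. Illustrative only; not the official UniAdapter code."""

    def __init__(self, d_model: int, bottleneck: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # The two projections are the only trainable parameters.
        self.w_down = rng.normal(0.0, 0.02, (d_model, bottleneck))
        # Zero-initializing the up-projection makes the adapter an identity
        # map at the start of fine-tuning, so the frozen backbone is undisturbed.
        self.w_up = np.zeros((bottleneck, d_model))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        h = x @ self.w_down
        # tanh-approximate GELU nonlinearity
        h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (h + 0.044715 * h**3)))
        return x + h @ self.w_up  # residual connection around the adapter

adapter = BottleneckAdapter(d_model=768, bottleneck=64)
tokens = np.ones((4, 768))
out = adapter(tokens)
# With the zero-initialized up-projection, the adapter starts as the identity.
assert np.allclose(out, tokens)
```

Only the adapter weights would be updated during fine-tuning, which is what makes the transfer parameter-efficient.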

Getting Started

Image-text Retrieval

To fine-tune on MSCOCO or Flickr30K:

<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py --config ./configs/retrieval_{coco, flickr}.yaml --output_dir output/{coco, flickr}</pre>

To evaluate a fine-tuned checkpoint:

<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py --config ./configs/retrieval_{coco, flickr}.yaml --output_dir output/{coco, flickr} --evaluate</pre>
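The `{coco, flickr}` notation denotes a choice of dataset. For example, for MSCOCO the command expands as sketched below (the command is echoed rather than launched here, since the actual run needs 8 GPUs and the prepared dataset):

```shell
# Hypothetical expansion of the {coco, flickr} placeholder for MSCOCO.
CONFIG=./configs/retrieval_coco.yaml
OUTPUT_DIR=output/coco
echo python -m torch.distributed.run --nproc_per_node=8 \
  train_retrieval.py --config "$CONFIG" --output_dir "$OUTPUT_DIR"
```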

Visual Question Answering

To fine-tune on VQA v2.0:

<pre>python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --config ./configs/vqa.yaml --output_dir $static_dir</pre>

To evaluate a fine-tuned checkpoint:

<pre>python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --config ./configs/vqa.yaml --output_dir $static_dir --evaluate</pre>

Video-text Retrieval and VideoQA

Code for video-text retrieval and VideoQA is still in progress.

Acknowledgement

Our codebase is built upon BLIP and timm. We thank the authors for their nicely organized code!