# Awesome-Prompt-Adapter-Learning-for-VLMs

A curated list of prompt/adapter learning methods for vision-language models (e.g., CLIP).

## Table of Contents

- Keywords
- Surveys
- General Prompt Learning
- Another form of Prompt
- General Test-time Prompt Learning
- General Adapter Learning
- Video Understanding
- Continual Learning
- Others

💡Tips:

## Keywords

- Use text-based prompts/adapters.
- Use image-based prompts/adapters.
- Use text- and image-based prompts/adapters.

## Surveys

## General Prompt Learning
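Most methods in this section keep CLIP frozen and optimize only a small set of continuous prompt vectors that are prepended to the class-name token embeddings. Below is a minimal, library-agnostic sketch of that idea (in the spirit of CoOp); `classname_embeds`, `text_encoder`, and all dimensions are placeholders rather than CLIP's actual interfaces.

```python
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    """Learnable context vectors prepended to class-name token embeddings
    (the CoOp-style recipe). The text encoder is a stand-in for CLIP's frozen
    text tower; only the context vectors receive gradients."""

    def __init__(self, classname_embeds, text_encoder, n_ctx=16):
        super().__init__()
        embed_dim = classname_embeds.size(-1)
        # The only trainable parameters: M shared context vectors.
        self.ctx = nn.Parameter(torch.empty(n_ctx, embed_dim).normal_(std=0.02))
        self.register_buffer("classname_embeds", classname_embeds)  # (n_cls, n_tok, dim)
        self.text_encoder = text_encoder.eval()
        for p in self.text_encoder.parameters():
            p.requires_grad_(False)

    def class_features(self):
        n_cls = self.classname_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)          # shared across classes
        prompts = torch.cat([ctx, self.classname_embeds], dim=1)   # "[V]_1 ... [V]_M [CLASS]"
        return self.text_encoder(prompts)                          # (n_cls, dim)

    def forward(self, image_feats, logit_scale=100.0):
        text_feats = self.class_features()
        image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
        return logit_scale * image_feats @ text_feats.t()
```

During adaptation only `self.ctx` goes into the optimizer; the base-to-novel protocol reported below tunes on base classes and then evaluates on both the base classes and held-out novel classes.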

### Experimental Comparison

**Base-to-Novel Generalization** (ViT-B/16 CLIP)

| Methods | Pub | Base | Novel | HM (main) | Code |
|---|---|---|---|---|---|
| CLIP | ICML 21 | 69.34 | 74.22 | 71.70 | Link |
| CoOp | IJCV 22 | 82.69 | 63.22 | 71.66 | Link |
| CoCoOp | CVPR 22 | 80.47 | 71.69 | 75.83 | Link |
| ProDA | CVPR 22 | 81.56 | 72.30 | 76.65 | Link |
| KgCoOp | CVPR 23 | 80.73 | 73.60 | 77.00 | Link |
| RPO | ICCV 23 | 81.13 | 75.00 | 77.78 | Link |
| MaPLe | CVPR 23 | 82.28 | 75.14 | 78.55 | Link |
| DePT | CVPR 24 | 83.62 | 75.04 | 79.10 | Link |
| TCP | CVPR 24 | 84.13 | 75.36 | 79.51 | Link |
| MMA | CVPR 24 | 83.20 | 76.80 | 79.87 | Link |
| PromptSRC | ICCV 23 | 84.26 | 76.10 | 79.97 | Link |
| HPT | AAAI 24 | 84.32 | 76.86 | 80.23 | Link |
| CoPrompt | ICLR 24 | 84.00 | 77.23 | 80.48 | Link |
| CasPL | ECCV 24 | 86.11 | 79.54 | 82.69 | Link |
| PromptKD | CVPR 24 | 86.96 | 80.73 | 83.73 | Link |

Table 1. Average results on 11 datasets. (Only works with open-source code are listed.)
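The HM column is the harmonic mean of the base and novel accuracies, the headline metric in this line of work. A quick sanity check against the CLIP row:

```python
# Harmonic mean of base and novel accuracy (the "HM" column in Table 1).
def harmonic_mean(base, novel):
    return 2 * base * novel / (base + novel)

# CLIP row of Table 1: Base = 69.34, Novel = 74.22
print(f"{harmonic_mean(69.34, 74.22):.2f}")  # 71.70
```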

### Paper List

#### 2022

#### 2023

#### 2024

## Another form of Prompt

### Paper List

## General Test-time Prompt Learning

### Experimental Comparison

| Methods | Pub | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Avg. (main) | Code |
|---|---|---|---|---|---|---|---|---|
| CoOp | IJCV 22 | 71.51 | 49.71 | 64.20 | 75.21 | 47.99 | 59.28 | Link |
| CoCoOp | CVPR 22 | 71.02 | 50.63 | 64.07 | 76.18 | 48.75 | 59.91 | Link |
| TPT | NeurIPS 22 | 68.98 | 54.77 | 63.45 | 77.06 | 47.94 | 60.81 | Link |
| TPT+CoOp | NeurIPS 22 | 73.61 | 57.95 | 66.83 | 77.27 | 49.29 | 62.84 | Link |
| PromptAlign | NeurIPS 23 | --- | 59.37 | 65.29 | 79.33 | 50.23 | 63.55 | Link |
| TPS+CoOp | Arxiv 24 | 73.73 | 60.49 | 66.84 | 77.44 | 49.08 | 65.52 | Link |
| RLCF | ICLR 24 | 73.23 | 65.45 | 69.77 | 83.35 | 54.74 | 68.33 | Link |
| RLCF+CoOp | ICLR 24 | 76.05 | 69.74 | 70.62 | 84.51 | 56.49 | 70.34 | Link |

Table 2. Test-time prompt tuning methods on OOD data.
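The baseline recipe behind several of these rows (introduced by TPT) adapts the prompt separately for each test image: build a batch of random augmentations, keep only the most confident views, and minimize the entropy of their averaged prediction with respect to the prompt parameters alone. A rough sketch of that loop follows; `model`, the augmentation pipeline, and all hyperparameters are placeholders rather than any particular paper's implementation.

```python
import torch

def predictive_entropy(probs, eps=1e-8):
    # Shannon entropy of each row of a (N, num_classes) probability matrix.
    return -(probs * (probs + eps).log()).sum(dim=-1)

def tpt_step(views, model, prompt_params, lr=5e-3, keep_ratio=0.1):
    """One TPT-style adaptation step on a single test image.

    views:         augmented views of the image, shape (N, 3, H, W)
    model:         callable mapping images -> class logits (e.g. a prompted CLIP wrapper)
    prompt_params: the only tensors that get updated; everything else stays frozen
    """
    optimizer = torch.optim.AdamW(prompt_params, lr=lr)
    logits = model(views)                          # (N, num_classes)
    probs = logits.softmax(dim=-1)

    # Confidence selection: keep the views with the lowest predictive entropy.
    k = max(1, int(keep_ratio * probs.size(0)))
    idx = predictive_entropy(probs).topk(k, largest=False).indices

    # Minimize the marginal entropy of the averaged prediction over selected views.
    loss = predictive_entropy(probs[idx].mean(dim=0, keepdim=True)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        return model(views[:1]).softmax(dim=-1)    # prediction after adaptation
```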

### Paper List

## General Adapter Learning
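Adapter methods keep the CLIP backbone frozen and train a small bottleneck module on top of the extracted features, blending its output back with the original feature through a residual ratio (the recipe popularized by CLIP-Adapter). A minimal sketch; the feature dimension, reduction factor, and blending ratio below are illustrative defaults, not any specific paper's values.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck MLP blended with the frozen CLIP feature via a residual ratio."""

    def __init__(self, dim=512, reduction=4, ratio=0.2):
        super().__init__()
        self.ratio = ratio
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat):
        # Blend adapted features with the original (frozen) CLIP features,
        # then re-normalize so cosine similarity with text features still applies.
        out = self.ratio * self.fc(feat) + (1 - self.ratio) * feat
        return out / out.norm(dim=-1, keepdim=True)

# Usage (illustrative): adapted = ResidualAdapter()(image_features)
#                       logits  = 100.0 * adapted @ text_features.t()
```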

### Paper List

## Video Understanding

### Prompt Learning

## Continual Learning

### Prompt Learning

### Adapter Learning

<!---
- `RAIL` **Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models.** Arxiv 2024. [[Paper](https://arxiv.org/pdf/2406.18868)]
- `SEMA` **Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning.** Arxiv 2024. [[Paper](https://arxiv.org/pdf/2403.18886)]
-->

## Others