Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization [NeurIPS 2023]

Jameel Hassan *, Hanan Gani *, Noor Hussein, Uzair Khattak, Muzammal Naseer, Fahad Khan, Salman Khan

Paper | Poster | Slides | Video

Official implementation of the paper "Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization".

<hr>

Contents

  1. Updates
  2. Highlights
  3. Main Contributions
  4. Installation
  5. Data Preparation
  6. Run PromptAlign
  7. Results
  8. Citation
  9. Contact
  10. Acknowledgements
<hr>

Updates

Highlights

Figure: concept diagram of PromptAlign.

Abstract: The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains – distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves by 1.82% compared to the existing state-of-the-art.
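The core idea in the abstract, matching a single test sample's feature statistics to precomputed source statistics, can be sketched as a simple loss function. The sketch below is illustrative only: the function and variable names are our own, and we use NumPy to show just the loss value; in PromptAlign this term is minimized with respect to the learnable multi-modal prompts (see the paper and RUN.md for the actual implementation).

```python
import numpy as np

def distribution_alignment_loss(test_feats, source_mean, source_var):
    """Align per-dimension statistics of one test sample with source statistics.

    test_feats:  (num_views, dim) features from augmented views of a test image
    source_mean: (dim,) per-dimension mean precomputed on source data
    source_var:  (dim,) per-dimension variance precomputed on source data
    """
    mu = test_feats.mean(axis=0)   # test-sample mean
    var = test_feats.var(axis=0)   # test-sample variance
    # L1 distance between test-sample and source statistics; driving this
    # to zero pulls the test features toward the source distribution.
    return np.abs(mu - source_mean).mean() + np.abs(var - source_var).mean()
```

In the paper this alignment term is combined with TPT-style entropy minimization and optimized at test time over the prompts for each sample.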

<hr>

Main Contributions

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data Preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Run PromptAlign

Please refer to RUN.md for detailed instructions on training, evaluating, and reproducing the results using our pre-trained models.

Results

Domain Generalization

| Method | IN-V2 | IN-Sketch | IN-A | IN-R | OOD Average |
|---|---|---|---|---|---|
| CLIP | 60.86 | 46.06 | 47.87 | 73.98 | 57.20 |
| CoOp | 64.20 | 47.99 | 49.71 | 75.21 | 59.28 |
| CoCoOp | 64.07 | 48.75 | 50.63 | 76.18 | 59.91 |
| MaPLe | 64.07 | 49.15 | 50.90 | 76.98 | 60.28 |
| TPT + CLIP | 64.35 | 47.94 | 54.77 | 77.06 | 60.81 |
| TPT + CoOp | 66.83 | 49.29 | 57.95 | 77.27 | 62.84 |
| TPT + CoCoOp | 64.85 | 48.27 | 58.47 | 78.65 | 62.61 |
| TPT + MaPLe | 64.87 | 48.16 | 58.08 | 78.12 | 62.31 |
| PromptAlign | 65.29 | 50.23 | 59.37 | 79.33 | 63.55 |

Citation

If you use our work, please consider citing:

```bibtex
@inproceedings{samadh2023align,
  title={Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization},
  author={Samadh, Jameel Hassan Abdul and Gani, Hanan and Hussein, Noor Hazim and Khattak, Muhammad Uzair and Naseer, Muzammal and Khan, Fahad and Khan, Salman},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}
```

Contact

Should you have any questions, please create an issue in this repository or contact us at jameel.hassan@mbzuai.ac.ae or hanan.ghani@mbzuai.ac.ae.

Acknowledgements

We thank the authors of MaPLe, TPT, CoOp, and CoCoOp for their open-source implementations and instructions on data preparation.