
Prompt-based Distribution Alignment [AAAI 2024]


Official implementation of the paper "Prompt-based Distribution Alignment for Unsupervised Domain Adaptation".

Authors: Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang, Badong Chen.

<hr />

Highlights

[Main figure: overview of the proposed PDA framework]

<p align="justify"> Abstract: Recently, despite the unprecedented success of large pre-trained visual-language models (VLMs) on a wide range of downstream tasks, the real-world unsupervised domain adaptation (UDA) problem is still not well explored. Therefore, in this paper, we first experimentally demonstrate that the unsupervised-trained VLMs can significantly reduce the distribution discrepancy between source and target domains, thereby improving the performance of UDA. However, a major challenge for directly deploying such models on downstream UDA tasks is prompt engineering, which requires aligning the domain knowledge of source and target domains, since the performance of UDA is severely influenced by a good domain-invariant representation. We further propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning. Specifically, PDA employs a two-branch prompt-tuning paradigm, namely base branch and alignment branch. The base branch focuses on integrating class-related representation into prompts, ensuring discrimination among different classes. To further minimize domain discrepancy, for the alignment branch, we construct feature banks for both the source and target domains and propose image-guided feature tuning (IFT) to make the input attend to feature banks, which effectively integrates self-enhanced and cross-domain features into the model. In this way, these two branches can be mutually promoted to enhance the adaptation of VLMs for UDA. We conduct extensive experiments on three benchmarks to demonstrate that our proposed PDA achieves state-of-the-art performance. </p>

<details> <summary>Main Contributions</summary>
  1. We first experimentally verify the effectiveness of VLMs on downstream UDA tasks. Based on this finding, we propose a prompt-based distribution alignment (PDA) method to tune prompts for the target domain.
  2. The proposed PDA includes two training branches. First, the base branch ensures discrimination among different classes. Second, the alignment branch obtains domain-invariant information via image-guided feature tuning.
  3. Extensive experiments demonstrate the effectiveness of the proposed PDA, which achieves state-of-the-art performance on Office-Home, Office-31, and VisDA-2017.
</details>
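
To make the alignment branch more concrete, below is a minimal PyTorch sketch of image-guided feature tuning (IFT) as described above: the image feature attends to a feature bank built from source- and target-domain features, and the attended entries are fused back into the image representation. The module name, projection layers, and residual fusion rule are illustrative assumptions, not the released implementation; please refer to the code in this repository for the exact details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageGuidedFeatureTuning(nn.Module):
    """Illustrative sketch of the alignment branch's image-guided feature tuning (IFT).

    The image feature queries a feature bank built from source- and target-domain
    features; the attended bank entries are fused back into the image feature so that
    self-enhanced and cross-domain information is injected. Names and the fusion rule
    here are assumptions, not the released code.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, image_feat: torch.Tensor, feature_bank: torch.Tensor) -> torch.Tensor:
        # image_feat:   (B, D) image features for the current batch
        # feature_bank: (N, D) stored source + target domain features
        q = self.q_proj(image_feat)          # (B, D)
        k = self.k_proj(feature_bank)        # (N, D)
        v = self.v_proj(feature_bank)        # (N, D)
        attn = (q @ k.t()) * self.scale      # (B, N) image-to-bank attention
        attn = attn.softmax(dim=-1)
        cross = attn @ v                     # (B, D) cross-domain context
        # Residual fusion followed by re-normalization (assumed design choice)
        return F.normalize(image_feat + cross, dim=-1)


# Example usage with random tensors standing in for CLIP features
if __name__ == "__main__":
    ift = ImageGuidedFeatureTuning(dim=512)
    images = torch.randn(8, 512)    # batch of image features
    bank = torch.randn(130, 512)    # source + target feature bank
    out = ift(images, bank)         # (8, 512) tuned image features
```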

Results

PDA in comparison with existing prompt tuning methods

The results below report accuracy (%) on three UDA datasets with a ViT-B/16 backbone. Our PDA method adopts the multi-modal prompt tuning paradigm.

| Method     | Office-Home Acc. | Office-31 Acc. | VisDA-2017 Acc. |
|------------|------------------|----------------|-----------------|
| CLIP       | 82.1             | 77.5           | 88.9            |
| CoOp       | 83.9             | 89.4           | 82.7            |
| CoCoOp     | 84.1             | 88.9           | 84.2            |
| VP         | 81.7             | 77.4           | 88.7            |
| VPT-deep   | 83.9             | 89.4           | 86.2            |
| MaPLe      | 84.2             | 89.6           | 83.5            |
| DAPL       | 84.4             | 81.2           | 89.5            |
| PDA (Ours) | 85.7             | 91.2           | 89.7            |

Installation

This codebase is tested on Ubuntu 18.04 LTS with Python 3.7. Follow the steps below to create the conda environment and install all package requirements and dependencies.

```bash
# Create a conda environment
conda create -y -n pda python=3.7

# Activate the environment
conda activate pda

# Install torch (requires version >= 1.8.1) and torchvision
# Please refer to https://pytorch.org/get-started/previous-versions/ if your cuda version is different
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
# Instructions borrowed from https://github.com/KaiyangZhou/Dassl.pytorch#installation

# Clone the Dassl.pytorch repo
git clone https://github.com/KaiyangZhou/Dassl.pytorch.git
cd Dassl.pytorch

# Install dependencies
pip install -r requirements.txt

# Install this library (no need to re-build if the source code is modified)
python setup.py develop
cd ..

# Clone the PDA code base
git clone https://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.git
cd Prompt-based-Distribution-Alignment

# Install requirements
pip install -r requirements.txt
```

Data Preparation

Please follow the instructions to prepare all datasets. The benchmarks used are:

- Office-Home
- Office-31
- VisDA-2017

Training and Evaluation

Please follow the instructions below for training, evaluation, and reproducing the results. First, modify the data directory paths in the scripts to point to your datasets.

Training

```bash
# Example: train on the Office-Home dataset with art as the source domain and clipart as the target domain (a-c)
bash scripts/pda/main_pda.sh officehome b32_ep10_officehome PDA ViT-B/16 2 a-c 0
```

Evaluation

```bash
# Example: evaluate on the Office-Home dataset with art as the source domain and clipart as the target domain (a-c)
bash scripts/pda/eval_pda.sh officehome b32_ep10_officehome PDA ViT-B/16 2 a-c 0
```

Details for each method are provided in its corresponding folder under scripts/.
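
As a convenience, the sketch below sweeps several Office-Home transfer tasks in one go. It assumes the positional argument layout matches the training example above (the second-to-last argument being the source-target pair and the last the GPU id) and that the remaining domain abbreviations (e.g., p for Product, r for Real World) follow the same convention as a and c; please check the scripts for the exact options.

```bash
# Hypothetical sweep over several Office-Home transfer tasks.
# Argument layout and domain abbreviations are assumed from the examples above.
for pair in a-c a-p a-r c-a; do
    bash scripts/pda/main_pda.sh officehome b32_ep10_officehome PDA ViT-B/16 2 ${pair} 0
done
```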

Supported Methods

Supported methods in this codebase are as follows:

| Method       | Paper     | Code | Script      |
|--------------|-----------|------|-------------|
| CoOp         | IJCV 2022 | link | link        |
| CoCoOp       | CVPR 2022 | link | link        |
| VP           | -         | link | -           |
| VPT          | ECCV 2022 | link | link        |
| IVLP & MaPLe | CVPR 2023 | link | link & link |
| DAPL         | TNNLS 2023 | link | link       |

Citation

If our code is helpful to your research or projects, please consider citing:

```bibtex
@inproceedings{bai2024prompt,
  title={Prompt-based distribution alignment for unsupervised domain adaptation},
  author={Bai, Shuanghao and Zhang, Min and Zhou, Wanqi and Huang, Siteng and Luan, Zhirong and Wang, Donglin and Chen, Badong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={2},
  pages={729--737},
  year={2024}
}
```

Contact

If you have any questions, please create an issue on this repository or contact baishuanghao@stu.xjtu.edu.cn.

Acknowledgements

The style of this README follows MaPLe. Our code is based on the CoOp, CoCoOp, DAPL, and MaPLe repositories. We thank the authors for releasing their code. If you use their code, please consider citing these works as well.