[CVPR-2024] LEAD: Learning Decomposition for Source-free Universal Domain Adaptation

Introduction

The main challenge for Source-free Universal Domain Adaptation (SF-UniDA) is determining whether covariate-shifted samples belong to target-private unknown categories. Existing methods tackle this either through hand-crafted thresholding or by developing time-consuming iterative clustering strategies. In this paper, we propose a new idea, LEArning Decomposition (LEAD), which decouples features into source-known and -unknown components to identify target-private data. Technically, LEAD first leverages orthogonal decomposition analysis for feature decomposition, and then builds instance-level decision boundaries to adaptively identify target-private data. Extensive experiments across various UniDA scenarios demonstrate the effectiveness and superiority of LEAD. Notably, in the OPDA scenario on the VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and reduces the time needed to derive pseudo-labeling decision boundaries by 75%.
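The decomposition idea can be illustrated with a short, conceptual sketch (not the official implementation): take the source-known subspace as the span of the top singular vectors of a hypothetical source prototype matrix, then split each target feature into its projection onto that subspace and the orthogonal residual.

```python
import torch

def decompose(feat, prototypes, k):
    """Conceptual sketch of orthogonal feature decomposition.

    feat:       (D,) target feature vector
    prototypes: (C, D) hypothetical source class prototypes
    k:          number of singular vectors spanning the source-known subspace
    """
    # Orthonormal basis of the source-known subspace via SVD of the prototypes.
    U, _, _ = torch.linalg.svd(prototypes.T, full_matrices=False)  # U: (D, C)
    basis = U[:, :k]
    feat_known = basis @ (basis.T @ feat)  # projection onto the known subspace
    feat_unknown = feat - feat_known       # orthogonal (source-unknown) residual
    return feat_known, feat_unknown
```

Intuitively, a large source-unknown residual suggests a target-private sample; LEAD turns this intuition into instance-level decision boundaries rather than a single global threshold.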

Framework

<img src="figures/LEAD_framework.png" width="1000"/>

Prerequisites

Dataset

We have conducted extensive experiments on four datasets under three category-shift scenarios, i.e., Partial-set DA (PDA), Open-set DA (OSDA), and Open-partial DA (OPDA). Here, $\mathcal{Y}$, $\mathcal{\bar{Y}_s}$, and $\mathcal{\bar{Y}_t}$ denote the classes shared between source and target, the source-private classes, and the target-private classes, respectively. The class split $\mathcal{Y}/\mathcal{\bar{Y}_s}/\mathcal{\bar{Y}_t}$ for each scenario is listed below.

| Datasets    | OPDA       | OSDA    | PDA     |
| ----------- | ---------- | ------- | ------- |
| Office-31   | 10/10/11   | 10/0/11 | 10/21/0 |
| Office-Home | 10/5/50    | 25/0/40 | 25/40/0 |
| VisDA-C     | 6/3/3      | 6/0/6   | 6/6/0   |
| DomainNet   | 150/50/145 | -       | -       |
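For illustration only: many UniDA codebases sort class names and partition them by index, and under that (assumed, unverified here) convention the Office-31 OPDA split above would look like:

```python
# Hypothetical index-based split for Office-31 under OPDA (10/10/11),
# assuming classes are sorted and partitioned by index order.
shared_classes = list(range(0, 10))    # known to both source and target
source_private = list(range(10, 20))   # appear only in the source domain
target_private = list(range(20, 31))   # collapsed into one "unknown" label at test time
```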

Please manually download these datasets from their official websites and unzip them into the ./data folder. To ease your implementation, we have provided an image_unida_list.txt for each subdomain of each dataset.

```text
./data
├── Office
│   ├── Amazon
│   │   ├── ...
│   │   └── image_unida_list.txt
│   ├── Dslr
│   │   ├── ...
│   │   └── image_unida_list.txt
│   └── Webcam
│       ├── ...
│       └── image_unida_list.txt
├── OfficeHome
│   └── ...
└── VisDA
    └── ...
```
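The snippet below sketches one way to consume these lists. It assumes the common "<image_path> <integer_label>" per-line format; please check the shipped image_unida_list.txt files for the exact layout before relying on it.

```python
from pathlib import Path

def load_image_list(list_file):
    """Parse an image_unida_list.txt into (path, label) pairs.

    Assumes one "<image_path> <integer_label>" entry per line; verify
    against the files provided with this repo.
    """
    samples = []
    for line in Path(list_file).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        path, label = line.rsplit(maxsplit=1)
        samples.append((path, int(label)))
    return samples

# e.g., samples = load_image_list("./data/Office/Amazon/image_unida_list.txt")
```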

Steps

  1. Prepare the environment first.
  2. Download the datasets from the corresponding official websites, and unzip them into the ./data folder.
  3. Prepare the source model.
  4. Perform the target model adaptation.

Training

  1. Open-partial Domain Adaptation (OPDA) on Office, OfficeHome, and VisDA

```bash
# Source Model Preparation
bash ./scripts/train_source_OPDA.sh
# Target Model Adaptation
bash ./scripts/train_target_OPDA.sh
```

  2. Open-set Domain Adaptation (OSDA) on Office, OfficeHome, and VisDA

```bash
# Source Model Preparation
bash ./scripts/train_source_OSDA.sh
# Target Model Adaptation
bash ./scripts/train_target_OSDA.sh
```

  3. Partial-set Domain Adaptation (PDA) on Office, OfficeHome, and VisDA

```bash
# Source Model Preparation
bash ./scripts/train_source_PDA.sh
# Target Model Adaptation
bash ./scripts/train_target_PDA.sh
```
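For OSDA and OPDA, results are reported with the H-score, the harmonic mean of the mean per-class accuracy on shared (known) classes and the accuracy of rejecting target-private samples as "unknown". A minimal reference computation:

```python
def h_score(known_acc, unknown_acc):
    """Harmonic mean of known-class and unknown-class accuracy.

    known_acc:   mean per-class accuracy over the shared classes
    unknown_acc: accuracy of classifying target-private samples as "unknown"
    """
    if known_acc + unknown_acc == 0:
        return 0.0
    return 2 * known_acc * unknown_acc / (known_acc + unknown_acc)
```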

Citation

If you find our codebase helpful, please star this project and cite our papers:

@inproceedings{sanqing2024LEAD,
  title={LEAD: Learning Decomposition for Source-free Universal Domain Adaptation},
  author={Qu, Sanqing and Zou, Tianpei and He, Lianghua and Röhrbein, Florian and Knoll, Alois and Chen, Guang and Jiang, Changjun},
  booktitle={CVPR},
  year={2024}
}

@inproceedings{sanqing2023GLC,
  title={Upcycling Models under Domain and Category Shift},
  author={Qu, Sanqing and Zou, Tianpei and Röhrbein, Florian and Lu, Cewu and Chen, Guang and Tao, Dacheng and Jiang, Changjun},
  booktitle={CVPR},
  year={2023}
}

@inproceedings{sanqing2022BMD,
  title={BMD: A general class-balanced multicentric dynamic prototype strategy for source-free domain adaptation},
  author={Qu, Sanqing and Chen, Guang and Zhang, Jing and Li, Zhijun and He, Wei and Tao, Dacheng},
  booktitle={ECCV},
  year={2022}
}

Contact