Code for our ECCV 2020 paper "A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation".
Prerequisites:
- python == 3.6.8
- pytorch == 1.1.0
- torchvision == 0.3.0
- numpy, scipy, PIL, argparse, tqdm
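A minimal environment setup, assuming a pip-based install (conda works just as well); the versions mirror the list above, the pillow package provides the PIL module, and argparse ships with Python:

  pip install torch==1.1.0 torchvision==0.3.0 numpy scipy pillow tqdm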
Dataset:
- Please manually download the Office, Office-Home, and ImageNet-Caltech datasets from their official websites, and modify the image paths in each '.txt' file under the folder './data/' so that they point to your local copies (see the sketch after this list).
- We adopt the same data protocol as PADA.
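A hedged sketch of the path update, assuming the PADA-style list format in which each line of a '.txt' file stores an absolute image path followed by a class label, and assuming GNU sed; '/old/dataset/root' and '/path/to/your/datasets' are placeholders, not paths from this repository:

  find ./data -name "*.txt" -exec sed -i 's|/old/dataset/root|/path/to/your/datasets|g' {} +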
Training:
- Partial Domain Adaptation (PDA) on the Office-Home dataset [Art(s=0) -> Clipart(t=1)] (a loop over all Office-Home pairs is sketched after this list)
  python run_partial.py --s 0 --t 1 --dset office_home --net ResNet50 --cot_weight 1. --output run1 --gpu_id 0
- Partial Domain Adaptation (PDA) on the Office dataset [Amazon(s=0) -> DSLR(t=1)]
  python run_partial.py --s 0 --t 1 --dset office --net ResNet50 --cot_weight 5. --output run1 --gpu_id 0
  python run_partial.py --s 0 --t 1 --dset office --net VGG16 --cot_weight 5. --output run1 --gpu_id 0
- Partial Domain Adaptation (PDA) on the ImageNet-Caltech dataset [ImageNet(s=0) -> Caltech(t=1)]
  python run_partial.py --s 0 --t 1 --dset imagenet_caltech --net ResNet50 --cot_weight 5. --output run1 --gpu_id 0
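As referenced above, a shell loop can sweep every Office-Home transfer task; this sketch only reuses the argument convention of the commands above and assumes that domain indices 0-3 map to the four Office-Home domain list files under './data/' (0=Art and 1=Clipart, as in the first example):

  for s in 0 1 2 3; do
    for t in 0 1 2 3; do
      # skip the identity pair s == t
      if [ "$s" != "$t" ]; then
        python run_partial.py --s $s --t $t --dset office_home --net ResNet50 --cot_weight 1. --output run1 --gpu_id 0
      fi
    done
  done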
Citation
If you find this code useful for your research, please cite our paper:
@inproceedings{liang2020baus,
  title={A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation},
  author={Liang, Jian and Wang, Yunbo and Hu, Dapeng and He, Ran and Feng, Jiashi},
  booktitle={European Conference on Computer Vision (ECCV)},
  pages={xx-xx},
  month={August},
  year={2020}
}
Acknowledgement
Some parts of this project are built upon the following open-source implementation: