Prompt Vision Transformer for Domain Generalization (DoPrompt)

PyTorch implementation of DoPrompt (Prompt Vision Transformer for Domain Generalization)

Overview

Architecture of the network:

[Figure: DoPrompt framework]
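
For a concrete picture of the architecture before reading the code, the snippet below is a minimal, self-contained PyTorch sketch of the core idea as described in the paper: each source domain owns a few learnable prompt tokens that are prepended to the patch tokens of a ViT, and a lightweight prompt adapter predicts per-image weights over those domain prompts so that a combined prompt can be formed when the target domain is unknown. Everything here (class name, toy transformer, sizes) is made up for illustration; see the DoPrompt implementation in this repository for the actual architecture and training losses.

# Illustrative sketch only; not the implementation in this repository.
import torch
import torch.nn as nn


class ToyPromptViT(nn.Module):
    def __init__(self, num_domains=3, num_classes=10, dim=192,
                 prompt_len=4, img_size=224, patch=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # one learnable prompt (prompt_len tokens) per source domain
        self.domain_prompts = nn.Parameter(torch.zeros(num_domains, prompt_len, dim))
        # prompt adapter: image feature -> weights over the source-domain prompts
        self.adapter = nn.Linear(dim, num_domains)
        self.head = nn.Linear(dim, num_classes)
        for p in (self.cls_token, self.pos_embed, self.domain_prompts):
            nn.init.trunc_normal_(p, std=0.02)

    def _encode(self, x, prompt):
        # prepend prompt tokens to [CLS] + patch tokens and return the CLS feature
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)    # [B, N, dim]
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed  # [B, N+1, dim]
        tokens = torch.cat([prompt, tokens], dim=1)                # [B, L+N+1, dim]
        return self.encoder(tokens)[:, prompt.size(1)]             # CLS output

    def forward(self, x, domain_idx=None):
        if domain_idx is not None:
            # training: use the prompt of each image's ground-truth source domain
            prompt = self.domain_prompts[domain_idx]               # [B, L, dim]
        else:
            # inference: encode once without prompts, then let the adapter
            # mix the source-domain prompts for this image
            empty = x.new_zeros(x.size(0), 0, self.domain_prompts.size(-1))
            feat = self._encode(x, empty)
            weights = self.adapter(feat).softmax(dim=-1)           # [B, num_domains]
            prompt = torch.einsum("bd,dlc->blc", weights, self.domain_prompts)
        return self.head(self._encode(x, prompt))


# Example usage:
#   model = ToyPromptViT(num_domains=3, num_classes=65)
#   logits = model(torch.randn(2, 3, 224, 224), domain_idx=torch.tensor([0, 2]))  # training
#   logits = model(torch.randn(2, 3, 224, 224))                                   # inference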

Training

Refer to the DomainBed README for more details on the commands for running jobs. The full training protocol sweeps across multiple hyperparameters; below we list hyperparameter settings that reach good results. (Update 17/11/22: since we have received many queries about the ERM baseline hyperparameters, we list them below as well.)

# OfficeHome ERM
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset OfficeHome --test_env 0/1/2/3 --algorithm ERM --output_dir results/exp \
     --hparams '{"lr": 1e-5, "lr_classifier": 1e-4}'
# OfficeHome
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset OfficeHome --test_env 0/1/2/3 --algorithm DoPrompt --output_dir results/exp \
     --hparams '{"lr": 1e-5, "lr_classifier": 1e-3}'
# PACS ERM
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset PACS --test_env 0/1/2/3 --algorithm ERM --output_dir results/exp \
     --hparams '{"lr": 5e-6, "lr_classifier": 5e-5}'
# PACS
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset PACS --test_env 0/1/2/3 --algorithm DoPrompt --output_dir results/exp \
     --hparams '{"lr": 5e-6, "lr_classifier": 5e-5, "wd_classifier": 1e-5}'
# VLCS ERM
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset VLCS --test_env 0/1/2/3 --algorithm ERM --output_dir results/exp \
     --hparams '{"lr": 5e-6, "lr_classifier": 5e-5}'
# VLCS
python -m domainbed.scripts.train --data_dir=./domainbed/data/ --steps 5001 --dataset VLCS --test_env 0/1/2/3 --algorithm DoPrompt --output_dir results/exp \
     --hparams '{"lr": 5e-6, "lr_classifier": 5e-6}'

Collect Results

python -m domainbed.scripts.collect_results --input_dir=results
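
If you prefer LaTeX-formatted tables, upstream DomainBed's collect_results script accepts a --latex flag; verify in domainbed/scripts/collect_results.py that this fork kept it.

# Optional: LaTeX-formatted results table (flag from upstream DomainBed)
python -m domainbed.scripts.collect_results --input_dir=results --latex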

Requirements

pip install -r domainbed/requirements.txt
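
The datasets themselves can be fetched with the DomainBed download script. The command below is the upstream DomainBed invocation and is assumed to work unchanged here.

# Download the DomainBed datasets (OfficeHome, PACS, VLCS, ...) into ./domainbed/data/
python -m domainbed.scripts.download --data_dir=./domainbed/data/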

Citation

@article{zheng2022prompt,
  title={Prompt Vision Transformer for Domain Generalization},
  author={Zheng, Zangwei and Yue, Xiangyu and Wang, Kai and You, Yang},
  journal={arXiv preprint arXiv:2208.08914},
  year={2022}
}

Acknowledgement

This code is built on DomainBed. We thank the authors for sharing their code.