Official implementation for TransDA

Official PyTorch implementation of "Transformer-Based Source-Free Domain Adaptation", accepted by APIN 2022.

Overview:

<img src="image/overview.png" width="1000"/>

Results:

<img src="image/result_office31.png" width="1000"/> <img src="image/result_officehome.png" width="1000"/>

Prerequisites:

Prepare the pretrained model

We choose R50+ViT-B_16 as our encoder.

```bash
wget https://storage.googleapis.com/vit_models/imagenet21k/R50+ViT-B_16.npz
mkdir -p ./model/vit_checkpoint/imagenet21k
mv R50+ViT-B_16.npz ./model/vit_checkpoint/imagenet21k/R50+ViT-B_16.npz
```
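
As a quick sanity check (not part of the official pipeline), the downloaded .npz file can be inspected with NumPy; the path below assumes the directory layout created above.

```python
# Sanity-check sketch (assumption): confirm the downloaded checkpoint loads
# and list a few of its parameter arrays.
import numpy as np

ckpt_path = "./model/vit_checkpoint/imagenet21k/R50+ViT-B_16.npz"
weights = np.load(ckpt_path)
print(f"{len(weights.files)} parameter arrays in {ckpt_path}")
print("example keys:", weights.files[:5])
```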

Our checkpoints can be found on Dropbox.

Dataset:

Training

Office-31

```bash
sh run_office_uda.sh
```

Office-Home

```bash
sh run_office_home_uda.sh
```

VisDA-2017

```bash
sh run_visda.sh
```
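
To queue all three benchmarks in one go, a minimal sketch is shown below; it assumes the scripts are launched from the repository root and need no extra arguments.

```python
# Minimal sketch (assumption): run the three benchmark scripts back to back.
import subprocess

for script in ["run_office_uda.sh", "run_office_home_uda.sh", "run_visda.sh"]:
    print(f"Running {script} ...")
    subprocess.run(["sh", script], check=True)  # stop at the first failing run
```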

References

ViT

TransUNet

SHOT