Outlier-Aware Test-Time Adaptation with Stable Memory Replay

[paper]

Prerequisites

To use this repository, we provide a conda environment:

conda update conda
conda env create -f environment.yaml
conda activate stamp

Structure of Project

This project is built on top of a TTA benchmark and is organized into several directories.

Run

This repository allows you to study a wide range of datasets, models, settings, and methods. A quick overview is given below.

The dataset directory structure is as follows:

|-- datasets
    |-- cifar-10
    |-- cifar-100
    |-- ImageNet
        |-- train
        |-- val
    |-- ImageNet-C
    |-- CIFAR-10-C
    |-- CIFAR-100-C
    |-- LSUN_resize-C
    |-- PLACES365-C
    |-- SVHN-C
    |-- Textures-C
    |-- Tiny-ImageNet-C

For the OOD datasets, you can generate the corrupted versions by following the instructions in this repository or in robustbench.

Get Started

To run one of the benchmarks, the corresponding datasets first need to be downloaded.

Next, specify the root folder for all datasets by setting _C.DATA_DIR = "./data" in conf.py.
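If you prefer to make this change from the command line, a substitution like the following should work, assuming conf.py already contains an assignment of the form _C.DATA_DIR = ... (GNU sed syntax shown):

sed -i 's|_C.DATA_DIR = .*|_C.DATA_DIR = "./data"|' conf.py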

Download the checkpoints of the pre-trained models from here and put them in ./ckpt.
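For example (the checkpoint file name below is a placeholder, not the actual name of the downloaded file):

mkdir -p ./ckpt
# move the downloaded checkpoint(s) into place; replace the placeholder with the real file name
mv /path/to/downloaded_checkpoint.pth ./ckpt/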

How to reproduce

The entry point for all algorithms is test-time-eva-baseline.sh.

To evaluate these methods, modify DATASET and METHOD in test-time-eva.sh; an illustrative excerpt is shown below.
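The variable names come from this README; the values below are assumptions only and may not match the exact options supported by the script:

# illustrative excerpt of test-time-eva.sh (values are examples only)
DATASET=cifar10       # e.g. cifar10, cifar100, imagenet
METHOD=stamp          # or one of the baseline TTA methods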

Then run:

bash test-time-eva-baseline.sh

Acknowledgements