Introduction

This repository is a PyTorch implementation of Lee et al., "Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild," ICCV 2019.

@inproceedings{lee2019overcoming,
  title={Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild},
  author={Lee, Kibok and Lee, Kimin and Shin, Jinwoo and Lee, Honglak},
  booktitle={ICCV},
  year={2019}
}

This implementation also includes state-of-the-art distillation-based methods for class-incremental learning (a.k.a. single-head continual learning).

Please see the [training recipes] to replicate them.
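
For orientation, the shared ingredient of these methods is a knowledge distillation loss that matches the current model's predictions to those of a previous-stage teacher model. Below is a generic sketch of such a loss in PyTorch; the temperature value and the loss weighting are assumptions that differ across methods, so this is not the repo's exact implementation.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both output distributions with temperature T, then match
    # the student to the teacher with KL divergence. Scaling by T*T
    # keeps gradient magnitudes comparable across temperatures.
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)
```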

Dependencies

Data

You can either generate the datasets yourself or download the h5 files from the links below. The external data are optional; skip them if you don't want to use them. All data are assumed to be in data/{dataset}/, where {dataset} is one of cifar100, tiny, or imagenet.
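After downloading or generating a file, a quick sanity check is to open it and list its contents. This is a minimal sketch assuming h5py is installed; the file path and any key names are assumptions, so inspect f.keys() on your copy rather than relying on the names here.

```python
import h5py

# Path is an assumed example following the data/{dataset}/ layout above.
with h5py.File("data/cifar100/train.h5", "r") as f:
    print(list(f.keys()))  # shows the arrays stored in the file
```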

CIFAR-100 (Training data)

This will be automatically downloaded.
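As a minimal sketch, the automatic download can be reproduced with torchvision; whether the repo uses torchvision internally is an assumption, but the target directory follows the data/{dataset}/ layout above.

```python
from torchvision.datasets import CIFAR100

# Downloads the training split into data/cifar100/ on first use;
# later calls reuse the cached copy.
train_set = CIFAR100(root="data/cifar100", train=True, download=True)
```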

TinyImages (External data)

ImageNet (Training and external data)

Task splits

Train and test

Evaluation

Note