
PT4AL: Using Self-Supervised Pretext Tasks for Active Learning (ECCV 2022) - Official PyTorch Implementation


Update Note

[solved problem]
We are redoing the CIFAR10 experiment.

The current reproduction achieves accuracy in the range of 91% to 93%.

We will re-tune the code for stable performance in the near future.

We have confirmed that the remaining experiments reproduce without issue.

Sorry for the inconvenience.

Experiment Setting:

Prerequisites:

Python >= 3.7

CUDA = 11.0

PyTorch = 1.7.1

numpy >= 1.16.0

Running the Code

To generate the train and test datasets:

python make_data.py
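
The sketch below illustrates what a data-preparation step of this kind might do: export CIFAR-10 into image files on disk so the later stages can treat them as an unlabeled pool. The output directory layout (`DATA/train`, `DATA/test`) and the filename convention are assumptions for illustration, not necessarily the exact format produced by `make_data.py`.

```python
# Hypothetical sketch: export CIFAR-10 to image files so the pretext and
# active-learning stages can read an "unlabeled" pool from disk.
# Paths and filenames are assumptions, not the repository's exact layout.
import os
from torchvision.datasets import CIFAR10

def export_split(split, root="./DATA"):
    ds = CIFAR10("./data", train=(split == "train"), download=True)
    out = os.path.join(root, split)
    os.makedirs(out, exist_ok=True)
    for idx, (img, label) in enumerate(ds):
        # Keep the class label in the filename so later stages can recover it.
        img.save(os.path.join(out, f"{label}_{idx}.png"))

if __name__ == "__main__":
    export_split("train")
    export_split("test")
```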

To train the rotation prediction task on the unlabeled set:

python rotation.py
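
As a quick reference, here is a minimal sketch of the rotation pretext task: each unlabeled image is rotated by 0/90/180/270 degrees and a network predicts which rotation was applied (a 4-way classification problem). The backbone choice (ResNet-18) and optimizer settings are assumptions, not necessarily those used in `rotation.py`.

```python
# Hypothetical sketch of the rotation pretext task (4-way rotation classification).
# Backbone and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

def rotate_batch(x):
    """Return 4 rotated copies of an NCHW batch and the rotation labels."""
    rots = [torch.rot90(x, k, dims=[2, 3]) for k in range(4)]
    labels = torch.cat([torch.full((x.size(0),), k, dtype=torch.long) for k in range(4)])
    return torch.cat(rots), labels

model = torchvision.models.resnet18(num_classes=4)  # 4 rotation classes
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

def train_step(images):
    rotated, targets = rotate_batch(images)
    optimizer.zero_grad()
    loss = criterion(model(rotated), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```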

To extract pretext task losses and create batches:

python make_batches.py
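
The batch-construction idea, sketched below under stated assumptions: score every unlabeled image with its rotation-prediction loss, sort the pool by that loss, and split the sorted list into equally sized batches, one per active-learning cycle. The loader yielding filenames, the sort direction, and the number of batches are illustrative assumptions rather than the exact behavior of `make_batches.py`.

```python
# Hypothetical sketch: rank unlabeled samples by pretext-task loss and
# split the ranking into per-cycle batches.
import torch
import torch.nn as nn

@torch.no_grad()
def pretext_losses(model, loader):
    """Per-sample rotation losses; assumes the loader yields (images, rot_labels, names)."""
    criterion = nn.CrossEntropyLoss(reduction="none")
    model.eval()
    scores, paths = [], []
    for images, rot_labels, names in loader:
        losses = criterion(model(images), rot_labels)
        scores.extend(losses.tolist())
        paths.extend(names)
    return scores, paths

def make_batches(scores, paths, num_batches=10):
    # Sort by loss (highest first, an assumption) and cut into equal chunks.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    per_batch = len(order) // num_batches
    return [[paths[i] for i in order[b * per_batch:(b + 1) * per_batch]]
            for b in range(num_batches)]
```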

To run and evaluate the active learning task:

python main.py
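
For orientation, a minimal sketch of an active-learning loop over the pretext-loss batches: at cycle c, a fixed labeling budget is drawn from batch c (here, evenly spaced picks over the sorted batch), the labeled pool grows, and the task classifier is retrained and evaluated. The budget size, selection rule, and the helpers `train_classifier` and `evaluate` are hypothetical placeholders, not functions from this repository.

```python
# Hypothetical sketch of the active-learning loop over pretext-loss batches.
def run_active_learning(batches, budget=1000, cycles=10):
    labeled = []
    for c in range(cycles):
        batch = batches[c]
        step = max(len(batch) // budget, 1)
        labeled += batch[::step][:budget]        # evenly spaced picks (assumption)
        model = train_classifier(labeled)        # hypothetical helper
        acc = evaluate(model)                    # hypothetical helper
        print(f"cycle {c}: {len(labeled)} labels, accuracy {acc:.2f}")
```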

To run the cold-start experiment (random baseline):

python main_random.py


To run the cold-start experiment (PT4AL):

python main_pt4al.py
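
To make the difference between the two cold-start commands concrete, here is a small sketch comparing the two ways of choosing the initial labeled set when no labels exist yet: the random baseline samples uniformly, while the PT4AL variant picks samples guided by the pretext-task loss (here, evenly spaced over the loss-sorted pool). The exact selection rule is an assumption for illustration.

```python
# Hypothetical sketch of the two cold-start strategies (initial labeled set).
import random

def cold_start_random(paths, budget=1000, seed=0):
    rng = random.Random(seed)
    return rng.sample(paths, budget)

def cold_start_pt4al(paths, losses, budget=1000):
    # Sort by pretext loss and take evenly spaced samples (assumption).
    order = sorted(range(len(paths)), key=lambda i: losses[i], reverse=True)
    step = max(len(order) // budget, 1)
    return [paths[i] for i in order[::step][:budget]]
```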


Citation

If you use our code in your research, or find our work helpful, please consider citing our paper with the BibTeX entry below:

@inproceedings{yi2022using,
  title = {Using Self-Supervised Pretext Tasks for Active Learning},
  author = {Yi, John Seon Keun and Seo, Minseok and Park, Jongchan and Choi, Dong-Geol},
  booktitle = {Proc. ECCV},
  year = {2022},
}