
# Core-tuning

This repository is the official implementation of "Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning" (NeurIPS 2021).

The paper proposes Core-tuning, a contrast-regularized fine-tuning method for contrastive self-supervised visual models. The implementation is organized as follows.
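At a high level, Core-tuning augments the standard cross-entropy fine-tuning objective with a contrastive regularizer on the learned features. The sketch below illustrates that idea with a plain supervised contrastive term; the paper's actual regularizer differs in detail, and every name here (`supervised_contrastive_loss`, `core_tuning_loss`, `eta`) is illustrative rather than taken from the repository.

```python
# Illustrative sketch only -- not the exact loss from the paper or Core-tuning.py.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Plain supervised contrastive loss over L2-normalized features."""
    features = F.normalize(features, dim=1)                # (N, D)
    logits = features @ features.T / temperature           # pairwise similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(self_mask, float('-inf'))  # drop self-pairs
    # Positives: other samples in the batch that share the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts)
    # Average over anchors that actually have at least one positive.
    return per_anchor[pos_mask.sum(1) > 0].mean()

def core_tuning_loss(logits, features, labels, eta=0.1):
    """Fine-tuning objective: cross-entropy plus an eta-weighted contrastive term."""
    return F.cross_entropy(logits, labels) + eta * supervised_contrastive_loss(features, labels)
```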

## 1. Requirements

```bash
pip install -r requirements.txt
```

## 2. Pretrained models

The fine-tuned model is initialized from a MoCo-v2 pretrained checkpoint, which is loaded inside the model class as follows:

```python
# Load the MoCo-v2 pretrained weights into the feature extractor.
checkpoint = torch.load("./checkpoint/pretrain_moco_v2.pkl")['model']
self.extractor.load_state_dict(
    {k.replace('module.encoder.', ''): v for k, v in checkpoint.items()},
    strict=False)
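```

The key renaming strips the `module.encoder.` prefix left over from the pretraining setup, and `strict=False` lets the pretrained backbone weights load while the classifier head, which is absent from the checkpoint, stays randomly initialized.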

## 3. Datasets

The training and evaluation commands below use CIFAR-10 (`-d cifar10`).

## 4. Training

```bash
python Core-tuning.py -a resnet50-ssl --gpu 0 -d cifar10 --eta_weight 0.1 --mixup_alpha 1 --checkpoint checkpoint/ssl-core-tuning/Core_eta0.1_alpha1 --train-batch 64 --accumulate_step 4 --test-batch 100
```
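Here `--eta_weight` presumably sets the weight of the contrastive regularizer (the `eta` above), and with `--train-batch 64` and `--accumulate_step 4` gradients accumulate to an effective batch of 256. The `--mixup_alpha` flag suggests standard mixup augmentation (Zhang et al., 2018); a minimal sketch of that step is given below, and it is not taken from `Core-tuning.py`.

```python
# Hypothetical mixup step implied by --mixup_alpha; standard formulation only.
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=1.0):
    """Convexly combine each sample with a randomly permuted partner."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

def mixup_ce_loss(logits, y_a, y_b, lam):
    """Cross-entropy interpolated between the two label assignments."""
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
```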

## 5. Evaluation

```bash
python Core-tuning.py -a resnet50-ssl --gpu 0 -d cifar10 --test-batch 100 --evaluate --checkpoint checkpoint/Core-tuning-model/ --resume checkpoint/Core-tuning-model/Core-tuning-model.tar
```
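The `--resume` flag restores a saved checkpoint before evaluation. A minimal sketch of how such an archive is typically loaded in PyTorch follows; the `state_dict` key and the plain ResNet-50 stand-in are assumptions, not confirmed by the repository.

```python
import torch
import torchvision

# Hypothetical resume logic: the checkpoint layout ('state_dict' key) and the
# plain ResNet-50 backbone are assumptions, not taken from Core-tuning.py.
model = torchvision.models.resnet50()  # stand-in for the repo's resnet50-ssl
checkpoint = torch.load('checkpoint/Core-tuning-model/Core-tuning-model.tar',
                        map_location='cpu')
model.load_state_dict(checkpoint['state_dict'], strict=False)
model.eval()  # disable dropout and freeze batch-norm statistics for evaluation
```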

## 6. Results

| Methods | Top-1 Accuracy (%) |
| --- | --- |
| CE-tuning | 94.70 ± 0.39 |
| Core-tuning (ours) | 97.31 ± 0.10 |
<p align="left"> <img src="visualization.jpg" height=200> </p>

## 7. Citation

If you find our work inspiring or use our codebase in your research, please cite:

```bibtex
@inproceedings{zhang2021unleashing,
  title={Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning},
  author={Zhang, Yifan and Hooi, Bryan and Hu, Dapeng and Liang, Jian and Feng, Jiashi},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}
```

## 8. Acknowledgements

This project builds on the MoCo and SupContrast codebases.