CARE (NeurIPS 2021)

Revitalizing CNN Attentions via Transformers in Self-Supervised Visual Representation Learning <br> Chongjian GE, Youwei Liang, Yibing Song, Jianbo Jiao, Jue Wang, and Ping Luo <br>


Updates

Coming soon.

Requirements

To install requirements:

conda create -n care python=3.6
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
pip install tensorboardX
pip install ipdb
pip install einops
pip install loguru
pip install pyarrow==3.0.0
pip install lmdb
pip install tqdm

📋 PyTorch>=1.6 is needed for running the code.

Data Preparation

Prepare the ImageNet data in {data_path}/train.lmdb and {data_path}/val.lmdb

Replace the original data path in care/data/dataset_lmdb (Line7 and Line40) with your new {data_path}.

📋 Note that we use LMDB files to speed up data processing.

📋 We also provide code to load data in the single-image (.jpg) format.
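For orientation, an LMDB-backed dataset can be sketched as below. This is a hypothetical sketch, not the repository's actual care/data/dataset_lmdb implementation; the class name, key scheme, and `subdir=False` setting (assuming each .lmdb path is a single file) are all assumptions.

```python
import os


def lmdb_paths(data_path):
    """Expected locations of the train/val LMDB files under {data_path}."""
    return (os.path.join(data_path, "train.lmdb"),
            os.path.join(data_path, "val.lmdb"))


class LMDBImageNet:
    """Hypothetical read-only LMDB dataset; the real dataset_lmdb may differ."""

    def __init__(self, lmdb_file, transform=None):
        import lmdb  # imported lazily so the sketch loads without lmdb installed
        # readonly + lock=False is the usual setting for multi-worker reading;
        # subdir=False assumes the path is a single LMDB file, not a directory.
        self.env = lmdb.open(lmdb_file, subdir=False, readonly=True,
                             lock=False, readahead=False)
        with self.env.begin(write=False) as txn:
            self.length = txn.stat()["entries"]
        self.transform = transform

    def __len__(self):
        return self.length
```

A transform (e.g., the SSL augmentation pipeline) would then be applied per sample in `__getitem__`, which is omitted here because the record serialization is repository-specific.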

Training

Before training the ResNet-50 (100 epochs) in the paper, run this command first to add the repository to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/

Then run the training code via:

bash run_train.sh      #(The training script is used for training CARE with 8 GPUs)
bash single_gpu_train.sh    #(We also provide a script for training CARE with only one GPU)

📋 The training script performs unsupervised pre-training of a ResNet-50 model on ImageNet on an 8-GPU machine.

  1. Use -b to specify the batch size, e.g., -b 128.
  2. Use -d to specify the GPU IDs for training, e.g., -d 0-7.
  3. Use --log_path to specify the main folder for saving experimental results.
  4. Use --experiment-name to specify the folder for saving training outputs.
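Putting the flags together, a typical launch might look like the following; the log path and experiment name are placeholders, and the exact argument handling may differ in your checkout of run_train.sh.

```shell
# Hypothetical launch combining the flags above.
bash run_train.sh -b 128 -d 0-7 \
    --log_path ./outputs \
    --experiment-name care_resnet50_100ep
```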

The codebase also supports training other backbones (e.g., ResNet101 and ResNet152) with different training schedules (e.g., 200, 400, and 800 epochs).

Evaluation

Before starting the evaluation, run this command first to add the repository to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/

Then, to evaluate the pre-trained model (e.g., ResNet50-100epoch) on ImageNet, run:

bash run_val.sh      #(The evaluation script is used for evaluating CARE with 8 GPUs)
bash debug_val.sh    #(We also provide a script for evaluating CARE with only one GPU)

📋 The evaluation script performs supervised linear evaluation of a ResNet-50 model on ImageNet on an 8-GPU machine.

  1. Use -b to specify the batch size, e.g., -b 128.
  2. Use -d to specify the GPU IDs for evaluation, e.g., -d 0-7.
  3. Modify --log_path according to your own configuration.
  4. Modify --experiment-name according to your own configuration.
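Analogously to training, a linear-evaluation launch might look like this; the log path and experiment name are placeholders, and argument handling may differ in your checkout of run_val.sh.

```shell
# Hypothetical linear-evaluation launch using the flags above.
bash run_val.sh -b 128 -d 0-7 \
    --log_path ./outputs \
    --experiment-name care_resnet50_100ep_linear
```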

Pre-trained Models

We provide some pre-trained models in the [shared folder]:

Here are some examples.

More models are provided in the following model zoo part.

📋 We will provide more pretrained models in the future.

Model Zoo

Our models achieve the following performance:

Self-supervised learning on image classifications.

| Method | Backbone | epoch | Top-1 | Top-5 | pretrained model | linear evaluation model |
| ------ | -------- | ----- | ----- | ----- | ---------------- | ----------------------- |
| CARE | ResNet50 | 100 | 72.02% | 90.02% | pretrained | linear_model |
| CARE | ResNet50 | 200 | 73.78% | 91.50% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 400 | 74.68% | 91.97% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 800 | 75.56% | 92.32% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 100 | 73.51% | 91.66% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 200 | 75.00% | 92.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 400 | 76.48% | 92.99% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 800 | 77.04% | 93.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 100 | 73.54% | 91.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 200 | 75.89% | 92.70% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 400 | 76.85% | 93.31% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 800 | 77.23% | 93.52% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 100 | 74.59% | 92.09% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 200 | 76.58% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 400 | 77.40% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 800 | 78.11% | 93.81% | [pretrained] (wip) | [linear_model] (wip) |
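The Top-1/Top-5 columns above follow the standard top-k accuracy definition: a prediction counts as correct if the true label appears among the k highest-scored classes. A minimal illustrative helper (not part of the codebase):

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scored classes.

    scores: list of per-class score lists, one per sample.
    labels: list of ground-truth class indices.
    """
    correct = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        correct += label in topk
    return correct / len(labels)
```

For example, with scores `[[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]` and labels `[1, 2]`, top-1 accuracy is 0.5 while top-3 accuracy is 1.0.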

Transfer learning to object detection and semantic segmentation.

COCO det

| Method | Backbone | epoch | AP_bb | AP_50 | AP_75 | pretrained model | det/seg model |
| ------ | -------- | ----- | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 39.4 | 59.2 | 42.6 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 39.6 | 59.4 | 42.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 39.5 | 60.2 | 43.1 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 39.8 | 60.5 | 43.5 | [pretrained] (wip) | [model] (wip) |

COCO instance seg

| Method | Backbone | epoch | AP_mk | AP_50 | AP_75 | pretrained model | det/seg model |
| ------ | -------- | ----- | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 34.6 | 56.1 | 36.8 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 34.7 | 56.1 | 36.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 35.9 | 57.2 | 38.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 36.2 | 57.4 | 38.8 | [pretrained] (wip) | [model] (wip) |

VOC07+12 det

| Method | Backbone | epoch | AP_bb | AP_50 | AP_75 | pretrained model | det/seg model |
| ------ | -------- | ----- | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 57.7 | 83.0 | 64.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 57.9 | 83.0 | 64.7 | [pretrained] (wip) | [model] (wip) |

📋 More results are provided in the paper.

Acknowledgements

We especially thank the contributors of the momentum2-teacher codebase for providing helpful code. For questions regarding CARE, feel free to open an issue in this repository or contact the author directly (rhettgee@connect.hku.hk).

Citation

If you find our work useful in your research, please consider citing our paper:

@InProceedings{ge2021revitalizing,
  author       = {Chongjian Ge and Youwei Liang and Yibing Song and Jianbo Jiao and Jue Wang and Ping Luo},
  title        = {Revitalizing CNN Attentions via Transformers in Self-Supervised Visual Representation Learning},
  booktitle    = {Advances in Neural Information Processing Systems},
  year         = {2021},
}