Improving Contrastive Learning by Visualizing Feature Transformation

This project hosts the code, models, and visualization tools for the paper:

Improving Contrastive Learning by Visualizing Feature Transformation,
Rui Zhu*, Bingchen Zhao*, Jingen Liu, Zhenglong Sun, Chang Wen Chen
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, Oral
arXiv preprint (arXiv 2108.02982)


Highlights

[Highlights figure: overview of the proposed Feature Transformation approach]

Updates

Installation

This project is mainly built on the open-source codebase PyContrast.

Please refer to INSTALL.md and RUN.md for installation and dataset preparation.

Models

For your convenience, we provide the following pre-trained models on ImageNet-1K and ImageNet-100.

| pre-train method | pre-train dataset | backbone | #epochs | linear eval top-1 (%) | VOC det AP50 | COCO det AP | Link |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supervised | ImageNet-1K | ResNet-50 | - | 76.1 | 81.3 | 38.2 | download |
| MoCo-v1 | ImageNet-1K | ResNet-50 | 200 | 60.6 | 81.5 | 38.5 | download |
| MoCo-v1+FT | ImageNet-1K | ResNet-50 | 200 | 61.9 | 82.0 | 39.0 | download |
| MoCo-v2 | ImageNet-1K | ResNet-50 | 200 | 67.5 | 82.4 | 39.0 | download |
| MoCo-v2+FT | ImageNet-1K | ResNet-50 | 200 | 69.6 | 83.3 | 39.5 | download |
| MoCo-v1+FT | ImageNet-100 | ResNet-50 | 200 | 77.2 (on IN-100) | - | - | download |
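
To plug one of these checkpoints into a torchvision ResNet-50, the backbone weights usually need to be extracted from the contrastive checkpoint first. Below is a minimal sketch assuming the common MoCo-style key layout (a `state_dict` with `module.encoder_q.` prefixes); the file name and key names here are assumptions, so inspect your downloaded checkpoint and adjust accordingly.

```python
import torch
from torchvision.models import resnet50

# Hypothetical file name; use the checkpoint you downloaded from the table above.
ckpt = torch.load("mocov2_ft_imagenet1k_200ep.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"

# Keep only the query-encoder weights and drop the projection/classification head.
backbone_state = {
    k.replace("module.encoder_q.", ""): v
    for k, v in state.items()
    if k.startswith("module.encoder_q.") and "fc" not in k
}

model = resnet50()
msg = model.load_state_dict(backbone_state, strict=False)
print("missing keys:", msg.missing_keys)  # should be only the fc.* head
```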


Usage

Training on IN-1K

```bash
python main_contrast.py --method MoCov2 \
    --data_folder your/path/to/imagenet-1K/dataset --dataset imagenet \
    --epochs 200 --input_res 224 --cosine --batch_size 256 --learning_rate 0.03 \
    --mixnorm --mixnorm_target posneg --sep_alpha --pos_alpha 2.0 --neg_alpha 1.6 \
    --mask_distribution beta --expolation_mask --alpha 0.999 \
    --multiprocessing-distributed --world-size 1 --rank 0 --save_score --num_workers 8
```
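
The `--pos_alpha`, `--neg_alpha`, `--mask_distribution beta`, and `--expolation_mask` flags control the paper's two feature transformations: positive extrapolation (making positive pairs harder) and negative interpolation (diversifying the queue negatives), with mixing coefficients drawn from Beta distributions. The sketch below is a minimal PyTorch illustration of these two operations as described in the paper, not this repository's exact implementation; tensor names, shapes, and normalization details are assumptions.

```python
import torch
import torch.nn.functional as F

def positive_extrapolation(z_q, z_k, alpha=2.0):
    """Push the positive pair apart to create a harder positive.
    lam ~ Beta(alpha, alpha), sampled per example."""
    lam = torch.distributions.Beta(alpha, alpha).sample((z_q.size(0), 1)).to(z_q.device)
    z_q_hat = (1 + lam) * z_q - lam * z_k
    z_k_hat = (1 + lam) * z_k - lam * z_q
    # Re-normalize so features stay on the unit hypersphere.
    return F.normalize(z_q_hat, dim=1), F.normalize(z_k_hat, dim=1)

def negative_interpolation(queue, alpha=1.6):
    """Mix each negative in the memory queue with another randomly chosen
    negative to diversify the negatives. lam ~ Beta(alpha, alpha)."""
    perm = torch.randperm(queue.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample((queue.size(0), 1)).to(queue.device)
    neg_hat = lam * queue + (1 - lam) * queue[perm]
    return F.normalize(neg_hat, dim=1)
```

In MoCo-style training, these transformed embeddings would replace the original query, key, and queue features when forming the InfoNCE logits.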

Linear Evaluation on IN-1K

```bash
python main_linear.py --method MoCov2 \
    --data_folder your/path/to/imagenet-1K/dataset \
    --ckpt your/path/to/pretrain_model --n_class 1000 \
    --multiprocessing-distributed --world-size 1 --rank 0 \
    --epochs 100 --lr_decay_epochs 60,80 --num_workers 8
```
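
Linear evaluation freezes the pre-trained backbone and trains only a linear classifier on its features. Here is a minimal sketch of that standard protocol, not this repository's exact code; the learning rate shown is the value commonly used for MoCo-style linear evaluation, while `main_linear.py` has its own defaults.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Frozen ResNet-50 backbone + trainable linear head (standard linear probe).
backbone = resnet50()
backbone.fc = nn.Identity()          # expose the 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

classifier = nn.Linear(2048, 1000)   # matches --n_class 1000
optimizer = torch.optim.SGD(classifier.parameters(), lr=30.0, momentum=0.9)

def linear_probe_step(images, labels):
    with torch.no_grad():            # backbone stays frozen
        feats = backbone(images)
    logits = classifier(feats)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```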

Training on IN-100

```bash
python main_contrast.py --method MoCov2 \
    --data_folder your/path/to/imagenet-1K/dataset --dataset imagenet100 \
    --imagenet100path your/path/to/imagenet100.class \
    --epochs 200 --input_res 224 --cosine --batch_size 256 --learning_rate 0.03 \
    --mixnorm --mixnorm_target posneg --sep_alpha --pos_alpha 2.0 --neg_alpha 1.6 \
    --mask_distribution beta --expolation_mask --alpha 0.99 \
    --multiprocessing-distributed --world-size 1 --rank 0 --save_score --num_workers 8
```
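
The `--imagenet100path` argument points to a file listing the 100 ImageNet classes that define the IN-100 subset. A plausible format is one WordNet synset ID per line, as sketched below with illustrative IDs; check the `imagenet100.class` file shipped with the repository for the authoritative list and format.

```text
n01440764
n01443537
n01484850
...
```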

Linear Evaluation on IN-100

```bash
python main_linear.py --method MoCov2 \
    --data_folder your/path/to/imagenet-1K/dataset --dataset imagenet100 \
    --imagenet100path your/path/to/imagenet100.class --n_class 100 \
    --ckpt your/path/to/pretrain_model \
    --multiprocessing-distributed --world-size 1 --rank 0 --num_workers 8
```

Transferring to Object Detection

Please refer to DenseCL and MoCo for transferring the pre-trained models to object detection.

Visualization Tools
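
The training commands above pass `--save_score`, which suggests the positive/negative similarity scores are dumped during pre-training; the paper visualizes the distributions of these scores to diagnose training. Below is a minimal sketch of such a plot, assuming the scores have been exported as NumPy arrays; the file names are hypothetical, so adapt them to whatever `--save_score` actually writes.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical dumps of InfoNCE similarity scores from pre-training.
pos_scores = np.load("pos_scores.npy")  # similarity of each query to its positive
neg_scores = np.load("neg_scores.npy")  # similarities of queries to queue negatives

plt.hist(pos_scores.ravel(), bins=100, alpha=0.6, density=True, label="positive")
plt.hist(neg_scores.ravel(), bins=100, alpha=0.6, density=True, label="negative")
plt.xlabel("cosine similarity")
plt.ylabel("density")
plt.legend()
plt.title("Pos/neg score distributions during pre-training")
plt.savefig("score_distributions.png")
```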

Citations

Please consider citing our paper in your publications if the project helps your research. The BibTeX reference is as follows.

```bibtex
@inproceedings{zhu2021improving,
  title={Improving Contrastive Learning by Visualizing Feature Transformation},
  author={Zhu, Rui and Zhao, Bingchen and Liu, Jingen and Sun, Zhenglong and Chen, Chang Wen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
```