Feature-Distillation

By Yixuan Wei*, Han Hu*, Zhenda Xie, Zheng Zhang, Yue Cao, Jianmin Bao, Dong Chen and Baining Guo.

This repo is the official implementation of "Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation".

Updates

11/30/2022

  1. Distilled and fine-tuned models on ImageNet-1K (ViT Large) are provided.

11/28/2022

Initial commits:

  1. Distilled and fine-tuned models on ImageNet-1K (Swin Base, and ViT Base) are provided.
  2. The supported code for ImageNet-1K distillation and fine-tuning is provided.

Introduction

FD is initially described in arxiv. It is a simple framework that converts traditional pre-training models, such as image classification (DeiT), instance contrastive learning (DINO), and image-text alignment (CLIP), into new models with better fine-tuning performance. Through a set of diagnosing tools, we find that models distilled with feature maps are endowed with the following good properties, which are also revealed in masked image modeling models: 1) more diverse attention heads; 2) more diagonal attention patterns; 3) flatter loss landscapes.
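The core recipe can be sketched in a few lines: regress the student's feature map onto a whitened (non-parametric layer-normalized) copy of the frozen teacher's feature map with a smooth-L1 loss. The snippet below is a minimal NumPy illustration of that idea, not the repo's implementation; the function names are ours.

```python
import numpy as np

def whiten(x, eps=1e-6):
    # Non-parametric layer norm over the channel dimension:
    # zero mean, unit variance, no learnable affine parameters.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def smooth_l1(diff, beta=1.0):
    # Quadratic near zero, linear beyond beta (Huber-style).
    absd = np.abs(diff)
    return np.where(absd < beta, 0.5 * diff ** 2 / beta, absd - 0.5 * beta)

def fd_loss(student_feat, teacher_feat):
    # The distillation target is the whitened teacher feature map;
    # the teacher itself is frozen during distillation.
    target = whiten(teacher_feat)
    return smooth_l1(student_feat - target).mean()
```

In the actual framework the student also carries a lightweight projection head on top of its feature map, and the whitening keeps the regression target well-scaled across teachers with very different feature statistics.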

<div align="center"> <img src="figures/teaser.jpg" height="250px" /> </div>

Main Results on ImageNet

Swin Transformer

ImageNet-1K Distilled and Fine-tuned Models

| name | distillation epochs | teacher model | image resolution | acc@1 | distilled model | fine-tuned model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-Base | 300 | EsViT-Base | 224x224 | 85.1 | google/config | google/config |

Vision Transformer

ImageNet-1K Distilled and Fine-tuned Models

| name | distillation epochs | teacher model | image resolution | acc@1 | distilled model | fine-tuned model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ViT-Base | 300 | CLIP-Base | 224x224 | 84.9 | google/config | google/config |
| ViT-Base | 300 | DINO-Base | 224x224 | 83.8 | google/config | google/config |
| ViT-Base | 300 | DeiT-Base | 224x224 | 83.0 | google/config | google/config |
| ViT-Large | 300 | CLIP-Large | 224x224 | 87.7 | google/config | google/config |

Citation

If you find our work useful in your research, please cite:

@article{wei2022FD,
  title={Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation},
  author={Yixuan Wei and Han Hu and Zhenda Xie and Zheng Zhang and Yue Cao and Jianmin Bao and Dong Chen and Baining Guo},
  journal={Tech Report},
  year={2022}
}

Getting Started

Installation

# Create environment
conda create -n FD python=3.8 -y
conda activate FD

# Install requirements
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113

# Clone codes
git clone https://github.com/SwinTransformer/Feature-Distillation
cd Feature-Distillation

# Install other requirements
pip install -r requirements.txt

Feature-Distillation

To distill models, run:

python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> main_fd.py \
--cfg <config-file> --data-path <imagenet-path>/train [--batch-size <batch-size-per-gpu> --output <output-directory> --tag <job-tag>]

For example, to distill CLIP-Base for 300 epochs on one DGX-2 server, run:

python -m torch.distributed.launch --nproc_per_node=16 main_fd.py --cfg configs/pretrain/fd_pretrain__clip_vit_base__img224__300ep.yaml --batch-size 128 --data-path <imagenet-path>/train [--output <output-directory> --tag <job-tag>]

To reduce GPU memory consumption, add --use-checkpoint.

Fine-tuning distilled models

To fine-tune distilled models, run:

python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> main_finetune.py \
--cfg <config-file> --data-path <imagenet-path> --pretrained <pretrained-ckpt> [--batch-size <batch-size-per-gpu> --output <output-directory> --tag <job-tag>]

For example, to fine-tune Distilled-CLIP-Base on one DGX-2 server, run:

python -m torch.distributed.launch --nproc_per_node 16 main_finetune.py \
--cfg configs/finetune/fd_finetune__clip_vit_base__img224__300ep.yaml --batch-size 128 --data-path <imagenet-path> --pretrained <pretrained-ckpt> [--output <output-directory> --tag <job-tag>]
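The --pretrained flag points the fine-tuning script at a distilled checkpoint. A common loading pattern (sketched below in plain Python, with (shape, value) pairs standing in for tensors; this is illustrative, not the repo's actual loading code) is to copy only parameters whose name and shape match, leaving task-specific layers such as the classification head freshly initialized:

```python
def load_matching_params(model_state, checkpoint_state):
    """Copy checkpoint entries into model_state when name and shape agree.

    Entries are (shape, value) pairs standing in for tensors; mismatched or
    unknown keys (e.g. a resized classification head) are skipped and reported.
    """
    skipped = []
    for name, entry in checkpoint_state.items():
        if name in model_state and model_state[name][0] == entry[0]:
            model_state[name] = entry      # shapes agree: load the weight
        else:
            skipped.append(name)           # mismatch: keep fresh init
    return model_state, skipped

# Toy example: the backbone matches, the head was resized for a new task.
model = {"backbone.w": ((768, 768), "init"), "head.w": ((1000, 768), "init")}
ckpt = {"backbone.w": ((768, 768), "distilled"), "head.w": ((21841, 768), "distilled")}
model, skipped = load_matching_params(model, ckpt)
```

After loading, only the skipped parameters (here, head.w) train from scratch, which is the usual behavior when fine-tuning a distilled backbone on a new label set.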