
Fine-tuned CLIP models are efficient video learners [CVPR 2023]

Fine-tuned CLIP models are efficient video learners<br> Hanoona Rasheed*, Muhammad Uzair Khattak*, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan

*Equally contributing first authors

Website | Paper | Video | Slides | Jupyter Notebook

Official implementation of the paper "Fine-tuned CLIP models are efficient video learners".

<hr />

:rocket: News

<hr />

Highlights

main figure

<p align="justify"> This work explores the capability of a simple baseline called ViFi-CLIP (Video Fine-tuned CLIP) for adapting image pretrained CLIP to video domain. The figure compares the zero-shot performance of vanilla CLIP and several of its variants adapted for videos (trained on Kinetics-400, evaluated on UCF-101 and HMDB-51). The t-SNE visualizations of video-embeddings obtained from ViFi-CLIP (4th col.) are compared with embeddings from vanilla CLIP (1st col.), individually tuned CLIP text (2nd col.) and image encoder (3rd col.) on videos, and recent state-of-the-art work, XCLIP (last col.) (∆ represents difference over XCLIP). The embeddings of ViFi-CLIP are better separable, indicating that a simple fine-tuning of CLIP is sufficient to learn suitable video-specific inductive biases, and can perform competitive to more complex approaches having dedicated components designed to model temporal information in videos. </p>

<p align="justify"> Abstract: Large-scale multi-modal training with image-text pairs imparts strong generalization to CLIP model. Since training on a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships which require meticulous design efforts. Furthermore, when the resulting models are learned on videos , they tend to overfit on the given task distribution and lack in generalization aspect. This begs the following question: How to effectively transfer image-level CLIP representations to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that the frame-level processing from CLIP image-encoder followed by feature pooling and similarity matching with corresponding text embeddings helps in implicitly modeling the temporal cues within ViFi-CLIP. Such fine-tuning helps the model to focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a `bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on language and vision side to adapt CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot, base-to-novel generalization, few-shot and fully supervised settings across five video benchmarks. </p>

Main Contributions

  1. ViFi-CLIP: We formulate and show the significance of an often neglected but simple baseline for transferring an image-based CLIP model to the video domain. ViFi-CLIP (Video Fine-tuned CLIP) shows that simple fine-tuning of CLIP is sufficient to learn suitable video-specific inductive biases and can perform competitively with more complex approaches that have dedicated components designed to model temporal information in videos.

  2. Base-to-novel generalization benchmark: We introduce a base-to-novel generalization benchmark for the video domain to evaluate the generalization ability of models for video action recognition.

  3. Bridge and Prompt approach: We show the effectiveness of our proposed ‘bridge and prompt’ approach for low-data regimes, which first bridges the modality gap through fine-tuning and then learns prompts in both the vision and language branches of the CLIP model (see the sketch after this list).
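A rough sketch of the prompt-learning idea in contribution 3, showing the language side only (the vision branch prepends analogous learnable tokens to the patch/frame embeddings). The module name, dimensions, and initialization are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class LearnablePrompts(nn.Module):
    """Prepends learnable context vectors to frozen CLIP token embeddings."""
    def __init__(self, embed_dim=512, n_prompts=8):
        super().__init__()
        # learnable prompt vectors, trained while the CLIP backbone stays frozen
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)

    def forward(self, token_embeddings):
        # token_embeddings: (C, L, D) embedded class-name tokens from the frozen embedding layer
        c = token_embeddings.shape[0]
        ctx = self.prompts.unsqueeze(0).expand(c, -1, -1)   # (C, n_prompts, D)
        return torch.cat([ctx, token_embeddings], dim=1)    # (C, n_prompts + L, D)
```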

Model Zoo

NOTE: All models in the experiments below use the publicly available ViT-B/16 based CLIP model. The trained model weights for each experiment are provided in the tables below.
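For reference, the ViT-B/16 backbone can be loaded with OpenAI's `clip` package as shown below; this is only an illustration, since the repository loads the backbone through its own config files:

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)
```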

Zero-shot results

All models are trained on Kinetics-400 and then evaluated directly on downstream datasets.

| Name (configs) | Input | HMDB-51 | UCF-101 | Kinetics-600 | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 32x224 | 49.0 | 72.9 | 62.2 | link |
| CLIP text-FT | 32x224 | 48.5 | 69.8 | 68.5 | link |
| ViFi-CLIP | 32x224 | 51.3 | 76.8 | 71.2 | link |

Base-to-novel generalization results

Here, we divide each dataset into base and novel classes. All models are trained on the base classes and evaluated on both base and novel classes. Results are averaged over 3 seeds for each experiment. HM denotes the harmonic mean of the base and novel accuracies.
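The HM column can be reproduced with plain Python, for example:

```python
# HM = harmonic mean of base and novel accuracy
def harmonic_mean(base_acc, novel_acc):
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

print(round(harmonic_mean(76.4, 61.1), 1))  # 67.9, the ViFi-CLIP row in the Kinetics-400 table below
```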

Kinetics-400

| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 32x224 | 72.9 | 58.0 | 64.6 | seed1/seed2/seed3 |
| CLIP text-FT | 32x224 | 73.4 | 59.7 | 65.8 | seed1/seed2/seed3 |
| ViFi-CLIP | 32x224 | 76.4 | 61.1 | 67.9 | seed1/seed2/seed3 |

HMDB-51

| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 32x224 | 62.6 | 47.5 | 54.0 | seed1/seed2/seed3 |
| CLIP text-FT | 32x224 | 70.0 | 51.2 | 59.1 | seed1/seed2/seed3 |
| ViFi-CLIP | 32x224 | 73.8 | 53.3 | 61.9 | seed1/seed2/seed3 |

UCF-101

| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 32x224 | 86.4 | 65.3 | 74.4 | seed1/seed2/seed3 |
| CLIP text-FT | 32x224 | 90.9 | 67.4 | 77.4 | seed1/seed2/seed3 |
| ViFi-CLIP | 32x224 | 92.9 | 67.7 | 78.3 | seed1/seed2/seed3 |

SSv2

| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 32x224 | 9.2 | 8.5 | 8.8 | seed1/seed2/seed3 |
| CLIP text-FT | 32x224 | 12.4 | 9.5 | 10.8 | seed1/seed2/seed3 |
| ViFi-CLIP | 32x224 | 16.2 | 12.1 | 13.9 | seed1/seed2/seed3 |

VL Prompting approach: Base-to-Novel

ViFi-CLIP is first trained on K400 and then vision and language prompts are further fine-tuned on the downstream datasets.

| Dataset (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|---|---|---|---|---|
| HMDB-51 | 32x224 | 77.1 | 54.9 | 64.1 | seed1/seed2/seed3 |
| UCF-101 | 32x224 | 95.9 | 74.1 | 83.6 | seed1/seed2/seed3 |
| SSv2 | 32x224 | 15.8 | 11.5 | 13.3 | seed1/seed2/seed3 |

Few-shot results

The table below shows few-shot results of ViFi-CLIP for K = 2, 4, 8 and 16 shots.

| Name (configs) | Dataset | K (shots) | Input | Top-1 Acc. | Model |
|---|---|---|---|---|---|
| ViFi-CLIP | HMDB-51 | 2 | 32x224 | 57.2 | link |
| ViFi-CLIP | HMDB-51 | 4 | 32x224 | 62.7 | link |
| ViFi-CLIP | HMDB-51 | 8 | 32x224 | 64.5 | link |
| ViFi-CLIP | HMDB-51 | 16 | 32x224 | 66.8 | link |
| ViFi-CLIP | UCF-101 | 2 | 32x224 | 80.7 | link |
| ViFi-CLIP | UCF-101 | 4 | 32x224 | 85.1 | link |
| ViFi-CLIP | UCF-101 | 8 | 32x224 | 90.0 | link |
| ViFi-CLIP | UCF-101 | 16 | 32x224 | 92.7 | link |
| ViFi-CLIP | SSv2 | 2 | 32x224 | 6.2 | link |
| ViFi-CLIP | SSv2 | 4 | 32x224 | 7.4 | link |
| ViFi-CLIP | SSv2 | 8 | 32x224 | 8.5 | link |
| ViFi-CLIP | SSv2 | 16 | 32x224 | 12.4 | link |

NOTE: Few-shot results for other CLIP Fine-tuned variants are presented in our main paper (Table 3). Model weights for other variants are provided here.

VL Prompting approach: Few-shot

ViFi-CLIP is first trained on K400 and then vision and language prompts are further fine-tuned on the downstream datasets in a few-shot manner.

| Dataset (configs) | Input | K=2 | K=4 | K=8 | K=16 | Model |
|---|---|---|---|---|---|---|
| HMDB-51 | 32x224 | 63.0 | 65.1 | 69.6 | 72.0 | K=2/K=4/K=8/K=16 |
| UCF-101 | 32x224 | 91.0 | 93.7 | 95.0 | 96.4 | K=2/K=4/K=8/K=16 |
| SSv2 | 32x224 | 6.7 | 7.9 | 10.2 | 13.5 | K=2/K=4/K=8/K=16 |

Fully-supervised results on Kinetics-400

| Name (configs) | FLOPs (G) | Input | Top-1 Acc. | Top-5 Acc. | Model |
|---|---|---|---|---|---|
| CLIP image-FT | 281 | 16x224 | 82.8 | 96.2 | link |
| CLIP text-FT | 281 | 16x224 | 73.1 | 91.2 | link |
| ViFi-CLIP | 281 | 16x224 | 83.9 | 96.3 | link |

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Training

Config files for all experiments shown in the tables above are provided in the configs folder. For example, to train ViFi-CLIP (which tunes both the image and text encoders) on Kinetics-400, run the following command:

```
python -m torch.distributed.launch --nproc_per_node=8 \
main.py -cfg configs/fully_supervised/k400/16_16_vifi_clip.yaml --output /PATH/TO/OUTPUT
```

Note:

For detailed training instructions for all experimental setups, please refer to TRAIN.md.

Evaluating models

To evaluate a model, please use a suitable config and corresponding model weights. For example, to evaluate ViFi-CLIP with 16 frames on Kinetics-400, run the command below:

```
python -m torch.distributed.launch --nproc_per_node=8 main.py \
-cfg configs/fully_supervised/k400/16_16_vifi_clip.yaml --output /PATH/TO/OUTPUT \
--only_test --resume /PATH/TO/CKPT --opts TEST.NUM_CLIP 4 TEST.NUM_CROP 3
```

Here the `--opts TEST.NUM_CLIP 4 TEST.NUM_CROP 3` overrides request multi-view inference with 4 temporal clips and 3 spatial crops per video.

Contact

If you have any questions, please create an issue on this repository or contact us at uzair.khattak@mbzuai.ac.ae or hanoona.bangalath@mbzuai.ac.ae.

Citation

If you use our approach (code, model or dataset splits) in your research, please consider citing:

@inproceedings{hanoonavificlip,
    title={Finetuned CLIP models are efficient video learners},
    author={Rasheed, Hanoona and Khattak, Muhammad Uzair and Maaz, Muhammad and Khan, Salman and Khan, Fahad Shahbaz},
    booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}

Acknowledgements

Our code is based on XCLIP's repository. We sincerely thank the authors for releasing their code. If you use our model and code, please consider citing XCLIP as well.