
This is the official PyTorch implementation of EZ-CLIP: Efficient Zero-Shot Video Action Recognition [arXiv].

Updates

Overview

Figure: Overview of EZ-CLIP.

Introduction

In this study, we present EZ-CLIP, a simple and efficient adaptation of CLIP that addresses the challenges of transferring image-based CLIP to video. EZ-CLIP leverages temporal visual prompting for seamless temporal adaptation, requiring no fundamental alterations to the core CLIP architecture while preserving its remarkable generalization abilities. Moreover, we introduce a novel learning objective that guides the temporal visual prompts to focus on capturing motion, thereby enhancing the model's ability to learn from video data.
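
To make this concrete, below is a minimal, self-contained PyTorch sketch of per-frame visual prompting together with one plausible motion-oriented objective. It is illustrative only and is not the repository's implementation; the names TemporalPrompts and motion_loss, and the exact form of the loss, are assumptions.

```python
# Illustrative sketch only -- not the repository's implementation.
import torch
import torch.nn as nn


class TemporalPrompts(nn.Module):
    """Learnable prompt tokens, one small set per frame, prepended to the patch tokens."""

    def __init__(self, num_frames: int, num_prompts: int, dim: int):
        super().__init__()
        # (T, P, D): a few learnable tokens for each of the T frames.
        self.prompts = nn.Parameter(torch.randn(num_frames, num_prompts, dim) * 0.02)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, T, N, D) -> (B, T, P + N, D)
        b = patch_tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1, -1)
        return torch.cat([prompts, patch_tokens], dim=2)


def motion_loss(frame_features: torch.Tensor) -> torch.Tensor:
    """Reward temporal variation in per-frame features of shape (B, T, D).

    This is one plausible way to bias the prompts toward motion; the paper's
    actual objective may be formulated differently.
    """
    diff = frame_features[:, 1:] - frame_features[:, :-1]  # (B, T-1, D)
    return -diff.norm(dim=-1).mean()
```

In this sketch the prompts are the only new trainable parameters, which is what keeps the adaptation lightweight while the CLIP backbone itself can remain frozen.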

Content

Prerequisites

We provide a requirements.txt listing the required libraries. You can set up the environment by running pip install -r requirements.txt (for example, inside a fresh conda environment).

Model Zoo

NOTE: All models in the experiments below use the publicly available ViT-B/16 based CLIP model.
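
For reference, this backbone can be loaded with OpenAI's publicly released clip package; the snippet below is only a loading example and is not taken from this repository.

```python
# Load the publicly available ViT-B/16 CLIP backbone via the OpenAI `clip` package.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)
```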

Zero-shot results

All models are trained on Kinetics-400 and then evaluated directly on downstream datasets.

| Model | Input | HMDB-51 | UCF-101 | Kinetics-600 | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| EZ-CLIP (ViT-16) | 8x224 | 52.9 | 79.1 | 70.1 | link |

Base-to-novel generalization results

Here, we divide each dataset into base and novel classes. All models are trained on the base classes and evaluated on both base and novel classes. HM denotes the harmonic mean of the base and novel accuracies (e.g., for K-400: 2 x 73.1 x 60.6 / (73.1 + 60.6) ≈ 66.3).

| Dataset | Input | Base Acc. | Novel Acc. | HM | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| K-400 | 8x224 | 73.1 | 60.6 | 66.3 | link |
| HMDB-51 | 8x224 | 77.0 | 58.2 | 66.3 | link |
| UCF-101 | 8x224 | 94.4 | 77.9 | 85.4 | link |
| SSV2 | 8x224 | 16.6 | 13.3 | 14.8 | link |

Data Preparation

We first need to extract videos into frames for fast reading. Please refer to 'Dataset_creation_scripts' for data pre-processing; a minimal frame-extraction sketch is also shown below. We have successfully trained on Kinetics, UCF-101, and HMDB-51.
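
As a rough illustration, the sketch below dumps a video into per-frame JPEGs with OpenCV; the actual scripts in 'Dataset_creation_scripts' may use different paths, file naming, and options.

```python
# Illustrative frame extraction with OpenCV -- directory layout and file
# naming are assumptions, not the repository's convention.
import cv2
from pathlib import Path


def extract_frames(video_path: str, out_dir: str) -> int:
    """Decode a video and save every frame as a JPEG for fast reading."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(str(out / f"img_{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx


if __name__ == "__main__":
    n = extract_frames("videos/example.mp4", "frames/example")
    print(f"saved {n} frames")
```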

Training

# Train
python train.py --config configs/K-400/k400_train.yaml

Testing

# Test 
python test.py --config configs/ucf101/UCF_zero_shot_testing.yaml

Citation

If you find the code and pre-trained models useful for your research, please consider citing our paper:

@article{ez2022clip,
  title={EZ-CLIP: Efficient Zeroshot Video Action Recognition},
  author={Shahzad Ahmad and Sukalpa Chanda and Yogesh S Rawat},
  journal={arXiv preprint arXiv:2312.08010},
  year={2024}
}

Acknowledgments

Our code is based on ActionCLIP.