<div align="center">

<h2 align="center"><a href="https://arxiv.org/abs/2307.08908">【ICCV'2023】What Can Simple Arithmetic Operations Do for Temporal Modeling?</a></h2>

<h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>

Wenhao Wu<sup>1,2</sup>, Yuxin Song<sup>2</sup>, Zhun Sun<sup>2</sup>, Jingdong Wang<sup>3</sup>, Chang Xu<sup>1</sup>, Wanli Ouyang<sup>3,1</sup>
<sup>1</sup>The University of Sydney, <sup>2</sup>Baidu, <sup>3</sup>Shanghai AI Lab
</div>

This is the official implementation of our ATM (Arithmetic Temporal Module), which explores the potential of four simple arithmetic operations for temporal modeling.
Our best model can achieve 89.4% Top-1 Acc. on Kinetics-400, 65.6% Top-1 Acc. on Something-Something V1, 74.6% Top-1 Acc. on Something-Something V2!
<details open><summary>🔥 I also have other recent video recognition projects that may interest you ✨.</summary><p>

Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning<br>
Huanjin Yao, Wenhao Wu, Zhiheng Li<br>
Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models<br> Wenhao Wu, Xiaohan Wang, Haipeng Luo, Jingdong Wang, Yi Yang, Wanli Ouyang <br>
Revisiting Classifier: Transferring Vision-Language Models for Video Recognition<br>
Wenhao Wu, Zhun Sun, Wanli Ouyang<br>
</p></details>

<!-- ## Content
- [Content](#content)
- [📣 News](#-news)
- [📖 Overview](#-overview)
- [📌 BibTeX \& Citation](#-bibtex--citation)
- [🎗️ Acknowledgement](#️-acknowledgement)
- [📫 Contact](#-contact) -->
## 📣 News
<!-- - [ ] `TODO`: All models will be released. -->

- Nov 29, 2023: Training code has been released.
- July 14, 2023: 🎉 Our ATM has been accepted by ICCV 2023.
## 📖 Overview
The key motivation behind ATM is to explore the potential of simple arithmetic operations to capture auxiliary temporal clues that may be embedded in current video features, without relying on elaborate designs. The ATM can be integrated into both vanilla CNN backbones (e.g., ResNet) and Vision Transformers (e.g., ViT) for video action recognition.
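To make the idea concrete, below is a minimal PyTorch sketch of what such an arithmetic temporal block could look like. It is an illustration only, not the repository's actual ATM implementation: the module name `ArithmeticTemporalBlock`, the choice of subtraction and multiplication between neighboring frames, and the (B, T, C) feature layout are assumptions made for this example.

```python
# Illustrative sketch only -- NOT the official ATM implementation.
# Assumed: per-frame features of shape (B, T, C); the operations (subtraction,
# multiplication) and the fusion layer are choices made for this example.
import torch
import torch.nn as nn


class ArithmeticTemporalBlock(nn.Module):
    """Mix each frame's features with its next frame using simple arithmetic,
    then fuse the resulting temporal clues back into the original features."""

    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)  # lightweight fusion of the two clues

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C) frame-level features from a CNN or ViT backbone.
        x_next = torch.roll(x, shifts=-1, dims=1)  # features of the following frame
        diff = x_next - x                          # subtraction: a motion-like clue
        prod = x_next * x                          # multiplication: a correlation-like clue
        clues = torch.cat([diff, prod], dim=-1)    # (B, T, 2C)
        return x + self.fuse(clues)                # residual fusion, shape stays (B, T, C)


if __name__ == "__main__":
    feats = torch.randn(2, 8, 512)                 # 2 clips, 8 frames, 512-d features
    block = ArithmeticTemporalBlock(512)
    print(block(feats).shape)                      # torch.Size([2, 8, 512])
```

Because the output keeps the input shape, a block like this can be dropped between existing backbone stages without touching the rest of the network, which reflects the plug-in spirit described above.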
## 🚀 Training & Testing
We offer training and testing scripts for Kinetics-400, Sth-Sth V1, and Sth-Sth V2. Please refer to the scripts folder for details. For example, you can run:
# Train the 8 Frames ViT-B/32 model on Sth-Sth v1.
sh scripts/ssv1/train_base.sh
# Test the 8 Frames ViT-B/32 model on Sth-Sth v1.
sh scripts/ssv1/test_base_f8.sh
<a name="bibtex"></a>
## 📌 BibTeX & Citation
If you use our code in your research or wish to refer to the baseline results, please use the following BibTeX entry.
@inproceedings{atm,
title={What Can Simple Arithmetic Operations Do for Temporal Modeling?},
author={Wu, Wenhao and Song, Yuxin and Sun, Zhun and Wang, Jingdong and Xu, Chang and Ouyang, Wanli},
booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
year={2023}
}
<a name="acknowledgment"></a>
## 🎗️ Acknowledgement
This repository is built upon portions of VideoMAE, CLIP, and EVA. Thanks to the contributors of these great codebases.
## 📫 Contact
For any questions, please file an issue or contact Wenhao Wu.