
<div align="center"> <h2 align="center"> <a href="https://arxiv.org/abs/2307.08908">【ICCV'2023】What Can Simple Arithmetic Operations Do for Temporal Modeling?</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>


Wenhao Wu<sup>1,2</sup>, Yuxin Song<sup>2</sup>, Zhun Sun<sup>2</sup>, Jingdong Wang<sup>3</sup>, Chang Xu<sup>1</sup>, Wanli Ouyang<sup>3,1</sup>

<sup>1</sup>The University of Sydney, <sup>2</sup>Baidu, <sup>3</sup>Shanghai AI Lab

</div>


This is the official implementation of our ATM (Arithmetic Temporal Module), which explores the potential of four simple arithmetic operations (addition, subtraction, multiplication, and division) for temporal modeling.

Our best model achieves 89.4% Top-1 accuracy on Kinetics-400, 65.6% on Something-Something V1, and 74.6% on Something-Something V2!

<details open><summary>🔥 I also have other recent video recognition projects that may interest you ✨.</summary><p>

Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning<br> Huanjin Yao, Wenhao Wu, Zhiheng Li

Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models<br> Wenhao Wu, Xiaohan Wang, Haipeng Luo, Jingdong Wang, Yi Yang, Wanli Ouyang

Revisiting Classifier: Transferring Vision-Language Models for Video Recognition<br> Wenhao Wu, Zhun Sun, Wanli Ouyang

</p></details>

## 📣 News

- [ ] `TODO`: All models will be released.

## 🌈 Overview

The key motivation behind ATM is to explore the potential of simple arithmetic operations to capture auxiliary temporal clues that may be embedded in current video features, without relying on elaborate designs. ATM can be integrated into both vanilla CNN backbones (e.g., ResNet) and Vision Transformers (e.g., ViT) for video action recognition.
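To make the idea concrete, below is a minimal, hypothetical sketch (not the official ATM implementation) of how the four arithmetic operations between neighboring frame features can inject temporal cues into per-frame features. The module name, pairing scheme, and residual fusion here are illustrative assumptions; see the paper and this repository's code for the actual operator design.

```python
import torch
import torch.nn as nn

class ToyArithmeticTemporal(nn.Module):
    """Toy module: fuses cues from four arithmetic ops into frame features."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        # Project the concatenated arithmetic cues back to the feature width.
        self.proj = nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) per-frame features from an image backbone.
        # Pair each frame with its successor; the last frame pairs with itself.
        nxt = torch.cat([x[:, 1:], x[:, -1:]], dim=1)
        cues = torch.cat(
            [
                nxt + x,               # addition
                nxt - x,               # subtraction
                nxt * x,               # multiplication
                nxt / (x + self.eps),  # division (naively stabilized)
            ],
            dim=-1,
        )
        return x + self.proj(cues)     # residual fusion of temporal cues

feats = torch.randn(2, 8, 512)                  # 2 clips, 8 frames, 512-dim
print(ToyArithmeticTemporal(512)(feats).shape)  # torch.Size([2, 8, 512])
```

Because the module keeps the input shape, it can be dropped between blocks of a frame-wise backbone without changing the rest of the network.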

## 🚀 Training & Testing

We provide training and testing scripts for Kinetics-400, Sth-Sth V1, and Sth-Sth V2. Please refer to the `scripts` folder for details. For example, you can run:

```sh
# Train the 8-frame ViT-B/32 model on Sth-Sth V1.
sh scripts/ssv1/train_base.sh

# Test the 8-frame ViT-B/32 model on Sth-Sth V1.
sh scripts/ssv1/test_base_f8.sh
```

<a name="bibtex"></a>

## 📌 BibTeX & Citation

If you use our code in your research or wish to refer to the baseline results, please use the following BibTeX entry 😁.

```bibtex
@inproceedings{atm,
  title={What Can Simple Arithmetic Operations Do for Temporal Modeling?},
  author={Wu, Wenhao and Song, Yuxin and Sun, Zhun and Wang, Jingdong and Xu, Chang and Ouyang, Wanli},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023}
}
```

<a name="acknowledgment"></a>

πŸŽ—οΈ Acknowledgement

This repository is built upon portions of VideoMAE, CLIP, and EVA. Thanks to the contributors of these great codebases.

## 👫 Contact

For any questions, please file an issue or contact Wenhao Wu.