TAM: Temporal Adaptive Module for Video Recognition [arXiv]

```bibtex
@inproceedings{liu2021tam,
  title={TAM: Temporal adaptive module for video recognition},
  author={Liu, Zhaoyang and Wang, Limin and Wu, Wayne and Qian, Chen and Lu, Tong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13708--13718},
  year={2021}
}
```

[NEW!] 2021/07/23 - Our paper has been accepted by ICCV 2021. More pretrained models will be released soon for research purposes. Welcome to follow our work!

[NEW!] 2021/06/01 - Our temporal adaptive module has been integrated into MMAction2! We are glad to see that TAM achieves higher accuracy with MMAction2 on several datasets.

[NEW!] 2020/10/10 - We have released the code of TAM for research purposes.

Overview

We release the PyTorch code of the Temporal Adaptive Module.

<div align="center"> <img src="./visualization/full_arch.png" width = "600" alt="Architecture" align=center /> <br> <div style="color:orange; border-bottom: 2px solid #d9d9d9; display: inline-block; color: #999; padding: 10px;"> The overall architecture of TANet: ResNet-Block vs. TA-Block. </div> </div>
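To give a feel for what a TA-Block adds over a plain ResNet block, below is a minimal PyTorch sketch of a two-branch temporal adaptive module in the spirit of the paper: a local branch producing location-sensitive temporal attention and a global branch generating a per-channel adaptive kernel that is applied as a depthwise temporal convolution. Layer sizes, names, and other details are illustrative assumptions, not the repository's exact implementation.

```python
# Minimal sketch of a two-branch temporal adaptive module (illustrative,
# not the repository's exact code). Input/output shape: (N*T, C, H, W),
# the layout produced by TSN-style backbones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAdaptiveModuleSketch(nn.Module):
    def __init__(self, channels, n_segment, kernel_size=3, reduction=4):
        super().__init__()
        self.n_segment = n_segment
        self.kernel_size = kernel_size
        # Local branch: temporal convs -> location-sensitive attention weights.
        self.local = nn.Sequential(
            nn.Conv1d(channels, channels // reduction, 3, padding=1, bias=False),
            nn.BatchNorm1d(channels // reduction),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // reduction, channels, 3, padding=1, bias=False),
            nn.Sigmoid(),
        )
        # Global branch: FC layers -> per-channel adaptive temporal kernel.
        self.global_branch = nn.Sequential(
            nn.Linear(n_segment, n_segment * 2, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(n_segment * 2, kernel_size, bias=False),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):
        nt, c, h, w = x.shape
        n = nt // self.n_segment
        x = x.view(n, self.n_segment, c, h, w).permute(0, 2, 1, 3, 4)  # (N, C, T, H, W)
        desc = x.mean(dim=(-1, -2))                                    # (N, C, T)
        # Local branch: rescale features with temporal attention.
        attn = self.local(desc).unsqueeze(-1).unsqueeze(-1)            # (N, C, T, 1, 1)
        x = x * attn
        # Global branch: adaptive kernel applied as a depthwise temporal conv.
        kernel = self.global_branch(desc.reshape(n * c, self.n_segment))
        kernel = kernel.view(n * c, 1, self.kernel_size, 1, 1)
        x = x.reshape(1, n * c, self.n_segment, h, w)
        x = F.conv3d(x, kernel, padding=(self.kernel_size // 2, 0, 0), groups=n * c)
        x = x.view(n, c, self.n_segment, h, w).permute(0, 2, 1, 3, 4)
        return x.reshape(nt, c, h, w)

# e.g.: TemporalAdaptiveModuleSketch(64, n_segment=8)(torch.randn(16, 64, 56, 56))
```

In the full TANet, such a module is embedded inside the residual block, turning the ResNet-Block into the TA-Block pictured above.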

Content

Prerequisites

The code is built with the following libraries:

Data Preparation

Following the TSN and TSM repos, we provide a set of tools (vidtools) to extract frames from videos.

For convenience, the processing of video data can be summarized as follows:
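Roughly speaking, each video is decoded into RGB frames, and list files are then generated that map each frame directory to its frame count and label. The Python sketch below illustrates this flow; the ffmpeg invocation and the TSN-style "frame_dir num_frames label" list format are assumptions for illustration, not vidtools' exact interface.

```python
# Rough illustration of the typical preprocessing flow (dump RGB frames,
# then write a TSN-style list file). Use the repo's vidtools scripts for
# the actual pipeline; the commands and formats here are assumptions.
import os
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str) -> int:
    """Dump all frames of one video as JPEGs and return the frame count."""
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-loglevel", "error", "-i", video_path,
         os.path.join(out_dir, "img_%05d.jpg")],
        check=True,
    )
    return len(list(Path(out_dir).glob("img_*.jpg")))

def write_list_file(entries, list_path):
    """entries: iterable of (frame_dir, num_frames, label) tuples."""
    with open(list_path, "w") as f:
        for frame_dir, num_frames, label in entries:
            f.write(f"{frame_dir} {num_frames} {label}\n")
```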

Model ZOO

Here we provide some off-the-shelf pretrained models. The accuracy may differ slightly from the paper, since the raw Kinetics videos downloaded by different users can vary.

| Models  | Datasets     | Resolution | Frames * Crops * Clips | Top-1 | Top-5 | Checkpoints |
| ------- | ------------ | ---------- | ---------------------- | ----- | ----- | ----------- |
| TAM-R50 | Kinetics-400 | 256 * 256  | 8 * 3 * 10             | 76.1% | 92.3% | ckpt        |
| TAM-R50 | Kinetics-400 | 256 * 256  | 16 * 3 * 10            | 76.9% | 92.9% | ckpt        |
| TAM-R50 | Sth-Sth v1   | 224 * 224  | 8 * 1 * 1              | 46.5% | 75.8% | ckpt        |
| TAM-R50 | Sth-Sth v1   | 224 * 224  | 16 * 1 * 1             | 47.6% | 77.7% | ckpt        |
| TAM-R50 | Sth-Sth v2   | 256 * 256  | 8 * 3 * 2              | 62.7% | 88.0% | ckpt        |
| TAM-R50 | Sth-Sth v2   | 256 * 256  | 16 * 3 * 2             | 64.6% | 89.5% | ckpt        |

After downloading the checkpoints and placing them in the target path, you can test TAM with these pretrained weights.
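Before launching the test scripts, it can help to sanity-check a downloaded checkpoint. The snippet below is a generic PyTorch check; the `state_dict` key is an assumption based on the usual TSN-style checkpoint layout, not a documented guarantee of this repo.

```python
# Generic sanity check for a downloaded checkpoint. Assumes (but does not
# require) a TSN-style dict with a 'state_dict' entry.
import torch

ckpt_path = "./checkpoints/kinetics_RGB_resnet50_tam_avg_segment8_e100_dense/ckpt.best.pth.tar"
ckpt = torch.load(ckpt_path, map_location="cpu")
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} entries in the state dict")
```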

Testing

For example, to test the downloaded pretrained models on Kinetics, you can run scripts/test_tam_kinetics_rgb_8f.sh. The script tests TAM with the 8-frame setting:

```bash
# test TAM on Kinetics-400
python -u test_models.py kinetics \
    --weights=./checkpoints/kinetics_RGB_resnet50_tam_avg_segment8_e100_dense/ckpt.best.pth.tar \
    --test_segments=8 --test_crops=3 \
    --full_res --sample dense-10 --batch_size 8
```

Note that --sample determines the sampling strategy at test time: --sample uniform-N takes N clips uniformly sampled from the video as input, while --sample dense-N takes N densely sampled clips.
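To make the two strategies concrete, here is a small, hedged Python sketch of how uniform and dense clip indices could be generated; it illustrates the idea only and is not the repository's sampling code (the stride used for dense sampling, for instance, is an assumed parameter).

```python
# Illustrative only: generate frame indices for "uniform-N" and "dense-N"
# style test-time sampling. Not the repository's implementation.
import numpy as np

def uniform_clips(num_frames, num_segments, num_clips):
    """N clips whose num_segments frames are spread evenly over the whole video."""
    tick = num_frames / float(num_segments)
    clips = []
    for k in range(num_clips):
        # shift each clip inside its segment so the clips differ slightly
        offsets = np.array([int(tick * i + tick / (num_clips + 1) * (k + 1))
                            for i in range(num_segments)])
        clips.append(np.clip(offsets, 0, num_frames - 1))
    return clips

def dense_clips(num_frames, num_segments, num_clips, stride=8):
    """N clips of num_segments stride-spaced consecutive frames."""
    span = num_segments * stride
    starts = np.linspace(0, max(num_frames - span, 0), num_clips, dtype=int)
    return [np.clip(start + stride * np.arange(num_segments), 0, num_frames - 1)
            for start in starts]

# e.g. uniform_clips(300, 8, 2) vs. dense_clips(300, 8, 10)
```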

You can also test TAM on Something-Something V2 by running scripts/test_tam_somethingv2_rgb_8f.sh:

```bash
# test TAM on Something-Something V2
python -u test_models.py somethingv2 \
    --weights=./checkpoints/something_RGB_resnet50_tam_avg_segment8_e50/ckpt.best.pth.tar \
    --test_segments=8 --test_crops=3 \
    --full_res --sample uniform-2 --batch_size 32
```

Training

We provide several scripts for training TAM in this repo: