TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding

Shuhuai Ren, Sishuo Chen, Shicheng Li, Xu Sun, Lu Hou


:rocket: News


Highlights

TESTA Visualization

Main Contributions

  1. We introduce an efficient method named TESTA (TEmporal-Spatial Token Aggregation) for long-form video understanding. TESTA progressively aggregates similar visual tokens during video encoding, which can reduce the number of visual tokens by 75% and thus accelerate video encoding.
  2. Building upon TESTA, we introduce a pre-trained video-language model equipped with a divided space-time token aggregation module in each video encoder block.
  3. Experimental results on five datasets for paragraph-to-video retrieval and long-form VideoQA show that TESTA improves computing efficiency by 1.7x and, thanks to its scalability to longer input frames, achieves significant performance gains, e.g., +13.7 R@1 on QuerYD and +6.5 R@1 on Condensed Movie.
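To illustrate the idea of aggregating similar visual tokens, here is a minimal, self-contained sketch of ToMe-style bipartite token merging (TESTA builds on ToMe; this is an illustration, not the actual TESTA implementation, and `aggregate_tokens` is a hypothetical name):

```python
import math

def aggregate_tokens(tokens, r):
    """Merge the r most similar token pairs (bipartite soft-matching sketch).

    tokens: list of equal-length feature vectors (lists of floats).
    Returns len(tokens) - r vectors; each merged pair is replaced by
    its element-wise mean. Illustrative only, not the TESTA code.
    """
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        nv = math.sqrt(sum(x * x for x in v)) or 1.0
        return dot / (nu * nv)

    # Split tokens alternately into two sets A and B (bipartite matching
    # keeps the pairing cheap: each A token considers only B tokens).
    a, b = tokens[::2], tokens[1::2]
    # For each token in A, find its most similar partner in B.
    matches = []
    for i, ta in enumerate(a):
        sims = [cos(ta, tb) for tb in b]
        j = max(range(len(b)), key=sims.__getitem__)
        matches.append((sims[j], i, j))
    # Merge only the r highest-similarity pairs; keep the rest of A.
    matches.sort(reverse=True)
    to_merge = {i: j for _, i, j in matches[:r]}
    out_b = [list(tb) for tb in b]
    kept_a = []
    for i, ta in enumerate(a):
        if i in to_merge:
            j = to_merge[i]
            out_b[j] = [(x + y) / 2 for x, y in zip(out_b[j], ta)]
        else:
            kept_a.append(list(ta))
    return kept_a + out_b
```

Applying such a step in each encoder block progressively shrinks the token count, which is how a cumulative reduction like 75% becomes possible.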

TESTA Arch

Currently, the repository contains the code for pre-training a general-purpose video-language model and fine-tuning it on downstream video understanding tasks including video-paragraph retrieval and VideoQA.

Installation

To install the dependencies, run

# create 
conda env create -f environment.yml
# activate
conda activate testa

Data preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Models

Pre-trained model

Zero-shot performance on paragraph-to-video retrieval:

| Model | frames | QuerYD R@1 | DiDeMo R@1 | ActivityNet Caption R@1 | GFLOPs | Checkpoint |
|---|---|---|---|---|---|---|
| TESTA-base (ViT-B/16) | 32 | 64.4 | 64.9 | 37.1 | 786 | testa_model_base_pretrain.pth |

Fine-tuned model

QuerYD paragraph-to-video retrieval

| Model | frames | R@1 | R@5 | R@10 | GFLOPs | Checkpoint |
|---|---|---|---|---|---|---|
| TESTA-base (ViT-B/16) | 32 | 77.0 | 90.8 | 92.6 | 420 | testa_model_base_queryd_f32_f1p12.pth |

ActivityNet paragraph-to-video retrieval

| Model | frames | R@1 | R@5 | R@10 | GFLOPs | Checkpoint |
|---|---|---|---|---|---|---|
| TESTA-base (ViT-B/16) | 32 | 51.6 | 79.1 | 88.3 | 420 | testa_model_base_anet_f32_f1p12.pth |

DiDeMo paragraph-to-video retrieval

| Model | frames | R@1 | R@5 | R@10 | GFLOPs | Checkpoint |
|---|---|---|---|---|---|---|
| TESTA-base (ViT-B/16) | 32 | 57.7 | 83.3 | 89.4 | 420 | testa_model_base_didemo_f32_f1p12.pth |

CondensedMovie paragraph-to-video retrieval

| Model | frames | R@1 | R@5 | R@10 | GFLOPs | Checkpoint |
|---|---|---|---|---|---|---|
| TESTA-base (ViT-B/16) | 32 | 21.5 | 42.4 | 50.7 | 420 | testa_model_base_cm_f32_f1p12.pth |
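The retrieval tables above report Recall@K (R@1/R@5/R@10): the percentage of queries whose ground-truth video ranks in the top K. As a minimal sketch (not the evaluation code in this repo; `recall_at_k` is a hypothetical helper), the metric can be computed from a text-video similarity matrix like this:

```python
def recall_at_k(sim, ks=(1, 5, 10)):
    """sim[i][j]: similarity of query paragraph i to video j.

    Assumes the ground-truth video for query i is index i.
    Returns {"R@k": percentage} for each k in ks.
    """
    n = len(sim)
    # Rank of the ground-truth video = number of videos scored higher.
    gt_ranks = [sum(s > row[i] for s in row) for i, row in enumerate(sim)]
    return {f"R@{k}": 100.0 * sum(r < k for r in gt_ranks) / n for k in ks}
```

For example, a 2-query matrix where both queries score their own video highest yields R@1 = 100.0.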

Training and Evaluation

Please refer to RUN.md for detailed instructions on training, evaluation, and reproducing the results.

Todo list

Contact

If you have any questions, please feel free to create an issue on this repository.

Citation

If you find this code useful for your research, please consider citing:

@article{Ren2023TESTA,
  title={TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding},
  author={Shuhuai Ren and Sishuo Chen and Shicheng Li and Xu Sun and Lu Hou},
  journal={ArXiv},
  year={2023},
  volume={abs/2310.19060},
}

Acknowledgement

The codebase relies on resources from BLIP, ToMe, and TimeSformer. We thank the original authors for open-sourcing their work.