# Frame Flexible Network (CVPR 2023)

<div align="left"> <a><img src="fig/smile.png" height="70px" ></a> <a><img src="fig/neu.png" height="70px" ></a> <a><img src="fig/uva.png" height="70px" ></a> </div>

arXiv | Primary contact: Yitian Zhang

```bibtex
@article{zhang2023frame,
  title={Frame Flexible Network},
  author={Zhang, Yitian and Bai, Yue and Liu, Chang and Wang, Huan and Li, Sheng and Fu, Yun},
  journal={arXiv preprint arXiv:2303.14817},
  year={2023}
}
```
<div align="center"> <img src="fig/deviation.png" width="750px" height="450px"> </div>

## TL;DR

We propose Frame Flexible Network (FFN), a general framework that trains a single video recognition model which can be evaluated at different frame counts, outperforming Separated Training (ST) at all frames with significantly fewer parameters.

## Datasets

Please follow the instructions of TSM to prepare the Something-Something V1/V2, Kinetics400, and HMDB51 datasets.
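
For reference, TSM-style annotation lists typically describe one video per line as `path num_frames label`. Below is a minimal sketch for generating such a list; the directory layout, the `build_file_list` helper, and the label map are illustrative assumptions, not part of this repo, so follow the TSM instructions for the authoritative format.

```python
# Hypothetical sketch: write a TSM-style annotation list, one video per line as
# "<relative_path> <num_frames> <label_id>". The directory layout and label map
# here are assumptions; see the TSM repo for the authoritative format.
import os

def build_file_list(frames_root, label_map, output_path):
    with open(output_path, "w") as f:
        for class_name, label_id in sorted(label_map.items()):
            class_dir = os.path.join(frames_root, class_name)
            if not os.path.isdir(class_dir):
                continue
            for video in sorted(os.listdir(class_dir)):
                # Count the extracted RGB frames for this video.
                num_frames = len(os.listdir(os.path.join(class_dir, video)))
                f.write(f"{class_name}/{video} {num_frames} {label_id}\n")

# Example (hypothetical paths):
# build_file_list("data/sthv1/frames", {"class_a": 0, "class_b": 1}, "train_list.txt")
```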

## Supported Models

FFN is a general framework that can be readily applied to existing methods for stronger performance and higher flexibility during inference. Currently, FFN supports implementations of 2D networks (TSM, TEA), a 3D network (SlowFast), and a Transformer network (Uniformer). Please feel free to contact us if you would like to contribute implementations of other methods.
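
To give a flavor of how one backbone can serve multiple frame counts, here is a conceptual PyTorch sketch: convolution weights are shared across temporal frequencies while each frame count keeps its own BatchNorm statistics. This illustrates the general idea only and is not the repository's actual implementation; `FrameSpecificBN` is a hypothetical module.

```python
# Conceptual sketch (not this repository's code): share convolutional weights
# across inputs with different frame counts, but keep a separate BatchNorm per
# frame count so each temporal frequency gets its own statistics.
import torch
import torch.nn as nn

class FrameSpecificBN(nn.Module):
    def __init__(self, channels, frame_counts):
        super().__init__()
        # One BatchNorm per supported frame count, e.g. {4, 8, 16}.
        self.bns = nn.ModuleDict({str(t): nn.BatchNorm3d(channels) for t in frame_counts})

    def forward(self, x):
        # x: (batch, channels, frames, height, width); dispatch on frame count.
        return self.bns[str(x.shape[2])](x)

shared_conv = nn.Conv3d(3, 64, kernel_size=3, padding=1)  # weights shared across frame counts
bn = FrameSpecificBN(64, frame_counts=(4, 8, 16))
for t in (4, 8, 16):
    clip = torch.randn(2, 3, t, 32, 32)
    out = bn(shared_conv(clip))  # same conv weights, frame-count-specific BN
```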

## Results

<div align="center"> <img src="fig/architecture.png" width="800px" height="211px"> </div>

FFN clearly outperforms Separated Training (ST) at all frame counts across different architectures, with significantly fewer parameters, on the Something-Something V1 dataset.

Here we provide the pretrained models for all of these architectures:

| Model | Top-1 Acc. ($v_{L}$) | Top-1 Acc. ($v_{M}$) | Top-1 Acc. ($v_{H}$) | Weight |
| :---- | :------------------: | :------------------: | :------------------: | :----: |
| TSM | 20.60% | 37.36% | 48.55% | link |
| TSM-ST | 39.71% | 45.63% | 48.55% | - |
| TSM-FFN | 42.85% | 48.20% | 50.79% | link |
| TEA | 21.78% | 41.49% | 51.23% | link |
| TEA-ST | 41.36% | 48.37% | 51.23% | - |
| TEA-FFN | 44.97% | 51.61% | 54.04% | link |
| SlowFast | 15.08% | 35.08% | 45.88% | link |
| SlowFast-ST | 39.91% | 44.12% | 45.88% | - |
| SlowFast-FFN | 43.90% | 47.11% | 47.27% | link |
| Uniformer | 22.38% | 47.98% | 56.71% | link |
| Uniformer-ST | 44.33% | 51.49% | 56.71% | - |
| Uniformer-FFN | 51.41% | 56.64% | 58.88% | link |
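
A minimal loading sketch for these checkpoints is below; the filename and the `state_dict` key are assumptions, and the released weights may be organized differently.

```python
# Hypothetical loading sketch: the filename and the "state_dict" key are
# assumptions, not this repository's documented checkpoint layout.
import torch

checkpoint = torch.load("tsm_ffn_sthv1.pth", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
# Construct the matching architecture first, then:
# model.load_state_dict(state_dict)
```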
<div align="center"> <img src="fig/dataset.png" width="800px" height="222px"> </div>

FFN clearly outperforms Separated Training (ST) at all frame counts on different datasets, with significantly fewer parameters.

Here we provide the pretrained models on Something-Something V2:

| Model | Parameters | Top-1 Acc. ($v_{L}$) | Top-1 Acc. ($v_{M}$) | Top-1 Acc. ($v_{H}$) | Weight |
| :---- | :--------: | :------------------: | :------------------: | :------------------: | :----: |
| TSM | 25.6M | 31.52% | 51.55% | 61.02% | link |
| TSM-ST | 25.6M×3 | 53.38% | 59.29% | 61.02% | - |
| TSM-FFN | 25.7M | 56.07% | 61.86% | 63.61% | link |

and Kinetics400:

| Model | Parameters | Top-1 Acc. ($v_{L}$) | Top-1 Acc. ($v_{M}$) | Top-1 Acc. ($v_{H}$) | Weight |
| :---- | :--------: | :------------------: | :------------------: | :------------------: | :----: |
| TSM | 25.6M | 64.10% | 69.77% | 73.16% | link |
| TSM-ST | 25.6M×3 | 66.25% | 70.38% | 73.16% | - |
| TSM-FFN | 25.7M | 68.96% | 72.33% | 74.35% | link |
<div align="center"> <img src="fig/any_frame.png" width="450px" height="323px"> </div>

FFN can be evaluated at any frame count and outperforms Separated Training (ST) even at frame counts not used during training.
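
One common way to form an input at an arbitrary frame count is uniform temporal sampling, sketched below; this sampling strategy is a standard convention and is assumed here rather than taken from this repository.

```python
# Sketch of uniform temporal sampling to evaluate at an arbitrary frame count t.
# The strategy is a common convention, assumed rather than taken from this repo.
import torch

def uniform_sample(frames, t):
    """frames: (total_frames, C, H, W) -> (t, C, H, W), sampled uniformly in time."""
    total = frames.shape[0]
    indices = torch.linspace(0, total - 1, steps=t).round().long()
    return frames[indices]

video = torch.randn(64, 3, 224, 224)  # 64 decoded frames
clip = uniform_sample(video, t=10)    # evaluate at 10 frames, even if unseen in training
```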

## Get Started

We provide a comprehensive codebase for video recognition that contains implementations of 2D, 3D, and Transformer networks. Please refer to the corresponding folders for model-specific documentation.

## Acknowledgment

Our codebase is heavily built upon TSM, SlowFast, and Uniformer. We sincerely thank the authors for their wonderful works. The README file format is heavily based on the GitHub repos of my colleagues Huan Wang, Xu Ma, and Yizhou Wang. Great thanks to them! We also greatly thank the anonymous CVPR'23 reviewers for their constructive comments, which helped us improve the paper.