ActionFormer: Localizing Moments of Actions with Transformers

Introduction

This code repo implements ActionFormer, one of the first Transformer-based models for temporal action localization: detecting the onsets and offsets of action instances and recognizing their action categories. Without bells and whistles, ActionFormer achieves 71.0% mAP at tIoU=0.5 on THUMOS14, outperforming the best prior model by 14.1 absolute percentage points and crossing the 60% mAP mark for the first time. Further, ActionFormer demonstrates strong results on ActivityNet 1.3 (36.56% average mAP) and the more challenging EPIC-Kitchens 100 (+13.5% average mAP over prior works). Our paper was accepted to ECCV 2022; an arXiv version can be found at this link.

In addition, ActionFormer is the backbone of many winning solutions in the Ego4D Moment Queries Challenge 2022. Our submission in particular ranked 2nd with a record 21.76% average mAP and 42.54% Recall@1x at tIoU=0.5, nearly three times higher than the official baseline. An arXiv version of our tech report can be found at this link. We invite you to try out the code.

<div align="center"> <img src="teaser.jpg" width="600px"/> </div>

Specifically, we adopt a minimalist design and develop a Transformer-based model for temporal action localization, inspired by the recent success of Transformers in NLP and vision. Our method, illustrated in the figure, adapts local self-attention to model temporal context in untrimmed videos, classifies every moment in an input video, and regresses the corresponding action boundaries. The result is a deep model that is trained with standard classification and regression losses, and can localize moments of actions in a single shot, without action proposals or pre-defined anchor windows.
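
To make the design concrete, below is a minimal, hypothetical PyTorch sketch of this idea. It is not the implementation in this repo: it replaces the local (windowed) self-attention and the multi-scale feature pyramid with a plain Transformer encoder, and all layer sizes and class counts are made up. It only illustrates classifying every moment and regressing its distances to the action start and end.

```python
# A toy sketch of per-moment classification + boundary regression.
# NOT the repo's model: local self-attention and the feature pyramid are omitted.
import torch
import torch.nn as nn

class ToyActionFormer(nn.Module):
    def __init__(self, in_dim=2048, embd_dim=512, num_classes=20,
                 num_layers=4, num_heads=8):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, embd_dim, kernel_size=3, padding=1)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=embd_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        # per-moment action classification scores
        self.cls_head = nn.Conv1d(embd_dim, num_classes, kernel_size=3, padding=1)
        # per-moment distances to the action onset / offset
        self.reg_head = nn.Conv1d(embd_dim, 2, kernel_size=3, padding=1)

    def forward(self, feats):  # feats: (B, C, T) clip-level video features
        x = self.proj(feats)                                  # (B, D, T)
        x = self.encoder(x.transpose(1, 2)).transpose(1, 2)   # temporal context
        cls_logits = self.cls_head(x)                         # (B, num_classes, T)
        offsets = self.reg_head(x).relu()                     # (B, 2, T), >= 0
        return cls_logits, offsets

# At inference, a moment t with a confident class score becomes a detection
# spanning [t - offsets[..., 0, t], t + offsets[..., 1, t]] (in feature units).
```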

Related projects:

SnAG: Scalable and Accurate Video Grounding <br> Fangzhou Mu*, Sicheng Mo*, Yin Li <br> CVPR 2024 <br> github arXiv <br>

Changelog

Code Overview

The structure of this code repo is heavily inspired by Detectron2. The main components are organized under the ./libs folder.

Installation

Frequently Asked Questions

To Reproduce Our Results on THUMOS14

Download Features and Annotations

Details: The features are extracted from two-stream I3D models pretrained on Kinetics using clips of 16 frames at the video frame rate (~30 fps) and a stride of 4 frames. This gives one feature vector per 4/30 ~= 0.1333 seconds.
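
As a quick sanity check, the sketch below maps a feature index back to an approximate time in the video, using the constants from the description above. It is illustrative only; the repo's own data loaders handle this alignment internally.

```python
# Illustrative arithmetic for the THUMOS14 I3D features described above:
# 16-frame clips, stride of 4 frames, video at ~30 fps.
FPS = 30.0
CLIP_LEN = 16   # frames per clip
STRIDE = 4      # frames between consecutive clip starts

def feature_center_time(idx):
    """Approximate center time (seconds) of the idx-th feature vector."""
    center_frame = idx * STRIDE + CLIP_LEN / 2.0
    return center_frame / FPS

# consecutive features are STRIDE / FPS = 4 / 30 ~= 0.133 s apart
print(feature_center_time(0), feature_center_time(1))  # ~= 0.27 s, 0.40 s
```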

Unpack Features and Annotations

This folder
│   README.md
│   ...  
│
└───data/
│    └───thumos/
│    │	 └───annotations
│    │	 └───i3d_features   
│    └───...
|
└───libs
│
│   ...

Training and Evaluation

python ./train.py ./configs/thumos_i3d.yaml --output reproduce
tensorboard --logdir=./ckpt/thumos_i3d_reproduce/logs
python ./eval.py ./configs/thumos_i3d.yaml ./ckpt/thumos_i3d_reproduce

[Optional] Evaluating Our Pre-trained Model

We also provide a pre-trained model for THUMOS14. The model with all training logs can be downloaded from this Google Drive link. To evaluate the pre-trained model, please follow the steps listed below.

This folder
│   README.md
│   ...  
│
└───pretrained/
│    └───thumos_i3d_reproduce/
│    │	 └───thumos_reproduce_log.txt
│    │	 └───thumos_reproduce_results.txt
│    │   └───...    
│    └───...
|
└───libs
│
│   ...
python ./eval.py ./configs/thumos_i3d.yaml ./pretrained/thumos_i3d_reproduce/

mAP (%) at different tIoU thresholds:

| Method | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | Avg |
|---|---|---|---|---|---|---|
| ActionFormer | 82.13 | 77.80 | 70.95 | 59.40 | 43.87 | 66.83 |

To Reproduce Our Results on ActivityNet 1.3

Download Features and Annotations

Details: The features are extracted from the R(2+1)D-34 model pretrained with TSP on ActivityNet using clips of 16 frames at a frame rate of 15 fps and a stride of 16 frames (i.e., non-overlapping clips). This gives one feature vector per 16/15 ~= 1.067 seconds. The features are converted into numpy files for our code.
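
Once the features are unpacked (next step), a quick way to sanity-check a feature file is to load it with NumPy. The file name below is hypothetical and the (time, channels) layout is an assumption; adjust it to whatever the downloaded files actually contain.

```python
# Illustrative sanity check for the TSP features (one vector per 16/15 ~= 1.07 s).
import numpy as np

feats = np.load("./data/anet_1.3/tsp_features/v_example.npy")  # hypothetical file name
num_feats, feat_dim = feats.shape  # assumes a (time, channels) layout
print(f"{num_feats} feature vectors of dim {feat_dim}, "
      f"covering ~ {num_feats * 16 / 15:.1f} s of video")
```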

Unpack Features and Annotations

This folder
│   README.md
│   ...  
│
└───data/
│    └───anet_1.3/
│    │	 └───annotations
│    │	 └───tsp_features   
│    └───...
|
└───libs
│
│   ...

Training and Evaluation

python ./train.py ./configs/anet_tsp.yaml --output reproduce
tensorboard --logdir=./ckpt/anet_tsp_reproduce/logs
python ./eval.py ./configs/anet_tsp.yaml ./ckpt/anet_tsp_reproduce

[Optional] Evaluating Our Pre-trained Model

We also provide a pre-trained model for ActivityNet 1.3. The model with all training logs can be downloaded from this Google Drive link. To evaluate the pre-trained model, please follow the steps listed below.

This folder
│   README.md
│   ...  
│
└───pretrained/
│    └───anet_tsp_reproduce/
│    │	 └───anet_tsp_reproduce_log.txt
│    │	 └───anet_tsp_reproduce_results.txt
│    │   └───...    
│    └───...
|
└───libs
│
│   ...
python ./eval.py ./configs/anet_tsp.yaml ./pretrained/anet_tsp_reproduce/

mAP (%) at different tIoU thresholds:

| Method | 0.5 | 0.75 | 0.95 | Avg |
|---|---|---|---|---|
| ActionFormer | 54.67 | 37.81 | 8.36 | 36.56 |

[Optional] Reproducing Our Results with I3D Features

Details: The features are extracted from the I3D model pretrained on Kinetics using clips of 16 frames at a frame rate of 25 fps and a stride of 16 frames. This gives one feature vector per 16/25 = 0.64 seconds. The features are converted into numpy files for our code.

python ./train.py ./configs/anet_i3d.yaml --output reproduce
python ./eval.py ./configs/anet_i3d.yaml ./ckpt/anet_i3d_reproduce
python ./eval.py ./configs/anet_i3d.yaml ./pretrained/anet_i3d_reproduce/

mAP (%) at different tIoU thresholds:

| Method | 0.5 | 0.75 | 0.95 | Avg |
|---|---|---|---|---|
| ActionFormer | 54.29 | 36.71 | 8.24 | 36.03 |

To Reproduce Our Results on EPIC Kitchens 100

Download Features and Annotations

Details: The features are extracted from the SlowFast model pretrained on the training set of EPIC Kitchens 100 (action classification) using clips of 32 frames at a frame rate of 30 fps and a stride of 16 frames. This gives one feature vector per 16/30 ~= 0.5333 seconds.
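
For reference, the inverse mapping (an annotation timestamp to the nearest feature index) is simple arithmetic under the setup above. The sketch is illustrative only; the repo's data loaders take care of this.

```python
# Illustrative only: EPIC Kitchens 100 SlowFast features use 32-frame clips,
# a stride of 16 frames, and 30 fps video, i.e. one feature per 16/30 ~= 0.533 s.
FPS = 30.0
STRIDE = 16  # frames between consecutive clip starts

def time_to_feature_index(t_sec):
    """Index of the feature whose clip starts closest to time t_sec."""
    return int(round(t_sec * FPS / STRIDE))

print(time_to_feature_index(12.8))  # 12.8 * 30 / 16 = 24
```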

Unpack Features and Annotations

This folder
│   README.md
│   ...  
│
└───data/
│    └───epic_kitchens/
│    │	 └───annotations
│    │	 └───features   
│    └───...
|
└───libs
│
│   ...

Training and Evaluation

python ./train.py ./configs/epic_slowfast_verb.yaml --output reproduce
python ./train.py ./configs/epic_slowfast_noun.yaml --output reproduce
python ./eval.py ./configs/epic_slowfast_verb.yaml ./ckpt/epic_slowfast_verb_reproduce
python ./eval.py ./configs/epic_slowfast_noun.yaml ./ckpt/epic_slowfast_noun_reproduce

[Optional] Evaluating Our Pre-trained Model

We also provide pre-trained models for EPIC-Kitchens 100. The models with all training logs can be downloaded from this Google Drive link (verb) and this Google Drive link (noun). To evaluate the pre-trained models, please follow the steps listed below.

This folder
│   README.md
│   ...  
│
└───pretrained/
│    └───epic_slowfast_verb_reproduce/
│    │	 └───epic_slowfast_verb_reproduce_log.txt
│    │	 └───epic_slowfast_verb_reproduce_results.txt
│    │   └───...   
│    └───epic_slowfast_noun_reproduce/
│    │	 └───epic_slowfast_noun_reproduce_log.txt
│    │	 └───epic_slowfast_noun_reproduce_results.txt
│    │   └───...  
│    └───...
|
└───libs
│
│   ...
python ./eval.py ./configs/epic_slowfast_verb.yaml ./pretrained/epic_slowfast_verb_reproduce/
python ./eval.py ./configs/epic_slowfast_noun.yaml ./pretrained/epic_slowfast_noun_reproduce/

mAP (%) at different tIoU thresholds:

| Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | Avg |
|---|---|---|---|---|---|---|
| ActionFormer (verb) | 26.58 | 25.42 | 24.15 | 22.29 | 19.09 | 23.51 |
| ActionFormer (noun) | 25.21 | 24.11 | 22.66 | 20.47 | 16.97 | 21.88 |

To Reproduce Our Results on Ego4D Moment Queries Benchmark

Download Features and Annotations

Details: All features are extracted at 1.875 fps from videos recorded at 30 fps. This gives one feature vector per 1/1.875 ~= 0.533 seconds. Please refer to the Ego4D and EgoVLP documentation for more details on feature extraction.
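
Because the three feature sets share the same 1.875 fps temporal grid, the feature combinations reported later in this section (e.g., S+O+E) can conceptually be formed by concatenating per-timestep vectors along the channel dimension. The sketch below only illustrates that idea; the file names and (time, channels) layout are assumptions, and the fusion actually used by the repo is controlled by its config files.

```python
# Hypothetical illustration of channel-wise feature concatenation for Ego4D.
import numpy as np

clip_id = "example_clip"  # hypothetical id
slowfast = np.load(f"./data/ego4d/slowfast_features/{clip_id}.npy")  # (T, C1)
omnivore = np.load(f"./data/ego4d/omnivore_features/{clip_id}.npy")  # (T, C2)
egovlp   = np.load(f"./data/ego4d/egovlp_features/{clip_id}.npy")    # (T, C3)

# guard against small off-by-one differences in length
T = min(len(slowfast), len(omnivore), len(egovlp))
combined = np.concatenate([slowfast[:T], omnivore[:T], egovlp[:T]], axis=1)
print(combined.shape)  # (T, C1 + C2 + C3)
```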

Unpack Features and Annotations

This folder
│   README.md
│   ...  
│
└───data/
│    └───ego4d/
│    │   └───annotations
│    │   └───slowfast_features
│    │   └───omnivore_features
│    │   └───egovlp_features  
│    └───...
|
└───libs
│
│   ...

Training and Evaluation

python ./train.py ./configs/ego4d_omnivore_egovlp.yaml --output reproduce
tensorboard --logdir=./ckpt/ego4d_omnivore_egovlp_reproduce/logs
python ./eval.py ./configs/ego4d_omnivore_egovlp.yaml ./ckpt/ego4d_omnivore_egovlp_reproduce

[Optional] Evaluating Our Pre-trained Model

We also provide pre-trained models for Ego4D trained with all feature combinations. The models with all training logs can be downloaded from this Google Drive link. To evaluate a pre-trained model, please follow the steps listed below.

This folder
│   README.md
│   ...  
│
└───pretrained/
│    └───ego4d_omnivore_egovlp_reproduce/
│    │   └───ego4d_omnivore_egovlp_reproduce_log.txt
│    │   └───ego4d_omnivore_egovlp_reproduce_results.txt
│    │   └───...   
│    └───...
|
└───libs
│
│   ...
python ./eval.py ./configs/ego4d_omnivore_egovlp.yaml ./pretrained/ego4d_omnivore_egovlp_reproduce/

Average mAP (%) at different tIoU thresholds (S: SlowFast, O: Omnivore, E: EgoVLP features):

| Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | Avg |
|---|---|---|---|---|---|---|
| ActionFormer (S) | 20.09 | 17.45 | 14.44 | 12.46 | 10.00 | 14.89 |
| ActionFormer (O) | 23.87 | 20.78 | 18.39 | 15.33 | 12.65 | 18.20 |
| ActionFormer (E) | 26.84 | 23.86 | 20.57 | 17.19 | 14.54 | 20.60 |
| ActionFormer (S+E) | 27.98 | 24.46 | 21.21 | 18.56 | 15.60 | 21.56 |
| ActionFormer (O+E) | 27.99 | 24.94 | 21.94 | 19.05 | 15.98 | 21.98 |
| ActionFormer (S+O+E) | 28.26 | 24.69 | 21.88 | 19.35 | 16.28 | 22.09 |

Recall@1x (%) at different tIoU thresholds:

| Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | Avg |
|---|---|---|---|---|---|---|
| ActionFormer (S) | 52.25 | 45.84 | 40.60 | 36.58 | 31.33 | 41.32 |
| ActionFormer (O) | 54.63 | 48.72 | 43.03 | 37.76 | 33.57 | 43.54 |
| ActionFormer (E) | 59.53 | 54.39 | 48.97 | 42.75 | 37.12 | 48.55 |
| ActionFormer (S+E) | 59.96 | 53.75 | 48.76 | 44.00 | 38.96 | 49.09 |
| ActionFormer (O+E) | 61.03 | 54.15 | 49.79 | 45.17 | 39.88 | 49.99 |
| ActionFormer (S+O+E) | 60.85 | 54.16 | 49.60 | 45.12 | 39.87 | 49.92 |

Training and Evaluating Your Own Dataset

Work in progress. Stay tuned.

Contact

Yin Li (yin.li@wisc.edu)

References

If you are using our code, please consider citing our paper.

@inproceedings{zhang2022actionformer,
  title={ActionFormer: Localizing Moments of Actions with Transformers},
  author={Zhang, Chen-Lin and Wu, Jianxin and Li, Yin},
  booktitle={European Conference on Computer Vision},
  series={LNCS},
  volume={13664},
  pages={492--510},
  year={2022}
}

If you cite our results on Ego4D, please consider citing our tech report in addition to the main paper.

@article{mu2022actionformerego4d,
  title={Where a Strong Backbone Meets Strong Features -- ActionFormer for Ego4D Moment Queries Challenge},
  author={Mu, Fangzhou and Mo, Sicheng and Wang, Gillian and Li, Yin},
  journal={arXiv e-prints},
  year={2022}
}

If you are using TSP features, please cite

@inproceedings{alwassel2021tsp,
  title={{TSP}: Temporally-sensitive pretraining of video encoders for localization tasks},
  author={Alwassel, Humam and Giancola, Silvio and Ghanem, Bernard},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
  pages={3173--3183},
  year={2021}
}