Graph Convolutional Networks for Temporal Action Localization
This repo holds the code and models for the PGCN framework presented at ICCV 2019:

Graph Convolutional Networks for Temporal Action Localization. Runhao Zeng*, Wenbing Huang*, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, Chuang Gan. ICCV 2019, Seoul, Korea.
Updates
20/12/2019 We have uploaded the RGB features, trained models, and evaluation results! We found that increasing the number of proposals to 800 during testing further boosts the performance on THUMOS14. We have also updated the proposal list.
04/07/2020 We have uploaded the I3D features for ActivityNet, the training configuration files in data/dataset_cfg.yaml, and the proposal lists for ActivityNet.
Contents
Usage Guide
Prerequisites
The training and testing in PGCN are reimplemented in PyTorch for ease of use.
Other minor Python modules can be installed by running

```bash
pip install -r requirements.txt
```
Code and Data Preparation
Get the code
Clone this repo with git; remember to use the --recursive flag so that the submodules are fetched as well:

```bash
git clone --recursive https://github.com/Alvin-Zeng/PGCN
```
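If you have already cloned without --recursive, the submodules can be fetched afterwards with the standard git command:

```bash
git submodule update --init --recursive
```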
Download Datasets
We support experimenting with two publicly available datasets for temporal action detection: THUMOS14 and ActivityNet v1.3. Here are the steps to download them.
- THUMOS14: We need the validation videos for training and the test videos for testing. You can download them from the THUMOS14 challenge website.
- ActivityNet v1.3: This dataset is provided as a list of YouTube URLs. You can use the official ActivityNet downloader to download the videos from YouTube.
Download Features
Here, we provide the I3D features (RGB+Flow) for training and testing.
THUMOS14: You can download it from Google Cloud or Baidu Cloud.
Anet: You can download the I3D Flow features from Baidu Cloud (password: jbsa) and the I3D RGB features from Google Cloud. (Note: set the interval to 16 in ops/I3D_Pooling_Anet.py when training with RGB features.)
Download Proposal Lists (ActivityNet)
Here, we provide the proposal lists for ActivityNet v1.3. You can download them from Google Cloud.
Training PGCN
Please first set the paths of the features in data/dataset_cfg.yaml:

```yaml
train_ft_path: $PATH_OF_TRAINING_FEATURES
test_ft_path: $PATH_OF_TESTING_FEATURES
```
Then, you can use the following command to train PGCN:

```bash
python pgcn_train.py thumos14 --snapshot_pre $PATH_TO_SAVE_MODEL
```
After training, there will be a checkpoint file whose name contains information about the dataset and the number of epochs. This checkpoint file contains the trained model weights and can be used for testing.
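If you want a quick sanity check on a saved checkpoint before testing, here is a minimal sketch using plain PyTorch. The file path is a placeholder, and the layout of the stored object depends on what pgcn_train.py saved, so the code below inspects the structure rather than assuming fixed key names:

```python
import torch

# Load a checkpoint on the CPU for inspection; the path below is a
# placeholder for the file produced under $PATH_TO_SAVE_MODEL.
checkpoint = torch.load("$PATH_TO_SAVE_MODEL/pgcn_checkpoint.pth.tar",
                        map_location="cpu")

# The training script decides what gets stored, so print the structure
# instead of assuming keys such as "state_dict" or "epoch" are present.
if isinstance(checkpoint, dict):
    print(checkpoint.keys())
else:
    print(type(checkpoint))
```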
Testing Trained Models
You can obtain the detection scores by running

```bash
sh test.sh TRAINING_CHECKPOINT
```

Here, TRAINING_CHECKPOINT denotes the path to the trained model checkpoint.
This script will report the detection performance in terms of mean average precision (mAP) at different temporal IoU thresholds.
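For readers new to the metric: a detection counts as correct at a given threshold only if its temporal IoU with a ground-truth segment of the same class exceeds that threshold. A minimal sketch of temporal IoU (the function name and example numbers are illustrative, not taken from the repo):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two segments, each given as (start, end) in seconds."""
    inter_start = max(pred[0], gt[0])
    inter_end = min(pred[1], gt[1])
    intersection = max(0.0, inter_end - inter_start)
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - intersection
    return intersection / union if union > 0 else 0.0

# A detection (12.0, 20.0) vs. ground truth (10.0, 18.0):
print(temporal_iou((12.0, 20.0), (10.0, 18.0)))  # 0.6 -> correct at IoU 0.5
```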
The trained models and evaluation results are provided in the results folder.
You can obtain the two-stream results on THUMOS14 by running

```bash
sh test_two_stream.sh
```
THUMOS14
| mAP@0.5 IoU (%) | RGB | Flow | RGB+Flow |
| --- | --- | --- | --- |
| P-GCN (I3D) | 37.23 | 47.42 | 49.07 (49.64) |
Here, 49.64% is obtained by setting the combination weights to Flow:RGB = 1.2:1 and the NMS threshold to 0.32.
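For intuition, here is a minimal sketch of what such a weighted score fusion could look like; the function and variable names are illustrative assumptions, and the actual fusion lives in the repo's test scripts:

```python
import numpy as np

def fuse_two_stream(rgb_scores, flow_scores, rgb_weight=1.0, flow_weight=1.2):
    """Weighted average of per-proposal class scores from the two streams.

    rgb_scores and flow_scores are arrays of shape (num_proposals, num_classes),
    aligned so that row i refers to the same proposal in both streams.
    """
    fused = (rgb_weight * np.asarray(rgb_scores)
             + flow_weight * np.asarray(flow_scores))
    return fused / (rgb_weight + flow_weight)
```

The fused detections are then deduplicated with temporal NMS at the stated threshold of 0.32.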
Other Info
Citation
Please cite the following paper if you find PGCN useful in your research:
@inproceedings{PGCN2019ICCV,
author = {Runhao Zeng and
Wenbing Huang and
Mingkui Tan and
Yu Rong and
Peilin Zhao and
Junzhou Huang and
Chuang Gan},
title = {Graph Convolutional Networks for Temporal Action Localization},
booktitle = {ICCV},
year = {2019},
}
Contact
For any questions, please file an issue or contact:
Runhao Zeng: runhaozeng.cs@gmail.com