Part-based Graph Convolutional Network for Skeleton-based Action Recognition

Official repository for the code from the BMVC (British Machine Vision Conference) paper "Part-based Graph Convolutional Network for Action Recognition". The implementation is written in PyTorch and works with its recent stable version. The repository includes:

TODOs:

Getting Started

  1. Download the NTURGB+D dataset (with 60 action classes) following this link. Unzip the archive and store all the skeleton files in a single directory (a quick sanity check is sketched after this list):
unzip nturgbd_skeletons_s001_to_s017.zip -d nturgb+d_skeletons
  2. Clone the repository:
git clone https://github.com/kalpitthakkar/pb-gcn.git
  3. Download the pretrained model checkpoints for both the cross-subject and cross-view splits (see the checkpoint inspection sketch after this list):
bash download_checkpoints.sh <path_to_download_directory>
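
If you want to make sure the extraction went through, a quick check like the one below can help. This is only a minimal sketch: the directory name matches the unzip command above, the expected total of 56,880 files comes from the NTU RGB+D (60-class) release rather than from this repository, and it assumes the standard NTU `.skeleton` text layout where the first line of each file is the frame count.

```python
# Minimal sanity check after unzipping (paths and the expected count are assumptions
# based on the standard NTU RGB+D 60 release, not on this repository).
from pathlib import Path

skeleton_dir = Path("nturgb+d_skeletons")        # directory created by the unzip command above
files = sorted(skeleton_dir.glob("*.skeleton"))

print(f"Found {len(files)} skeleton files (the 60-class release has 56880).")

# Peek at one file: the first line of an NTU .skeleton file holds the frame count.
if files:
    with files[0].open() as f:
        num_frames = int(f.readline().strip())
    print(f"{files[0].name}: {num_frames} frames")
```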
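Once the checkpoints are downloaded, you can inspect one before using it. The sketch below is hypothetical: the file name is an assumption (use whatever `download_checkpoints.sh` actually places in your download directory), and it only peeks at the stored tensors without building the model.

```python
# Hypothetical checkpoint inspection; the path below is an assumption -- replace it
# with an actual file produced by download_checkpoints.sh.
import torch

ckpt_path = "checkpoints/cross_subject.pt"          # assumed name, adjust to the real file
state = torch.load(ckpt_path, map_location="cpu")   # CPU load, no GPU needed just to inspect

# Checkpoints are usually either a raw state_dict or a dict wrapping one.
state_dict = state.get("state_dict", state) if isinstance(state, dict) else state
for name, value in list(state_dict.items())[:5]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```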

Training / Testing Models

  1. First, define the required configuration variables in the YAML file. Instructions on its structure and on editing the file are here (a minimal sketch of how such a config is typically consumed appears after the training command below).

  2. Once the configuration file is ready, you can start training the model:

python run.py --config <path_to_YAML_config_file>
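
The exact configuration keys are covered by the linked instructions; as a rough illustration only, the sketch below shows how a `--config` argument and a YAML file are typically consumed. It is a hypothetical stand-in, not the repository's actual `run.py`, and it assumes PyYAML is installed.

```python
# Hypothetical sketch of consuming a YAML config via --config; the real run.py
# in this repository may load and validate its configuration differently.
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help="Path to the YAML config file")
args = parser.parse_args()

with open(args.config) as f:
    config = yaml.safe_load(f)   # plain dict of the YAML keys

print("Loaded config keys:", sorted(config.keys()))
```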

Citation

To cite our paper:

@article{thakkar2018part,
  title={Part-based Graph Convolutional Network for Action Recognition},
  author={Thakkar, Kalpit and Narayanan, PJ},
  journal={arXiv preprint arXiv:1809.04983},
  year={2018}
}