Graph2Net

The official implementation of Graph2Net: Perceptually-Enriched Graph Learning for Skeleton-Based Action Recognition (TCSVT 2021).

Based on this method, we also won first place in the Multi-Modal Video Reasoning and Analyzing Competition (MMVRAC, Track 2: Skeleton-based Action Recognition) at the ICCV Workshop.

<a name="Prerequisite"></a>

Prerequisites

<a name="Data"></a>

Data

Generate the Joint data

NTU-RGB+D 60 & 120

Kinetics-400 Skeleton

Northwestern-UCLA

The preprocessing of the Northwestern-UCLA dataset is borrowed from kchengiva/Shift-GCN.
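The exact preprocessing entry points live in this repo's data_gen folder; as a rough sketch, generating the joint data for the datasets above typically looks like the following (the script names and paths are assumptions carried over from the 2s-AGCN-style pipeline and should be checked against this repo):

```bash
# Assumed 2s-AGCN-style preprocessing scripts; verify the actual names in data_gen/.
python data_gen/ntu_gendata.py --data_path <path to raw NTU-RGB+D skeletons>
python data_gen/kinetics_gendata.py --data_path <path to raw Kinetics skeletons>
```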

Generate the Bone data
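Bone data is derived from the generated joint data by taking, for each joint, the vector from its parent joint along the skeleton. Assuming a 2s-AGCN-style generation script (the script name is an assumption; check the data_gen folder):

```bash
# Assumed bone-generation script following the 2s-AGCN layout; run after the joint data is generated.
python data_gen/gen_bone_data.py
```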

<a name="Training&Testing"></a>

Training & Testing

Training

We provide several examples for training Graph2Net with this repo:
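For example, on NTU-RGB+D 60 (Cross-View), training the joint and bone streams might look like this (the config file paths are assumptions; see the config folder for the exact names):

```bash
# Assumed config paths; pick the matching files from the config folder.
python main.py --config ./config/nturgbd-cross-view/train_joint.yaml
python main.py --config ./config/nturgbd-cross-view/train_bone.yaml
```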

Testing

We also provide several examples for testing Graph2Net with this repo:
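Analogously, testing the trained joint and bone models might look like this (again, the config paths are assumptions to be checked against the config folder):

```bash
# Assumed config paths; adjust to the files actually shipped in config/.
python main.py --config ./config/nturgbd-cross-view/test_joint.yaml
python main.py --config ./config/nturgbd-cross-view/test_bone.yaml
```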

The corresponding results of the above commands are as follows:

| Accuracy (%) | NTU-RGB+D 60 (Cross-View) | Mini-Kinetics-Skeleton | Northwestern-UCLA |
| --- | --- | --- | --- |
| Joint | 95.2 | 42.3 | 94.4 |
| Bone | 94.6 | 42.1 | 92.5 |

In the save_models folder, we also provide the trained model parameters.

Please refer to the config folder for other training and testing commands. You can also freely modify the training or testing config files according to your needs.

<a name="Ensemble"></a>

Ensemble

To ensemble the results of the joint and bone streams, first run the test commands above to generate the softmax scores, then combine the generated scores with:
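In the 2s-AGCN-style framework this repo builds on, score fusion is done with an ensemble script that sums the per-stream softmax scores; a typical invocation (the script name and dataset flag are assumptions, to be checked against this repo) is:

```bash
# Assumed 2s-AGCN-style ensemble script that adds the saved joint and bone scores.
python ensemble.py --datasets ntu/xview
```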

The corresponding results of the above command are as follows:

| Accuracy (%) | NTU-RGB+D 60 (Cross-View) | Mini-Kinetics-Skeleton | Northwestern-UCLA |
| --- | --- | --- | --- |
| Ensemble | 96.0 | 44.9 | 95.3 |

<a name="Citation"></a>

Citation

If you find this model useful for your research, please cite it with the following BibTeX entry.

@ARTICLE{9446181,
  author={Wu, Cong and Wu, Xiao-Jun and Kittler, Josef},
  journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
  title={Graph2Net: Perceptually-Enriched Graph Learning for Skeleton-Based Action Recognition}, 
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TCSVT.2021.3085959}
}

<a name="Acknowledgement"></a>

Acknowledgement

Thanks to 2s-AGCN for providing the framework; it is the source code of the published work Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition (CVPR 2019).

<a name="Contact"></a>

Contact

For any questions, feel free to contact: congwu@stu.jiangnan.edu.cn.