MV-IGNet

The official PyTorch implementation of "Learning Multi-View Interactional Skeleton Graph for Action Recognition" [IEEEXplore], published in TPAMI 2020. The arXiv version of our paper is coming soon.

Contents

  1. Current Status
  2. Overview and Advantages
  3. Requirements
  4. Installation
  5. Data Preparation
  6. Training
  7. Evaluation
  8. Results
  9. Citation
  10. Acknowledgement

Current Status

Overview and Advantages


Requirements

We have only tested our code in the following environment:

- Python 3.7
- PyTorch 1.2.0 (torchvision 0.4.0)
- CUDA 10.0 / 10.1

Installation

# Install python environment
$ conda create -n mvignet python=3.7
$ conda activate mvignet

# Install PyTorch 1.2.0 with CUDA 10.0 or 10.1
$ pip install torch==1.2.0 torchvision==0.4.0

# Download our code
$ git clone https://github.com/niais/mv-ignet
$ cd mv-ignet

# Install torchlight
$ cd torchlight; python setup.py install; cd ..

# Install other python libraries
$ pip install -r requirements.txt

Data Preparation
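
Since this code is built on the old ST-GCN framework, data preparation likely follows its pipeline. A minimal sketch, assuming ST-GCN's generation script is kept (the script path `tools/ntu_gendata.py` and its flag are assumptions borrowed from ST-GCN, not confirmed for this repo):

# Download the raw NTU-RGB+D skeleton files, then generate the
# processed training data (script path and flag are assumed from ST-GCN)
$ python tools/ntu_gendata.py --data_path <path to nturgb+d_skeletons>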

Training
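
If the repo keeps ST-GCN's entry point, training is launched with a recognition config file. A minimal sketch under that assumption (the config path below is a placeholder, not a confirmed file):

# Train a model on NTU-RGB+D (config path is illustrative only)
$ python main.py recognition -c config/<your_train_config>.yaml --work_dir <folder to save results>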

Evaluation
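
Evaluation presumably mirrors training with a test config and pretrained weights. A minimal sketch under the same ST-GCN-interface assumption (config and weight paths are placeholders):

# Evaluate a trained model (flags follow ST-GCN's convention)
$ python main.py recognition -c config/<your_test_config>.yaml --weights <path to model weights>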

Results

The expected Top-1 accuracy results on the NTU-RGB+D 60 dataset are shown below:

| Model     | Cross View (%) | Cross Subject (%) |
|-----------|----------------|-------------------|
| ST-GCN    | 88.8           | 81.6              |
| SPGNet    | 94.3           | 86.8              |
| HPGNet    | 94.7           | 87.2              |
| MV-HPGNet | 95.8           | 88.6              |

Citation

Please cite our paper if you find this repo useful in your research:

@article{wang2020learning,
  title={Learning Multi-View Interactional Skeleton Graph for Action Recognition},
  author={Wang, Minsi and Ni, Bingbing and Yang, Xiaokang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  publisher={IEEE}
}

Acknowledgement

The framework of the current code is based on the old version of ST-GCN (its successor is MMSkeleton).