
<!-- [![GitHub Contributors](https://img.shields.io/github/contributors/Kali-Hac/Awesome-Skeleton-Based-Models?color=green&style=plastic)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/network/members) -->

Awesome-Skeleton-Based-Models <!-- omit in toc -->

We collect existing skeleton-based models (369+ papers & code) published in prominent conferences (CVPR, ICCV, ECCV, AAAI, IJCAI, ACM MM, ICLR, ICML, NeurIPS, etc.) and journals (TPAMI, IJCV, TIP, TMM, TNNLS, PMLR, etc.).

TODO <!-- omit in toc -->

Contents <!-- omit in toc -->

<!-- - [5. Gesture Recognition](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#skeleton-based-person-re-identification-s-reid) - [5.1 Datasets](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#datasets) - [5.2 Papers/Models in 2022 (Currently 1)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#2022-s-reid) - [5.3 Papers/Models in 2021 (Totally 2)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#2021-s-reid) - [5.4 Papers/Models in 2020 (Totally 2)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#2020-s-reid) - [5.5 Papers/Models in 2019 (Totally 2)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#2019-s-reid) - [5.6 Papers/Models Before 2019 (Totally 4)](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#before-2019-s-reid) - [5.7 Leaderboards](https://github.com/Kali-Hac/Awesome-Skeleton-Based-Models/tree/main/skeleton-based-person-reID#leaderboards) -->

Skeleton-Based Action Recognition (2013-2022)

70 Datasets

Overview of 70 available datasets for action recognition and their statistics, compiled from the survey "Human Action Recognition from Various Data Modalities: A Review" (TPAMI 2022) [arXiv].

S: Skeleton, D: Depth, IR: Infrared, PC: Point Cloud, ES: Event Stream, Au: Audio, Ac: Acceleration, Gyr: Gyroscope, EMG: Electromyography. Bold marks the most frequently used datasets in the literature.
<!-- ![datasets](./AR-datasets.jpg) -->
| # Id | Dataset | Year | Modality | # Class | # Subject | # Sample | # View |
|------|---------|------|----------|---------|-----------|----------|--------|
| 1 | KTH | 2004 | RGB | 6 | 25 | 2,391 | 1 |
| 2 | Weizmann | 2005 | RGB | 10 | 9 | 90 | 1 |
| 3 | IXMAS | 2006 | RGB | 11 | 10 | 330 | 5 |
| 4 | HDM05 | 2007 | RGB, S | 130 | 5 | 2,337 | 1 |
| 5 | Hollywood | 2008 | RGB | 8 | | 430 | |
| 6 | Hollywood2 | 2009 | RGB | 12 | | 3,669 | |
| 7 | MSR-Action3D | 2010 | S, D | 20 | 10 | 567 | 1 |
| 8 | Olympic | 2010 | RGB | 16 | | 783 | |
| 9 | CAD-60 | 2011 | RGB, S, D | 12 | 4 | 60 | |
| 10 | HMDB51 | 2011 | RGB | 51 | | 6,766 | |
| 11 | RGB-HuDaAct | 2011 | RGB, D | 13 | 30 | 1,189 | 1 |
| 12 | ACT4² | 2012 | RGB, D | 14 | 24 | 6,844 | 4 |
| 13 | DHA | 2012 | RGB, D | 17 | 21 | 357 | 1 |
| 14 | MSRDailyActivity3D | 2012 | RGB, S, D | 16 | 10 | 320 | 1 |
| 15 | UCF101 | 2012 | RGB | 101 | | 13,320 | |
| 16 | UTKinect | 2012 | RGB, S, D | 10 | 10 | 200 | 1 |
| 17 | Berkeley MHAD | 2013 | RGB, S, D, Au, Ac | 12 | 12 | 660 | 4 |
| 18 | CAD-120 | 2013 | RGB, S, D | 10 | 4 | 120 | |
| 19 | IAS-lab | 2013 | RGB, S, D, PC | 15 | 12 | 540 | 1 |
| 20 | J-HMDB | 2013 | RGB, S | 21 | | 31,838 | |
| 21 | MSRAction-Pair | 2013 | RGB, S, D | 12 | 10 | 360 | 1 |
| 22 | UCFKinect | 2013 | S | 16 | 16 | 1,280 | 1 |
| 23 | Multi-View TJU | 2014 | RGB, S, D | 20 | 22 | 7,040 | 2 |
| 24 | Northwestern-UCLA | 2014 | RGB, S, D | 10 | 10 | 1,475 | 3 |
| 25 | Sports-1M | 2014 | RGB | 487 | | 1,113,158 | |
| 26 | UPCV | 2014 | S | 10 | 20 | 400 | 1 |
| 27 | UWA3D Multiview | 2014 | RGB, S, D | 30 | 10 | ~900 | 4 |
| 28 | ActivityNet | 2015 | RGB | 203 | | 27,801 | |
| 29 | SYSU 3D HOI | 2015 | RGB, S, D | 12 | 40 | 480 | 1 |
| 30 | THUMOS Challenge 15 | 2015 | RGB | 101 | | 24,017 | |
| 31 | TJU | 2015 | RGB, S, D | 15 | 20 | 1,200 | 1 |
| 32 | UTD-MHAD | 2015 | RGB, S, D, Ac, Gyr | 27 | 8 | 861 | 1 |
| 33 | UWA3D Multiview II | 2015 | RGB, S, D | 30 | 10 | 1,075 | 4 |
| 34 | M²I | 2015 | RGB, S, D | 22 | 22 | ~1,800 | 2 |
| 35 | Charades | 2016 | RGB | 157 | 267 | 9,848 | |
| 36 | InfAR | 2016 | IR | 12 | 40 | 600 | 2 |
| 37 | NTU RGB+D | 2016 | RGB, S, D, IR | 60 | 40 | 56,880 | 80 |
| 38 | YouTube-8M | 2016 | RGB | 4,800 | | 8,264,650 | |
| 39 | AVA | 2017 | RGB | 80 | | 437 | |
| 40 | DvsGesture | 2017 | ES | 17 | 29 | | |
| 41 | FCVID | 2017 | RGB | 239 | | 91,233 | |
| 42 | Kinetics-400 | 2017 | RGB | 400 | | 306,245 | |
| 43 | NEU-UB | 2017 | RGB, D | 6 | 20 | 600 | |
| 44 | PKU-MMD | 2017 | RGB, S, D, IR | 51 | 66 | 1,076 | 3 |
| 45 | Something-Something-v1 | 2017 | RGB | 174 | | 108,499 | |
| 46 | UniMiB SHAR | 2017 | Ac | 17 | 30 | 11,771 | |
| 47 | EPIC-KITCHENS-55 | 2018 | RGB, Au | | 32 | 39,594 | Egocentric |
| 48 | Kinetics-600 | 2018 | RGB | 600 | | 495,547 | |
| 49 | RGB-D Varying-view | 2018 | RGB, S, D | 40 | 118 | 25,600 | 8+1 (360°) |
| 50 | DHP19 | 2019 | ES, S | 33 | 17 | | 4 |
| 51 | Drive&Act | 2019 | RGB, S, D, IR | 83 | 15 | | 6 |
| 52 | Hemangomez et al. | 2019 | Radar | 8 | 11 | 1,056 | |
| 53 | Kinetics-700 | 2019 | RGB | 700 | | 650,317 | |
| 54 | Kitchen20 | 2019 | Au | 20 | | 800 | |
| 55 | MMAct | 2019 | RGB, S, Ac, Gyr, etc. | 37 | 20 | 36,764 | 4 + Egocentric |
| 56 | Moments in Time | 2019 | RGB | 339 | | ~1,000,000 | |
| 57 | Wang et al. | 2019 | WiFi CSI | 6 | 1 | 1,394 | |
| 58 | NTU RGB+D 120 | 2019 | RGB, S, D, IR | 120 | 106 | 114,480 | 155 |
| 59 | ETRI-Activity3D | 2020 | RGB, S, D | 55 | 100 | 112,620 | |
| 60 | EV-Action | 2020 | RGB, S, D, EMG | 20 | 70 | 7,000 | 9 |
| 61 | IKEA ASM | 2020 | RGB, S, D | 33 | 48 | 16,764 | 3 |
| 62 | RareAct | 2020 | RGB | 122 | | 905 | |
| 63 | BABEL | 2021 | Mocap | 252 | | 13,220 | |
| 64 | HAA500 | 2021 | RGB | 500 | | 10,000 | |
| 65 | HOMAGE | 2021 | RGB, IR, Ac, Gyr, etc. | 75 | 27 | 1,752 | 2~5 |
| 66 | MultiSports | 2021 | RGB | 66 | | 37,701 | |
| 67 | UAV-Human | 2021 | RGB, S, D, IR, etc. | 155 | 119 | 67,428 | |
| 68 | Ego4D | 2022 | RGB, Au, Ac, etc. | | 923 | | Egocentric |
| 69 | EPIC-KITCHENS-100 | 2022 | RGB, Au, Ac | | 45 | 89,979 | Egocentric |
| 70 | JRDB-Act | 2022 | RGB, PC | 26 | | 3,625 | 360° |
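
Most models in this collection consume the S (skeleton) modality as a sequence of 3D joint coordinates. Below is a minimal sketch in Python/NumPy of the (frames, joints, coordinates) tensor layout commonly used for skeleton data such as the 25-joint NTU RGB+D format; the file name and normalization step are illustrative assumptions, not any dataset's official loader.

```python
import numpy as np

def load_skeleton_sequence(path: str) -> np.ndarray:
    """Load one skeleton action sample stored as a (T, V, C) array.

    T = number of frames, V = number of joints (e.g. 25 for NTU RGB+D),
    C = 3 for the (x, y, z) joint coordinates. This is a generic sketch,
    not a dataset-specific parser.
    """
    seq = np.load(path)  # expected shape: (T, V, 3)
    assert seq.ndim == 3 and seq.shape[-1] == 3
    # Center every frame on its first joint so the sequence is
    # translation-invariant (a common preprocessing step).
    seq = seq - seq[:, :1, :]
    return seq.astype(np.float32)

if __name__ == "__main__":
    # Hypothetical random sample standing in for a real dataset file.
    dummy = np.random.randn(300, 25, 3).astype(np.float32)
    np.save("sample_skeleton.npy", dummy)
    print(load_skeleton_sequence("sample_skeleton.npy").shape)  # (300, 25, 3)
```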

Survey Papers

2022

2021

2020

2019

2018

2017

Before 2017

arXiv papers

Skeleton-based Action Recognition under Adversarial Attack

Leaderboards on NTU-RGB+D and NTU-RGB+D 120 Datasets

NTU-RGB+D

| Year | Methods | Cross-Subject (%) | Cross-View (%) |
|------|---------|-------------------|----------------|
| 2014 | Lie Group | 50.1 | 52.8 |
| 2015 | H-RNN | 59.1 | 64.0 |
| 2016 | Part-aware LSTM | 62.9 | 70.3 |
| 2016 | Trust Gate ST-LSTM | 69.2 | 77.7 |
| 2017 | Two-stream RNN | 71.3 | 79.5 |
| 2017 | STA-LSTM | 73.4 | 81.2 |
| 2017 | Ensemble TS-LSTM | 74.6 | 81.3 |
| 2017 | Visualization CNN | 76.0 | 82.6 |
| 2017 | C-CNN + MTLN | 79.6 | 84.8 |
| 2017 | Temporal Conv | 74.3 | 83.1 |
| 2017 | VA-LSTM | 79.4 | 87.6 |
| 2018 | Beyond Joints | 79.5 | 87.6 |
| 2018 | ST-GCN | 81.5 | 88.3 |
| 2018 | DPRL | 83.5 | 89.8 |
| 2019 | Motif-STGCN | 84.2 | 90.2 |
| 2018 | HCN | 86.5 | 91.1 |
| 2018 | SR-TSL | 84.8 | 92.4 |
| 2018 | MAN | 82.7 | 93.2 |
| 2019 | RA-GCN | 85.9 | 93.5 |
| 2019 | DenseIndRNN | 86.7 | 93.7 |
| 2018 | PB-GCN | 87.5 | 93.2 |
| 2019 | AS-GCN | 86.8 | 94.2 |
| 2019 | VA-NN (fusion) | 89.4 | 95.0 |
| 2019 | AGC-LSTM (Joint&Part) | 89.2 | 95.0 |
| 2019 | 2s-AGCN | 88.5 | 95.1 |
| 2020 | SGN | 89.0 | 94.5 |
| 2020 | GCN-NAS | 89.4 | 95.7 |
| 2019 | 2s-SDGCN | 89.6 | 95.7 |
| 2019 | DGNN | 89.9 | 96.1 |
| 2020 | MV-IGNET | 89.2 | 96.3 |
| 2020 | 4s Shift-GCN | 90.7 | 96.5 |
| 2020 | DecoupleGCN-DropGraph | 90.8 | 96.6 |
| 2020 | PA-ResGCN-B19 | 90.9 | 96.0 |
| 2020 | MS-G3D | 91.5 | 96.2 |
| 2021 | EfficientGCN-B4 | 91.7 | 95.7 |
| 2021 | CTR-GCN | 92.4 | 96.8 |

NTU-RGB+D 120

| Year | Methods | Cross-Subject (%) | Cross-Setup (%) |
|------|---------|-------------------|-----------------|
| 2019 | SkeleMotion (Magnitude-Orientation) | 62.9 | 63.0 |
| 2019 | SkeleMotion + Yang et al. | 67.7 | 66.9 |
| 2019 | TSRJI | 67.9 | 59.7 |
| 2020 | SGN | 79.2 | 81.5 |
| 2020 | MV-IGNET | 83.9 | 85.6 |
| 2020 | 4s Shift-GCN | 85.9 | 87.6 |
| 2020 | DecoupleGCN-DropGraph | 86.5 | 88.1 |
| 2020 | MS-G3D | 86.9 | 88.4 |
| 2020 | PA-ResGCN-B19 | 87.3 | 88.3 |
| 2021 | EfficientGCN-B4 | 88.3 | 89.1 |
| 2021 | CTR-GCN | 88.9 | 90.6 |
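
The leaderboards above report top-1 accuracy (%) under the standard NTU-RGB+D protocols: Cross-Subject splits train/test by performer ID, Cross-View by camera viewpoint, and Cross-Setup (NTU-RGB+D 120) by collection-setup ID, so test samples come from subjects, views, or setups unseen during training. Below is a minimal sketch, using hypothetical toy IDs and predictions, of how such a split and its top-1 accuracy could be computed; the exact training-ID lists are defined in the NTU RGB+D benchmark papers and are not reproduced here.

```python
import numpy as np

def protocol_split(ids, train_ids):
    """Boolean train/test masks for a cross-subject / cross-view style split.

    ids: per-sample performer (or camera/setup) IDs; train_ids: the IDs
    assigned to training by the benchmark protocol (assumed given here).
    """
    ids = np.asarray(ids)
    train_mask = np.isin(ids, list(train_ids))
    return train_mask, ~train_mask

def top1_accuracy(logits, labels):
    """Top-1 accuracy, the metric reported in the leaderboards above."""
    return float((np.argmax(logits, axis=1) == np.asarray(labels)).mean())

if __name__ == "__main__":
    # Hypothetical toy data: 8 samples, 3 classes, subject IDs 1..4.
    subject_ids = [1, 1, 2, 2, 3, 3, 4, 4]
    labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
    logits = np.random.randn(8, 3)
    train_mask, test_mask = protocol_split(subject_ids, train_ids={1, 2})
    print("test top-1:", top1_accuracy(logits[test_mask], labels[test_mask]))
```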

Others

Acknowledgements

If you have any problems, suggestions, or improvements, please feel free to contact me (haocongrao@gmail.com). You are welcome to refine the current taxonomy, enrich the collection of skeleton-based models, and discuss any constructive ideas.