
MSR-GCN


Official implementation of MSR-GCN: Multi-Scale Residual Graph Convolution Networks for Human Motion Prediction (ICCV 2021).

[Paper] [Supp] [Poster] [Slides] [Video]


Authors

  1. Lingwei Dang, School of Computer Science and Engineering, South China University of Technology, China, levondang@163.com
  2. Yongwei Nie, School of Computer Science and Engineering, South China University of Technology, China, nieyongwei@scut.edu.cn
  3. Chengjiang Long, JD Finance America Corporation, USA, cjfykx@gmail.com
  4. Qing Zhang, School of Computer Science and Engineering, Sun Yat-sen University, China, zhangqing.whu.cs@gmail.com
  5. Guiqing Li, School of Computer Science and Engineering, South China University of Technology, China, ligq@scut.edu.cn

Overview

<a href="./assets/7627-poster.pdf"> <img src="./assets/7627-poster.png" /> </a>

    Human motion prediction is a challenging task due to the stochasticity and aperiodicity of future poses. Recently, graph convolutional networks (GCNs) have proven very effective at learning dynamic relations among pose joints, which is helpful for pose prediction. On the other hand, a human pose can be abstracted recursively to obtain a set of poses at multiple scales; as the abstraction level increases, the motion of the pose becomes more stable, which also benefits prediction. In this paper, we propose a novel Multi-Scale Residual Graph Convolution Network (MSR-GCN) for human pose prediction, trained in an end-to-end manner. GCNs are used to extract features from fine to coarse scale and then from coarse to fine scale. The features extracted at each scale are then combined and decoded to obtain the residuals between the input and target poses. Intermediate supervision is imposed on all the predicted poses, which forces the network to learn more representative features. Our approach is evaluated on two standard benchmark datasets, the Human3.6M dataset and the CMU Mocap dataset. Experimental results demonstrate that our method outperforms state-of-the-art approaches.
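As a rough illustration of the core building block described above (a sketch only, not the repo's actual implementation; the adjacency `A` and weight matrix `W` here are hypothetical stand-ins for learnable parameters):

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: mix joint features through the
    adjacency A, then transform feature channels with W.
    X: (J, C_in) joint features, A: (J, J), W: (C_in, C_out)."""
    return np.tanh(A @ X @ W)

# Toy example: 4 joints, 3 input channels, 8 output channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = rng.standard_normal((4, 4))
W = rng.standard_normal((3, 8))
out = gcn_layer(X, A, W)
print(out.shape)  # (4, 8)
```

In the paper, such layers are stacked at several pose scales, with coarser scales obtained by grouping joints, and the network predicts residuals between input and target poses rather than the poses directly.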

Dependencies
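The dependency list was not preserved here. As an assumption (not an official list), a project of this kind typically needs something like:

```
torch
numpy
matplotlib
```

Please check the repository's own requirements file for the authoritative, pinned versions.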

Get the data

Human3.6M in exponential map format can be downloaded from here.

The CMU Mocap dataset was obtained from the repository of the ConvSeq2Seq paper.

About datasets

Human3.6M

CMU Mocap dataset

Train

Evaluate and visualize results

Results

| H3.6M-10/25/35 (ms) | 80 | 160 | 320 | 400 | 560 | 1000 | all |
| --- | --- | --- | --- | --- | --- | --- | --- |
| walking | 12.16 | 22.65 | 38.65 | 45.24 | 52.72 | 63.05 | - |
| eating | 8.39 | 17.05 | 33.03 | 40.44 | 52.54 | 77.11 | - |
| smoking | 8.02 | 16.27 | 31.32 | 38.15 | 49.45 | 71.64 | - |
| discussion | 11.98 | 26.76 | 57.08 | 69.74 | 88.59 | 117.59 | - |
| directions | 8.61 | 19.65 | 43.28 | 53.82 | 71.18 | 100.59 | - |
| greeting | 16.48 | 36.95 | 77.32 | 93.38 | 116.24 | 147.23 | - |
| phoning | 10.10 | 20.74 | 41.51 | 51.26 | 68.28 | 104.36 | - |
| posing | 12.79 | 29.38 | 66.95 | 85.01 | 116.26 | 174.33 | - |
| purchases | 14.75 | 32.39 | 66.13 | 79.63 | 101.63 | 139.15 | - |
| sitting | 10.53 | 21.99 | 46.26 | 57.80 | 78.19 | 120.02 | - |
| sittingdown | 16.10 | 31.63 | 62.45 | 76.84 | 102.83 | 155.45 | - |
| takingphoto | 9.89 | 21.01 | 44.56 | 56.30 | 77.94 | 121.87 | - |
| waiting | 10.68 | 23.06 | 48.25 | 59.23 | 76.33 | 106.25 | - |
| walkingdog | 20.65 | 42.88 | 80.35 | 93.31 | 111.87 | 148.21 | - |
| walkingtogether | 10.56 | 20.92 | 37.40 | 43.85 | 52.93 | 65.91 | - |
| Average | 12.11 | 25.56 | 51.64 | 62.93 | 81.13 | 114.18 | 57.93 |

The following results use the metric of MotionMixer (IJCAI 2022): the error reported for each horizon <=t is averaged over all predicted frames up to t milliseconds.
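A minimal sketch of such an averaged metric, assuming it is the mean per-joint position error accumulated over all frames up to a horizon (frame and joint counts below are illustrative, not the repo's configuration):

```python
import numpy as np

def mpjpe_per_frame(pred, gt):
    """Mean per-joint position error for each predicted frame.
    pred, gt: (T, J, 3) arrays of joint positions."""
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)  # shape (T,)

def mpjpe_up_to(pred, gt, frame_idx):
    """Error averaged over all frames up to and including frame_idx,
    matching the '<= t ms' style of reporting."""
    return mpjpe_per_frame(pred, gt)[: frame_idx + 1].mean()

# Toy example: 25 predicted frames, 22 joints.
rng = np.random.default_rng(0)
gt = rng.standard_normal((25, 22, 3))
pred = gt + 0.1  # constant offset -> constant per-frame error
err = mpjpe_up_to(pred, gt, 9)
print(round(float(err), 4))  # 0.1732, i.e. sqrt(3 * 0.1**2)
```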

| H3.6M-10/25/35-256 (ms) | <=80 | <=160 | <=320 | <=400 | <=560 | <=1000 |
| --- | --- | --- | --- | --- | --- | --- |
| walking | 9.54 | 15.36 | 24.89 | 28.89 | 35.24 | 44.99 |
| eating | 5.88 | 9.94 | 17.76 | 21.48 | 28.58 | 44.71 |
| smoking | 6.39 | 10.66 | 18.78 | 22.58 | 29.43 | 44.23 |
| discussion | 8.81 | 15.55 | 29.81 | 36.66 | 49.06 | 74.06 |
| directions | 6.68 | 12.2 | 24.78 | 31.05 | 42.2 | 65.19 |
| greeting | 11.35 | 19.83 | 37.69 | 46.1 | 60.98 | 89.2 |
| phoning | 7.56 | 12.69 | 22.91 | 27.92 | 37.57 | 60.16 |
| posing | 8.77 | 16.11 | 32.94 | 41.69 | 58.66 | 99.05 |
| purchases | 10.96 | 19.39 | 36.22 | 43.9 | 57.6 | 85.08 |
| sitting | 7.96 | 13.47 | 25.34 | 31.2 | 42.38 | 67.88 |
| sittingdown | 13.2 | 21.52 | 37.02 | 44.3 | 58.25 | 89.99 |
| takingphoto | 7.18 | 12.45 | 23.81 | 29.5 | 40.95 | 68.61 |
| waiting | 7.63 | 13.14 | 25.19 | 31.07 | 41.76 | 64.19 |
| walkingdog | 14.97 | 25.66 | 44.8 | 52.61 | 66.25 | 93.61 |
| walkingtogether | 8.04 | 13.5 | 23.17 | 27.39 | 34.66 | 47.19 |
| average | 8.99 | 15.43 | 28.34 | 34.42 | 45.57 | 69.21 |

| CMU-10/25/35 (ms) | 80 | 160 | 320 | 400 | 560 | 1000 | all |
| --- | --- | --- | --- | --- | --- | --- | --- |
| basketball | 10.24 | 18.64 | 36.94 | 45.96 | 61.12 | 86.24 | - |
| basketball_signal | 3.04 | 5.62 | 12.49 | 16.60 | 25.43 | 49.99 | - |
| directing_traffic | 6.13 | 12.60 | 29.37 | 39.22 | 60.46 | 114.56 | - |
| jumping | 15.19 | 28.85 | 55.97 | 69.11 | 92.38 | 126.16 | - |
| running | 13.17 | 20.91 | 29.88 | 33.37 | 38.26 | 43.62 | - |
| soccer | 10.92 | 19.40 | 37.41 | 47.00 | 65.25 | 101.85 | - |
| walking | 6.38 | 10.25 | 16.88 | 20.05 | 25.48 | 36.78 | - |
| washwindow | 5.41 | 10.93 | 24.51 | 31.79 | 45.13 | 70.16 | - |
| Average | 8.81 | 15.90 | 30.43 | 37.89 | 51.69 | 78.67 | 37.23 |
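The "all" entry in each Average row is consistent with the mean over the six reported horizons; a quick arithmetic check (plain arithmetic, not repo code):

```python
# Average-row values from the H3.6M and CMU tables above.
h36m_avg = [12.11, 25.56, 51.64, 62.93, 81.13, 114.18]
cmu_avg = [8.81, 15.90, 30.43, 37.89, 51.69, 78.67]

h36m_all = sum(h36m_avg) / len(h36m_avg)  # ~57.93
cmu_all = sum(cmu_avg) / len(cmu_avg)     # ~37.23
print(h36m_all, cmu_all)
```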

Citation

If you use our code, please cite our work:

@InProceedings{Dang_2021_ICCV,
    author    = {Dang, Lingwei and Nie, Yongwei and Long, Chengjiang and Zhang, Qing and Li, Guiqing},
    title     = {MSR-GCN: Multi-Scale Residual Graph Convolution Networks for Human Motion Prediction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11467-11476}
}

Acknowledgments

Some of our evaluation and data-processing code was adapted from LearnTrajDep by Wei Mao.

Licence

MIT