Deep Dual Consecutive Network for Human Pose Estimation (CVPR2021)

Introduction

This is the official code of Deep Dual Consecutive Network for Human Pose Estimation.

Multi-frame human pose estimation in complicated situations is challenging. Although state-of-the-art human joint detectors have demonstrated remarkable results on static images, their performance falls short when these models are applied to video sequences. Prevalent shortcomings include the failure to handle motion blur, video defocus, or pose occlusions, arising from the inability to capture the temporal dependency among video frames. On the other hand, directly employing conventional recurrent neural networks incurs empirical difficulties in modeling spatial contexts, especially for dealing with pose occlusions.

In this paper, we propose a novel multi-frame human pose estimation framework that leverages abundant temporal cues between video frames to facilitate keypoint detection. Three modular components are designed in our framework: a Pose Temporal Merger encodes keypoint spatiotemporal context to generate effective search scopes, a Pose Residual Fusion module computes weighted pose residuals in dual directions, and these are then processed by our Pose Correction Network for efficient refinement of the pose estimates.

Our method ranks No.1 in the Multi-frame Person Pose Estimation Challenge on the large-scale benchmark datasets PoseTrack2017 and PoseTrack2018. We have released our code, hoping to inspire future research.
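To make the three-module pipeline concrete, here is a minimal, illustrative sketch of the data flow described above. This is **not** the authors' implementation: the function names, the simple averaging in the merger, the fixed fusion weights, and the correction step are all hypothetical stand-ins; in the actual network these operations are learned. It only shows how heatmaps from the previous, current, and next frames could be combined via dual-direction residuals.

```python
import numpy as np

def pose_temporal_merger(h_prev, h_cur, h_next):
    """Aggregate spatiotemporal context into a search scope.

    Illustrative only: a plain average stands in for the learned
    encoding used by the real Pose Temporal Merger.
    """
    return (h_prev + h_cur + h_next) / 3.0

def pose_residual_fusion(h_prev, h_cur, h_next, w_fwd=0.5, w_bwd=0.5):
    """Weighted pose residuals in dual (forward and backward) directions.

    The weights w_fwd / w_bwd are fixed here; the real module
    computes them from the inputs.
    """
    res_fwd = h_cur - h_prev   # forward residual: previous frame -> current
    res_bwd = h_cur - h_next   # backward residual: next frame -> current
    return w_fwd * res_fwd + w_bwd * res_bwd

# Toy example: one joint, 4x4 heatmaps, with the joint moving
# diagonally across three consecutive frames.
h_prev = np.zeros((1, 4, 4)); h_prev[0, 1, 1] = 1.0
h_cur  = np.zeros((1, 4, 4)); h_cur[0, 2, 2] = 1.0
h_next = np.zeros((1, 4, 4)); h_next[0, 3, 3] = 1.0

search_scope = pose_temporal_merger(h_prev, h_cur, h_next)
residual = pose_residual_fusion(h_prev, h_cur, h_next)

# Stand-in for the Pose Correction Network: nudge the current
# estimate by the fused residual (the real network is learned).
refined = h_cur + 0.1 * residual
```

In this toy run the fused residual reinforces the current-frame peak at (2, 2) while suppressing the neighboring positions, which is the intuition behind using dual-direction temporal cues for refinement.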

Visual Results

<video id="video" controls="" preload="none"> <source id="mp4" src="docs/DCPoseShow.mp4" type="video/mp4"> </video>

On PoseTrack

<p align='center'> <img src="./docs/gifs/val_1.gif" style="zoom:100%;" /> <img src="./docs/gifs/val_2.gif" style="zoom:100%;" /> </p> <p align='center'> <img src="./docs/gifs/val_3.gif" style="zoom:100%;" /> <img src="./docs/gifs/val_4.gif" style="zoom:100%;" /> </p> <p align='center'> <img src="./docs/gifs/val_5.gif" style="zoom:100%;" /> <img src="./docs/gifs/val_6.gif" style="zoom:100%;" /> </p> <p align='center'> <img src="./docs/gifs/val_7.gif" style="zoom:100%;" /> <img src="./docs/gifs/val_8.gif" style="zoom:100%;" /> </p>

Comparison with SOTA methods

<img src="./docs/gifs/con_1.gif" style="zoom:120%;" /> <img src="./docs/gifs/con_2.gif" style="zoom:144%;" />

Experiments

Results on PoseTrack 2017 validation set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PoseFlow | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5 |
| JointFlow | - | - | - | - | - | - | - | 69.3 |
| FastPose | 80.0 | 80.3 | 69.5 | 59.1 | 71.4 | 67.5 | 59.4 | 70.3 |
| SimpleBaseline (2018 ECCV) | 81.7 | 83.4 | 80.0 | 72.4 | 75.3 | 74.8 | 67.1 | 76.7 |
| STEmbedding | 83.8 | 81.6 | 77.1 | 70.0 | 77.4 | 74.5 | 70.8 | 77.0 |
| HRNet (2019 CVPR) | 82.1 | 83.6 | 80.4 | 73.3 | 75.5 | 75.3 | 68.5 | 77.3 |
| MDPN | 85.2 | 88.8 | 83.9 | 77.5 | 79.0 | 77.0 | 71.4 | 80.7 |
| PoseWarper (2019 NeurIPS) | 81.4 | 88.3 | 83.9 | 78.0 | 82.4 | 80.5 | 73.6 | 81.2 |
| **DCPose** | **88.0** | **88.7** | **84.1** | **78.4** | **83.0** | **81.4** | **74.2** | **82.8** |

Results on PoseTrack 2017 test set ([leaderboard](https://posetrack.net/leaderboard.php))

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PoseFlow | 64.9 | 67.5 | 65.0 | 59.0 | 62.5 | 62.8 | 57.9 | 63.0 |
| JointFlow | - | - | - | 53.1 | - | - | 50.4 | 63.4 |
| KeyTrack | - | - | - | 71.9 | - | - | 65.0 | 74.0 |
| DetTrack | - | - | - | 69.8 | - | - | 65.9 | 74.1 |
| SimpleBaseline | 80.1 | 80.2 | 76.9 | 71.5 | 72.5 | 72.4 | 65.7 | 74.6 |
| HRNet | 80.0 | 80.2 | 76.9 | 72.0 | 73.4 | 72.5 | 67.0 | 74.9 |
| PoseWarper | 79.5 | 84.3 | 80.1 | 75.8 | 77.6 | 76.8 | 70.8 | 77.9 |
| **DCPose** | **84.3** | **84.9** | **80.5** | **76.1** | **77.9** | **77.1** | **71.2** | **79.2** |

Results on PoseTrack 2018 validation set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AlphaPose | 63.9 | 78.7 | 77.4 | 71.0 | 73.7 | 73.0 | 69.7 | 71.9 |
| MDPN | 75.4 | 81.2 | 79.0 | 74.1 | 72.4 | 73.0 | 69.9 | 75.0 |
| PoseWarper | 79.9 | 86.3 | 82.4 | 77.5 | 79.8 | 78.8 | 73.2 | 79.7 |
| **DCPose** | **84.0** | **86.6** | **82.7** | **78.0** | **80.4** | **79.3** | **73.8** | **80.9** |

Results on PoseTrack 2018 test set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AlphaPose++ | - | - | - | 66.2 | - | - | 65.0 | 67.6 |
| DetTrack | - | - | - | 69.8 | - | - | 67.1 | 73.5 |
| MDPN | - | - | - | 74.5 | - | - | 69.0 | 76.4 |
| PoseWarper | 78.9 | 84.4 | 80.9 | 76.8 | 75.6 | 77.5 | 71.8 | 78.0 |
| **DCPose** | **82.8** | **84.0** | **80.8** | **77.2** | **76.1** | **77.6** | **72.3** | **79.0** |

Installation & Quick Start

See docs/installation.md for instructions on how to build DCPose from source.

Citation

@InProceedings{Liu_2021_CVPR,
    author    = {Liu, Zhenguang and Chen, Haoming and Feng, Runyang and Wu, Shuang and Ji, Shouling and Yang, Bailin and Wang, Xun},
    title     = {Deep Dual Consecutive Network for Human Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {525-534}
}