SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos
ProjectPage | Paper | Video | Dataset (Part01|Part02)
Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu.<br>
This repository contains the official implementation for the paper: SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos (IJCV 2021). Our work is capable of simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.<br>
<p float="left"> <img src="./README/teaser.png" width="800" /> </p>Abstract
Markerless motion capture and understanding of professional non-daily human movements is an important yet unsolved task, which suffers from complex motion patterns and severe self-occlusion, especially for the monocular setting. In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input. Our approach utilizes the semantic and temporally structured sub-motion prior in the embedding space for motion capture and understanding in a data-driven multi-task manner. Comprehensive experiments on both public and our proposed datasets show that with a challenging monocular sports video input, our novel approach not only significantly improves the accuracy of 3D human motion capture, but also recovers accurate fine-grained semantic action attributes.
Licenses
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
All material is made available under Creative Commons BY-NC-SA 4.0 license. You can use, redistribute, and adapt the material for non-commercial purposes, as long as you give appropriate credit by citing our paper and indicating any changes that you've made.
The SMART Dataset
SportsCap introduces a challenging sports dataset, the Sports Motion and Recognition Tasks (SMART) dataset, which contains per-frame action labels, manually annotated poses, and action assessments from professional referees for a variety of challenging sports video clips.
<p float="left"> <img src="./README/dataset.gif" width="800" /> </p>Download
You can download the SMART dataset (17 GB, version 1.0) from Google Drive [SMART_part01 | SMART_part02]. The SMART dataset includes source images (>60,000), annotations (>45,000, covering both pose and action), sports motion embedding spaces, videos (coming soon), and tools.
Annotation
Load the following JSON files in Python to parse the annotations, which include 2D pose keypoints and fine-grained action labels (see the loading sketch after the file list).
Table_VideoInfo_diving.json
Table_VideoInfo_gym.json
Table_VideoInfo_polevalut_highjump_badminton.json
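The exact JSON schema is documented with the dataset; the snippet below is only a minimal loading sketch using the standard library. The field names (`frames`, `keypoints`, `action_label`) are assumptions for illustration and may differ from the released schema.

```python
import json

# Load one of the released annotation tables (path is an example).
with open('Table_VideoInfo_diving.json', 'r') as f:
    video_info = json.load(f)

# Field names below are hypothetical; check the dataset documentation
# for the actual structure of each entry.
for clip in video_info:
    for frame in clip.get('frames', []):
        keypoints = frame.get('keypoints')    # 2D pose keypoints
        action = frame.get('action_label')    # fine-grained sub-motion label
        # ... downstream processing
```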
Tools
The tools folder includes several functions for loading the annotations and computing pose-related variables. More useful scripts are coming soon.
utils.py - json_load, crop_img_skes, cal_body_bbox ...
Sports Motion Embedding Spaces
With the annotated 2D poses and MoCap 3D pose data, we collect the Sports Motion Embedding Spaces (SMES), i.e., 2D/3D pose priors for various sports. SMES provides a strong prior and regularization that ensures the generated pose lies in the corresponding action space.
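The released SMES files encode these per-sport embedding spaces. The sketch below only illustrates the general idea of constraining a pose estimate with a low-dimensional embedding prior, here a PCA-style basis fitted to flattened poses; this is an assumption for illustration, not the actual SMES format or the multi-task method used in the paper.

```python
import numpy as np

def fit_pose_embedding(poses, n_components=10):
    """Fit a simple PCA-style embedding to flattened poses of shape (N, J*2)."""
    mean = poses.mean(axis=0)
    centered = poses - mean
    # Principal directions via SVD; rows of vt are the basis vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project_to_embedding(pose, mean, basis):
    """Project a pose onto the embedding space, acting as a soft prior/regularizer."""
    coeffs = basis @ (pose - mean)
    return mean + basis.T @ coeffs

# Example: regularize a noisy pose estimate with the embedding prior.
rng = np.random.default_rng(0)
train_poses = rng.normal(size=(500, 34))   # 17 joints x 2, synthetic stand-in data
mean, basis = fit_pose_embedding(train_poses)
noisy_pose = train_poses[0] + 0.1 * rng.normal(size=34)
regularized_pose = project_to_embedding(noisy_pose, mean, basis)
```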
<p float="left"> <img src="./README/MES.png" width="800" /> </p>Download
You can download the Sports Motion Embedding Spaces (SMES) (7 MB, version 1.0) separately from Google Drive. The released SMES-V1.0 covers many sports, such as vault, uneven bars, boxing, diving, hurdles, pole vault, and high jump.
Usage
Coming soon.
Citation
If you find our code or paper useful, please consider citing:
@article{chen2021sportscap,
title={SportsCap: Monocular 3D Human Motion Capture and Fine-Grained Understanding in Challenging Sports Videos},
author={Xin Chen and Anqi Pang and Wei Yang and Yuexin Ma and Lan Xu and Jingyi Yu},
journal={International Journal of Computer Vision},
year={2021},
month={Aug},
url={https://doi.org/10.1007/s11263-021-01486-4}
}
Relevant Works
ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References (CVPR Oral 2021)<br> Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu
TightCap: 3D Human Shape Capture with Clothing Tightness Field (TOG 2021)<br> Xin Chen, Anqi Pang, Wei Yang, Peihao Wang, Lan Xu, Jingyi Yu
AutoSweep: Recovering 3D Editable Objects from a Single Photograph (TVCG 2018)<br> Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng
End-to-end Recovery of Human Shape and Pose (CVPR 2018)<br> Angjoo Kanazawa, Michael J. Black, David W. Jacobs, Jitendra Malik