
MotionBERT: A Unified Perspective on Learning Human Motion Representations

<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a> arXiv <a href="https://motionbert.github.io/"><img alt="Project" src="https://img.shields.io/badge/-Project%20Page-lightgrey?logo=Google%20Chrome&color=informational&logoColor=white"></a> <a href="https://youtu.be/slSPQ9hNLjM"><img alt="Demo" src="https://img.shields.io/badge/-Demo-ea3323?logo=youtube"></a> Hugging Face Models

This is the official PyTorch implementation of the paper "MotionBERT: A Unified Perspective on Learning Human Motion Representations" (ICCV 2023).

<img src="https://motionbert.github.io/assets/teaser.gif" alt="" style="zoom: 60%;" />

Installation

```shell
conda create -n motionbert python=3.7 anaconda
conda activate motionbert
# Please install PyTorch according to your CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirements.txt
```

Getting Started

| Task | Document |
| --- | --- |
| Pretrain | docs/pretrain.md |
| 3D human pose estimation | docs/pose3d.md |
| Skeleton-based action recognition | docs/action.md |
| Mesh recovery | docs/mesh.md |

Applications

In-the-wild inference (for custom videos)

Please refer to docs/inference.md.

Using MotionBERT for human-centric video representations

```python
'''
  x: 2D skeletons
    type = <class 'torch.Tensor'>
    shape = [batch size * frames * joints(17) * channels(3)]

  MotionBERT: pretrained human motion encoder
    type = <class 'lib.model.DSTformer.DSTformer'>

  E: encoded motion representation
    type = <class 'torch.Tensor'>
    shape = [batch size * frames * joints(17) * channels(512)]
'''
E = MotionBERT.get_representation(x)
```
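The encoded representation E keeps the per-frame, per-joint structure, so downstream heads typically pool it into a single clip-level feature vector. Below is a minimal sketch using a NumPy array as a stand-in for the encoder output; the shapes match the comment above, but the mean-pooling choice is an illustrative assumption, not the repository's exact downstream head.

```python
import numpy as np

# Stand-in for MotionBERT output E: [batch, frames, joints, channels].
# (Real output is a torch.Tensor; numpy is used here only to show the shapes.)
B, T, J, C = 2, 243, 17, 512
E = np.random.randn(B, T, J, C).astype(np.float32)

# Average-pool over frames and joints to get one feature per clip.
clip_feature = E.mean(axis=(1, 2))

print(clip_feature.shape)  # (2, 512)
```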

Hints

  1. The model can handle variable input lengths (up to 243 frames); there is no need to specify the input length elsewhere.
  2. The model uses 17 body keypoints (H36M format). If your data uses another format, please convert it before feeding it to MotionBERT.
  3. Please refer to model_action.py and model_mesh.py for examples of (easily) adapting MotionBERT to different downstream tasks.
  4. For RGB videos, you need to extract 2D poses (inference.md), convert the keypoint format (dataset_wild.py), and then feed them to MotionBERT (infer_wild.py).
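As an illustration of the keypoint-format conversion mentioned in the hints above, the sketch below remaps the common COCO 17-keypoint ordering to an H36M-style ordering. The function name `coco2h36m_sketch` and the exact joint mapping are illustrative assumptions; the repository's dataset_wild.py contains the authoritative conversion used in the pipeline.

```python
import numpy as np

def coco2h36m_sketch(kpts):
    """Remap COCO-ordered keypoints to H36M order (illustrative sketch).

    kpts: array of shape (..., 17, C) in COCO order.
    Returns an array of shape (..., 17, C) in H36M order.
    """
    h36m = np.zeros_like(kpts)
    h36m[..., 0, :] = (kpts[..., 11, :] + kpts[..., 12, :]) / 2  # pelvis = mid-hips
    h36m[..., 1, :] = kpts[..., 12, :]                           # right hip
    h36m[..., 2, :] = kpts[..., 14, :]                           # right knee
    h36m[..., 3, :] = kpts[..., 16, :]                           # right ankle
    h36m[..., 4, :] = kpts[..., 11, :]                           # left hip
    h36m[..., 5, :] = kpts[..., 13, :]                           # left knee
    h36m[..., 6, :] = kpts[..., 15, :]                           # left ankle
    h36m[..., 8, :] = (kpts[..., 5, :] + kpts[..., 6, :]) / 2    # thorax = mid-shoulders
    h36m[..., 7, :] = (h36m[..., 0, :] + h36m[..., 8, :]) / 2    # spine = pelvis/thorax midpoint
    h36m[..., 9, :] = kpts[..., 0, :]                            # neck/nose
    h36m[..., 10, :] = (kpts[..., 1, :] + kpts[..., 2, :]) / 2   # head = mid-eyes
    h36m[..., 11, :] = kpts[..., 5, :]                           # left shoulder
    h36m[..., 12, :] = kpts[..., 7, :]                           # left elbow
    h36m[..., 13, :] = kpts[..., 9, :]                           # left wrist
    h36m[..., 14, :] = kpts[..., 6, :]                           # right shoulder
    h36m[..., 15, :] = kpts[..., 8, :]                           # right elbow
    h36m[..., 16, :] = kpts[..., 10, :]                          # right wrist
    return h36m
```

The same function works for 2D (C=2) or 2D-plus-confidence (C=3) inputs, since it only reorders and averages along the joint axis.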

Model Zoo

<img src="https://motionbert.github.io/assets/demo.gif" alt="" style="zoom: 50%;" />

| Model | Download Link | Config | Performance |
| --- | --- | --- | --- |
| MotionBERT (162MB) | OneDrive | pretrain/MB_pretrain.yaml | - |
| MotionBERT-Lite (61MB) | OneDrive | pretrain/MB_lite.yaml | - |
| 3D Pose (H36M-SH, scratch) | OneDrive | pose3d/MB_train_h36m.yaml | 39.2mm (MPJPE) |
| 3D Pose (H36M-SH, ft) | OneDrive | pose3d/MB_ft_h36m.yaml | 37.2mm (MPJPE) |
| Action Recognition (x-sub, ft) | OneDrive | action/MB_ft_NTU60_xsub.yaml | 97.2% (Top1 Acc) |
| Action Recognition (x-view, ft) | OneDrive | action/MB_ft_NTU60_xview.yaml | 93.0% (Top1 Acc) |
| Mesh (with 3DPW, ft) | OneDrive | mesh/MB_ft_pw3d.yaml | 88.1mm (MPVE) |

In most use cases (especially with finetuning), MotionBERT-Lite achieves comparable performance with lower computational overhead.

TODO

Citation

If you find our work useful for your project, please consider citing the paper:

@inproceedings{motionbert2022,
  title     =   {MotionBERT: A Unified Perspective on Learning Human Motion Representations}, 
  author    =   {Zhu, Wentao and Ma, Xiaoxuan and Liu, Zhaoyang and Liu, Libin and Wu, Wayne and Wang, Yizhou},
  booktitle =   {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year      =   {2023},
}