# EM-POSE: 3D Human Pose Estimation from Sparse Electromagnetic Trackers
This repository contains the code for our paper published at ICCV 2021. For questions, feel free to open an issue or send an e-mail to manuel.kaufmann@inf.ethz.ch.
## Installation

### Code
This code was tested on Windows 10 with Python 3.7, PyTorch 1.6, and CUDA 10.1. To manage your environment, we recommend Anaconda or Miniconda.
```bash
git clone https://github.com/facebookresearch/em-pose.git
cd em-pose
conda create -n empose python=3.7
conda activate empose
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
python setup.py develop
```
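After installation, a quick sanity check (a minimal snippet, not part of the repository) confirms that the PyTorch build the code was tested with is active:

```python
import torch

# Verify that the expected PyTorch build is active.
print(torch.__version__)          # expected: 1.6.0+cu101
print(torch.version.cuda)         # expected: 10.1
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```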
The code is structured following the recommended Python package layout, i.e., the actual source code is located under `empose` and grouped into sub-packages. Scripts that use the code (e.g. for evaluation and training) are located under `scripts`. More details on how to run the evaluation and training scripts follow below. Before you can run these scripts, you need to download some additional data and define a few environment variables, as outlined next.
### SMPL Model
This code uses the neutral SMPL-H model, which is also used by AMASS. To download the model, head over to the official MANO website and download the Extended SMPL+H model on the download page. Copy the contents of this model into a folder of your choice and set the environment variable `$SMPL_MODELS` accordingly. The code expects the neutral SMPL model to be located under `$SMPL_MODELS/smplh_amass/neutral/model.npz`.
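A short check (a sketch, not part of the repository) that the environment variable and model file are in place:

```python
import os

# $SMPL_MODELS must point at the folder holding the Extended SMPL+H model.
model_path = os.path.join(
    os.environ["SMPL_MODELS"], "smplh_amass", "neutral", "model.npz"
)
assert os.path.isfile(model_path), f"SMPL model not found at {model_path}"
print("Found SMPL model at", model_path)
```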
### EM-POSE Dataset

You can download our dataset from here (roughly 100 MB). Unzip the content into a directory of your choice and set the environment variable `$EM_DATA_REAL` to this directory. The expected directory structure is:

```
$EM_DATA_REAL
|- 0402_arms_fast_M_clean.npz
|- 0402_arms_M_clean.npz
|- ...
|- 0402_offsets.npz
|- hold_out
   |- ...
   |- 0715_walking_M_clean.npz
```
The initial 4 digits identify the participant. There are 5 participants with IDs `0402`, `0526`, `0612`, `0714`, and `0715`. Subject `0715` is the hold-out subject.
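The per-sequence files are standard NumPy archives, so you can inspect them directly. The exact keys stored in each archive are not documented here, so this sketch (not part of the repository) simply lists them:

```python
import os
import numpy as np

# Open one of the cleaned sequences and list the arrays it contains.
path = os.path.join(os.environ["EM_DATA_REAL"], "0402_walking_M_clean.npz")
with np.load(path, allow_pickle=True) as data:
    for key in data.files:
        print(key, data[key].shape)
```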
### Pre-Trained Models

You can download the pre-trained models from here (roughly 500 MB). Unzip the content into a directory of your choice and set the environment variable `$EM_EXPERIMENTS` to this directory. The expected directory structure is:

```
$EM_EXPERIMENTS
|- 1614785570-IEF-2x512-N4-r0.01-ws32-lr0.001-grad-n12-pos-ori
   |- logs
   |- cmd.txt
   |- config.json
   |- model.pth
|- ...
|- LICENSE.txt
```
The first 10 digits of each folder name identify the model; the rest is an auto-generated summary string. Each model folder contains TensorBoard log files (`logs`), a configuration file `config.json` specifying the parameter choices, a `cmd.txt` file containing the actual command used to train the model, and the model's weights `model.pth`.
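Because the leading 10 digits uniquely identify a model, you can resolve a model ID to its folder and read back its training configuration. A minimal sketch (the `find_model_dir` helper is hypothetical, not part of the repository):

```python
import glob
import json
import os

def find_model_dir(model_id):
    """Resolve a 10-digit model ID to its folder under $EM_EXPERIMENTS."""
    pattern = os.path.join(os.environ["EM_EXPERIMENTS"], model_id + "-*")
    matches = glob.glob(pattern)
    assert len(matches) == 1, f"Expected exactly one match, got {matches}"
    return matches[0]

model_dir = find_model_dir("1614785570")
with open(os.path.join(model_dir, "config.json")) as f:
    config = json.load(f)  # the parameter choices this model was trained with
with open(os.path.join(model_dir, "cmd.txt")) as f:
    print(f.read())        # the exact command used to train the model
```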
We provide the following pre-trained models (named according to the naming convention in our paper).
| Model Name | N Sensors | Model ID   |
|------------|-----------|------------|
| ResNet     | 6         | 1614876822 |
| ResNet     | 12        | 1614876778 |
| BiRNN      | 6         | 1614861176 |
| BiRNN      | 12        | 1614855569 |
| LGD        | 6         | 1615631965 |
| LGD        | 12        | 1614785570 |
| LGD RNN    | 6         | 1615631737 |
| LGD RNN    | 12        | 1615200973 |
Here, the `LGD` entry refers to the model that does not use an RNN, which we refer to in the ablation studies as *Ours no RNN* (Tables 3 and 7).
### AMASS and 3DPW (Optional)

You do not need to download AMASS or 3DPW for the evaluation code to work; you only need them if you want to train a model from scratch. To download the datasets, please visit the official AMASS website and the official 3DPW website. Set the environment variable `$EM_DATA_SYNTH` to a directory of your choice and extract both AMASS and 3DPW into this directory, such that the structure looks like this:
```
$EM_DATA_SYNTH
|- amass
   |- ACCAD
   |- BioMotionLab_NTroje
   |- ...
|- 3dpw
   |- test
   |- train
   |- ...
```
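A quick check (a sketch, not part of the repository) that both datasets are where the preprocessing script expects them:

```python
import os

# Both datasets must live directly under $EM_DATA_SYNTH.
root = os.environ["EM_DATA_SYNTH"]
for name in ("amass", "3dpw"):
    path = os.path.join(root, name)
    assert os.path.isdir(path), f"Missing directory: {path}"
    print(path, "->", sorted(os.listdir(path))[:5])
```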
## Evaluation

To evaluate a pre-trained model on the EM-POSE dataset, run

```bash
python scripts/evaluate_real.py --model_id <MODEL_ID>
```
For example, to evaluate the model `LGD RNN 6`, execute
```bash
python scripts/evaluate_real.py --model_id 1615631737
```
<details>
<summary>You should see the following output, where the last row "Overall average" corresponds to the numbers reported in Table 2 of the paper (click to expand):</summary>
```
Evaluate 0402_arms_M (3460 frames)
Evaluate 0402_arms_fast_M (1937 frames)
Evaluate 0402_calibration_M (688 frames)
Evaluate 0402_head_and_shoulders_M (2213 frames)
Evaluate 0402_jumping_jacks_M (490 frames)
Evaluate 0402_lower_body_M (1630 frames)
Evaluate 0402_lunges_flooring_M (801 frames)
Evaluate 0402_sitting_M (1945 frames)
Evaluate 0402_walking_M (1875 frames)
Evaluate 0526_arm_M (2916 frames)
Evaluate 0526_arms_fast_M (1311 frames)
Evaluate 0526_calibration_M (745 frames)
Evaluate 0526_head_and_shoulders_M (1796 frames)
Evaluate 0526_jumping_jacks_M (246 frames)
Evaluate 0526_lower_body_M (1423 frames)
Evaluate 0526_lunges_flooring_M (1331 frames)
Evaluate 0526_sitting_M (1647 frames)
Evaluate 0526_walking_M (1569 frames)
Evaluate 0612_arm_M (2931 frames)
Evaluate 0612_calibration_M (596 frames)
Evaluate 0612_fast_arms_M (1421 frames)
Evaluate 0612_head_and_shoulders_M (1846 frames)
Evaluate 0612_jumping_jacks_M (296 frames)
Evaluate 0612_lower_body_M (1191 frames)
Evaluate 0612_lunges_flooring_M (560 frames)
Evaluate 0612_sitting_M (1736 frames)
Evaluate 0612_walking_M (1677 frames)
Evaluate 0714_arms_M (2458 frames)
Evaluate 0714_calibration_M (779 frames)
Evaluate 0714_fast_arm_motions_M (1269 frames)
Evaluate 0714_head_and_shoulders_M (2002 frames)
Evaluate 0714_jumping_jacks_M (504 frames)
Evaluate 0714_lower_body_M (1600 frames)
Evaluate 0714_lunges_M (1191 frames)
Evaluate 0714_sitting_M (2303 frames)
Evaluate 0714_walking_M (1647 frames)
Nr E2E 1615631737 MPJPE [mm] MPJPE STD PA-MPJPE [mm] PA-MPJPE STD MPJAE [deg] MPJAE STD
---- ------------------------- ------------ ----------- --------------- -------------- ------------- -----------
0 0402_arms_M 34.1515 12.9531 27.1362 13.3624 11.2861 9.11052
1 0402_arms_fast_M 32.6637 15.8568 25.4223 15.8446 13.5105 10.6675
2 0402_calibration_M 28.2019 13.2145 21.3586 12.5979 13.1766 7.29259
3 0402_head_and_shoulders_M 36.1447 20.6649 27.6954 18.3009 17.4024 14.751
4 0402_jumping_jacks_M 35.1761 17.6392 27.0485 17.5012 14.9125 9.27871
5 0402_lower_body_M 35.3844 21.7737 28.1301 18.168 17.3212 11.1513
6 0402_lunges_flooring_M 40.9523 23.9404 33.5014 18.1408 19.0084 10.1219
7 0402_sitting_M 43.9436 26.3084 34.5394 21.3012 20.1697 10.1855
8 0402_walking_M 37.9355 17.2524 30.4159 15.7237 15.9998 9.3356
9 0526_arm_M 32.6111 15.6912 21.1146 10.5106 14.831 8.84095
10 0526_arms_fast_M 29.9991 17.2221 25.7273 13.9126 14.7557 9.98722
11 0526_calibration_M 44.1316 24.7647 23.0089 11.2754 11.6874 7.78501
12 0526_head_and_shoulders_M 29.077 15.8041 24.6989 14.1406 17.2931 13.3213
13 0526_jumping_jacks_M 32.1448 17.0295 28.0474 14.2634 16.2187 10.1283
14 0526_lower_body_M 33.2582 17.4339 29.2794 15.6397 16.6682 10.0953
15 0526_lunges_flooring_M 30.4026 20.4099 26.0717 15.6618 15.6433 9.57199
16 0526_sitting_M 44.2597 23.8505 35.3967 20.0461 22.3295 10.2495
17 0526_walking_M 28.1687 13.4035 24.1528 11.8434 13.7334 9.35647
18 0612_arm_M 34.5648 16.4221 24.6122 12.3456 15.4423 9.71233
19 0612_calibration_M 31.4594 15.1428 17.7726 10.2918 11.1476 4.54895
20 0612_fast_arms_M 36.9873 17.6057 26.7172 12.9997 16.1645 9.38228
21 0612_head_and_shoulders_M 35.7075 28.6527 24.0289 16.3369 14.6821 12.2912
22 0612_jumping_jacks_M 33.062 13.2578 24.0273 11.8396 12.8989 6.26127
23 0612_lower_body_M 38.8017 21.3723 24.4782 16.6152 14.6056 7.12997
24 0612_lunges_flooring_M 39.3154 23.4627 29.022 17.348 17.9581 8.89206
25 0612_sitting_M 47.4052 26.3768 34.6996 21.4821 21.9805 9.60608
26 0612_walking_M 36.9421 15.6464 23.873 10.8462 13.5021 5.9441
27 0714_arms_M 29.6988 13.1959 21.5761 11.7041 11.26 7.99208
28 0714_calibration_M 24.3456 12.865 18.3521 10.8439 8.89089 5.02336
29 0714_fast_arm_motions_M 30.6774 15.2256 24.3493 12.9823 11.5618 8.05985
30 0714_head_and_shoulders_M 31.166 26.1263 23.5051 17.4878 13.0188 9.86883
31 0714_jumping_jacks_M 29.7056 17.3684 22.1885 11.7989 11.0778 5.69278
32 0714_lower_body_M 33.5118 18.638 27.5352 16.4741 12.8513 7.78277
33 0714_lunges_M 40.6014 22.7803 29.609 17.541 13.2479 6.59292
34 0714_sitting_M 50.9458 38.519 38.8948 19.855 15.987 7.80323
35 0714_walking_M 29.2543 13.2963 24.1425 12.1057 10.4497 5.38686
36 Overall average 35.435 21.3132 26.9621 16.2607 14.8941 9.9549
```
</details>
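If you want to reproduce the results for every pre-trained model in one go, you can loop over the IDs from the table above. A minimal sketch using subprocess (this helper script is not part of the repository):

```python
import subprocess

# Model IDs as listed in the "Pre-Trained Models" section.
MODEL_IDS = [
    "1614876822", "1614876778",  # ResNet 6 / 12
    "1614861176", "1614855569",  # BiRNN 6 / 12
    "1615631965", "1614785570",  # LGD 6 / 12
    "1615631737", "1615200973",  # LGD RNN 6 / 12
]

for model_id in MODEL_IDS:
    subprocess.run(
        ["python", "scripts/evaluate_real.py", "--model_id", model_id],
        check=True,  # abort if any evaluation fails
    )
```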
You can replace the model ID with any of the available pre-trained model IDs listed above. Note that this command evaluates the model on participants 1-4 (i.e. `0402`, `0526`, `0612`, and `0714`). To evaluate on the hold-out participant `0715`, run
```bash
python scripts/evaluate_real.py --model_id 1615631737 --cross_subject
```
This should re-create the entries of Table 4 in the paper. Please note, however, that we have spotted an error in the pre-processing of the data of participant `0715`. The error is fixed in the released version of the data. Running our models on the fixed data produces the following results (these are also the results that you should get with the above command):
| Model             | MPJPE [mm]    | PA-MPJPE [mm] | MPJAE [deg]  |
|-------------------|---------------|---------------|--------------|
| BiRNN 6           | 37.2 +/- 26.7 | 33.8 +/- 19.2 | 15.0 +/- 7.8 |
| Ours (LGD RNN) 6  | 32.0 +/- 25.0 | 29.5 +/- 17.7 | 13.6 +/- 7.3 |
| BiRNN 12          | 45.9 +/- 34.3 | 40.2 +/- 22.7 | 15.1 +/- 8.0 |
| Ours (LGD RNN) 12 | 31.2 +/- 25.7 | 24.5 +/- 18.0 | 12.3 +/- 7.2 |
We will release an erratum to update the paper soon.
## Training

To train our models from scratch, you must first preprocess AMASS and 3DPW. We use AMASS as the training dataset and 3DPW for validation. The preprocessing script resamples all sequences to 60 fps and stores them in LMDB format. To run it, execute

```bash
python scripts/preprocess_amass_3dpw.py
```
This script assumes the data has been downloaded and stored following the structure outlined in the installation section above. When the script has completed successfully, you should see two new directories, `$EM_DATA_SYNTH/amass_lmdb` and `$EM_DATA_SYNTH/3dpw_lmdb`.
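You can verify the preprocessed output with the `lmdb` package (a minimal sketch, not part of the repository; the record layout inside the databases is not documented here):

```python
import os
import lmdb

# Open each preprocessed database read-only and report its number of entries.
for name in ("amass_lmdb", "3dpw_lmdb"):
    path = os.path.join(os.environ["EM_DATA_SYNTH"], name)
    env = lmdb.open(path, readonly=True, lock=False)
    with env.begin() as txn:
        print(name, "entries:", txn.stat()["entries"])
    env.close()
```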
If that is the case, you can run a training from scratch using the script `scripts/train.py`. This script accepts all configuration parameters that are listed in the class `Configuration` in the file `configuration.py`. Logs and model weights will be stored in a new directory under `$EM_EXPERIMENTS`.
To re-train one of our models, use the configuration parameters as specified in the `cmd.txt` file stored in the respective model directory. For example, to re-train `LGD RNN 6`, use

```bash
python train.py --bs_train 12 --bs_eval 12 --m_type ief --m_hidden_size 512 --m_num_layers 2 --m_num_iterations 2 --window_size 32 --use_marker_pos --use_marker_ori --use_real_offsets --offset_noise_level 0 --m_average_shape --m_use_gradient --eval_every 700 --n_epochs 50 --m_reprojection_loss_weight 0.01 --eval_window_size 256 --m_rnn_init --m_rnn_hidden_size 512 --lr 0.0005 --n_markers 6 --m_pose_loss_weight 10.0 --m_fk_loss 0.1
```
<details>
<summary>You should see something like the following output (click to expand):</summary>
```
Model created with 5721419 trainable parameters
Saving checkpoints to $EM_EXPERIMENTS\1633700413-IEF-2x512-N2-RNN-2x512-r0.01-ws32-lr0.0005-grad-n6-pos-ori\model.pth
[TRAIN 00001 | 001] pose: 0.200803 shape: 1.002613 reconstruction: 8.376869 fk: 4.107896 total_loss: 3.505206 elapsed: 0.700 secs
[VALID 00001 | 001] pose: 0.195422 shape: 0.571689 reconstruction: 8.942121 fk: 4.252811 total_loss: 3.040612 elapsed: 3.117 secs
[TEST 00001 | 001] pose: 0.250684 shape: 1.718055 reconstruction: 6.721277 fk: 3.119133 total_loss: 4.604024 elapsed: 73.173 secs ***
Model MPJPE [mm] MPJPE STD PA-MPJPE [mm] PA-MPJPE STD MPJAE [deg] MPJAE STD
---------------- ------------ ----------- --------------- -------------- ------------- -----------
1633700413 VALID 188.592 214.838 153.96 108.071 43.7971 34.7947
Model MPJPE [mm] MPJPE STD PA-MPJPE [mm] PA-MPJPE STD MPJAE [deg] MPJAE STD
--------------- ------------ ----------- --------------- -------------- ------------- -----------
1633700413 TEST 160.977 201.684 127.444 111.748 35.7287 33.0245
...
```
</details>
## Visualization
We are working to publish visualization code in a separate repository - stay tuned!
## License
Copyright (c) Facebook, Inc. and its affiliates, ETH Zurich, Manuel Kaufmann.
EM-POSE is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You should have received a copy of the license along with this work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/.
## Citation
If you use code or data from this repository please consider citing:
```bibtex
@inProceedings{kaufmann2021empose,
  title={EM-POSE: 3D Human Pose Estimation from Sparse Electromagnetic Trackers},
  author={Kaufmann, Manuel and Zhao, Yi and Tang, Chengcheng and Tao, Lingling and Twigg, Christopher and Song, Jie and Wang, Robert and Hilliges, Otmar},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  month={Oct},
  year={2021}
}
```