Full-Body Awareness from Partial Observations (ECCV 2020)
Chris Rockwell and David F. Fouhey
[Project Website] [Paper] [Supplemental]
Fig. 1: We present a simple but highly effective framework for adapting human pose estimation methods to highly truncated settings that requires no additional pose annotation. We evaluate the approach on HMR and CMR by annotating four Internet video test sets: VLOG (top-left, top-middle), Cross-Task (top-right, bottom-left), YouCookII (bottom-middle), and Instructions (bottom-right).
Model Installation, Demo, Evaluation & Custom Image Setup
Annotated Test Set Setup
To get started, first download our annotated frames from the four datasets we use. We do not hold the copyright to these videos, but for ease of replication, we are making our local copies of the data available for non-commercial research purposes only. Click here to download our copies of VLOG, Cross-Task, and Instructions. For YouCookII, please fill out this Google Form so we can share the download.
Place them correspondingly into data/vlog, data/cross_task, data/instructions, and data/youcook, and extract. The data folder can be created at any location.
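For example, a minimal sketch of creating the expected folder structure before extracting (the DATA_ROOT location here is hypothetical; place it wherever you like):

```python
import os

# Create the four dataset subfolders described above; the parent
# "data" folder can live at any location you prefer.
DATA_ROOT = "data"  # hypothetical location; adjust as needed
for name in ["vlog", "cross_task", "instructions", "youcook"]:
    os.makedirs(os.path.join(DATA_ROOT, name), exist_ok=True)
```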
Annotated Test Set Details
For each dataset, three sets will be extracted:
- all: all images annotated with keypoints
- uncropped_keypoint: a subset of all where the head is visible, so PCK can be evaluated
- cropped_keypoint: the same subset of images as uncropped_keypoint, cropped to have similar visibility statistics to all
Within each set (all, uncropped_keypoint, cropped_keypoint), a text file images.txt defines the list of image names, which exist in the images subfolder. The images have also been extracted to TFRecords (for use with HMR) in the tf_records folder. keypoints.pkl contains a mapping from each line of images.txt to its annotations. More details are available in the detailed comment in utils/calculate_pck.py.
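For reference, a minimal sketch of loading one set; the exact structure of the entries in keypoints.pkl is documented in utils/calculate_pck.py, so the handling of annotations here is only an assumption:

```python
import os
import pickle

# Hypothetical example set; any of all / uncropped_keypoint /
# cropped_keypoint under any of the four dataset folders works the same.
set_dir = "data/vlog/uncropped_keypoint"

# images.txt: one image name per line, living in the images/ subfolder.
with open(os.path.join(set_dir, "images.txt")) as f:
    image_names = [line.strip() for line in f]

# keypoints.pkl: maps each line of images.txt to its annotations
# (see the detailed comment in utils/calculate_pck.py for the format).
with open(os.path.join(set_dir, "keypoints.pkl"), "rb") as f:
    keypoints = pickle.load(f)

image_paths = [os.path.join(set_dir, "images", name) for name in image_names]
```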
Evaluation
For PCK calculation, the HMR and CMR models call utils/calculate_pck.py. This code can also be used with arbitrary models; the function has a detailed comment describing the proper inputs. More details on evaluation, cropping, and dataset statistics are available in the Supplemental. Briefly, keypoint accuracy is calculated as the average per-image keypoint accuracy. Accuracy is evaluated on the cropped_keypoint and uncropped_keypoint sets; human judgments evaluate predictions on the all set.
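To make the metric concrete, here is an illustrative sketch of averaging per-image keypoint accuracy. This is not the repository's implementation: the input layout and the pixel-threshold convention below are assumptions, and utils/calculate_pck.py defines the actual inputs.

```python
import numpy as np

def mean_per_image_pck(pred, gt, visible, thresh):
    """Average per-image keypoint accuracy, as described above.

    pred, gt: (num_images, num_keypoints, 2) pixel coordinates.
    visible:  (num_images, num_keypoints) boolean annotation mask.
    thresh:   distance threshold in pixels (convention assumed here;
              the repository's own convention is in utils/calculate_pck.py).
    """
    dists = np.linalg.norm(pred - gt, axis=-1)       # (N, K) distances
    correct = (dists <= thresh) & visible            # only score annotated joints
    num_vis = visible.sum(axis=1)
    per_image = correct.sum(axis=1) / np.maximum(num_vis, 1)
    return per_image[num_vis > 0].mean()             # average over images
```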
Citation
If you use this code for your research, please consider citing:
@inProceedings{Rockwell2020,
  author = {Chris Rockwell and David F. Fouhey},
  title = {Full-Body Awareness from Partial Observations},
  booktitle = {ECCV},
  year = 2020
}
Special Thanks
Special thanks to Dimitri Zhukov, Jean-Baptiste Alayrac, and Luowei Zhou for allowing us to privately share frames from their respective datasets: Cross-Task, Instructions, and YouCookII. Thanks to Angjoo Kanazawa and Nikos Kolotouros for their polished model repositories, which made it easy to extend their respective HMR and CMR models.