Learning Joint Reconstruction of Hands and Manipulated Objects - Demo, Training Code and Models

Yana Hasson, Gül Varol, Dimitris Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, Cordelia Schmid, CVPR 2019

Get the code

git clone https://github.com/hassony2/obman_train
cd obman_train

Download and prepare datasets

Download the ObMan dataset

Your data structure should now look like

obman_train/
  datasymlinks/ShapeNetCore.v2
  datasymlinks/obman
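
If the datasets were downloaded somewhere else, one way to obtain this layout is to symlink them into datasymlinks/. Below is a minimal sketch using Python's pathlib; the source paths are placeholders for wherever your downloads actually live, not paths provided by this repository.

    # Sketch: symlink previously downloaded datasets into datasymlinks/.
    # The target paths below are placeholders; adjust them to your download locations.
    from pathlib import Path

    targets = {
        "datasymlinks/ShapeNetCore.v2": "/path/to/ShapeNetCore.v2",  # placeholder
        "datasymlinks/obman": "/path/to/obman",                      # placeholder
    }

    Path("datasymlinks").mkdir(exist_ok=True)
    for link, target in targets.items():
        link_path = Path(link)
        if not link_path.exists():
            link_path.symlink_to(Path(target).resolve(), target_is_directory=True)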

Download the First-Person Hand Action Benchmark dataset

Download model files

Install python dependencies

Install the MANO PyTorch layer

Download the MANO model files

obman_train/
  misc/
    mano/
      MANO_LEFT.pkl
      MANO_RIGHT.pkl
  release_models/
    fhb/
    obman/
    hands_only/
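
Once the MANO pickles are in misc/mano, a quick sanity check can be run through the PyTorch layer. The sketch below assumes the manopth package installed in the previous step exposes a ManoLayer class with these argument names; check your installed version if they differ.

    # Hedged sanity check: load the MANO files through the PyTorch layer.
    # Assumes the manopth ManoLayer API; argument names may differ across versions.
    import torch
    from manopth.manolayer import ManoLayer

    mano_layer = ManoLayer(mano_root="misc/mano", side="right", use_pca=True, ncomps=15)
    pose = torch.zeros(1, 15 + 3)   # PCA pose coefficients + global rotation
    shape = torch.zeros(1, 10)      # MANO shape (beta) parameters
    verts, joints = mano_layer(pose, shape)
    print(verts.shape, joints.shape)  # expected: (1, 778, 3) and (1, 21, 3)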

Launch

Demo

We provide a model trained on the synthetic ObMan dataset.

Single image demo

python image_demo.py --resume release_models/obman/checkpoint.pth.tar

In this demo, both the original image and a horizontally flipped copy are fed to the network, so the outputs are shown side by side: one treating the input as a right hand and one treating it as a left hand.
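
Concretely, the mirroring amounts to flipping the image along its horizontal axis and running the network once per orientation. The sketch below is purely illustrative; the model call is left as a commented-out, hypothetical placeholder rather than the actual forward pass of image_demo.py.

    # Illustration of the double (original + mirrored) pass described above.
    # `model` is a hypothetical placeholder, not the repository's forward pass.
    import numpy as np
    from PIL import Image

    image = np.array(Image.open("readme_assets/images/can_in.png").convert("RGB"))
    mirrored = np.ascontiguousarray(image[:, ::-1])  # horizontal flip

    # outputs_as_right_hand = model(image)     # original input treated as a right hand
    # outputs_as_left_hand = model(mirrored)   # mirrored input treated as a left hand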

Running the demo should produce the following outputs.

<img src="readme_assets/images/can_in.png" width="20%"> <img src="readme_assets/images/can_output.png" width="40%">

You can also run this demo on data from the First-Person Hand Action Benchmark:

python image_demo.py --image_path readme_assets/images/fhb_liquid_soap.jpeg --resume release_models/fhb/checkpoint.pth.tar

<img src="readme_assets/images/fhb_liq_soap_in.png" width="20%"> <img src="readme_assets/images/fhb_liq_soap_out.png" width="40%">

Note that the model trained on the First-Person Hand Action Benchmark strongly overfits to this dataset and therefore performs poorly on 'in the wild' images.

Video demo

You can test the model on a recorded video or live using a webcam by launching:

python webcam_demo.py --resume release_models/obman/checkpoint.pth.tar --hand_side left

Hand side detection is not handled in this pipeline; you therefore need to indicate explicitly whether the right or the left hand should be processed using the --hand_side flag.

Note that the video demo has some lag time, which comes from the visualization bottleneck (matplotlib image rendering is quite slow).
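
For reference, the capture side of such a webcam pipeline typically looks like the OpenCV loop sketched below; the left-hand mirroring and the process_frame call are illustrative assumptions, not the repository's actual implementation.

    # Rough sketch of a webcam capture loop (illustrative, not the repo's implementation).
    # process_frame is a hypothetical stand-in for the network forward pass + rendering.
    import cv2

    hand_side = "left"            # plays the role of the --hand_side flag
    cap = cv2.VideoCapture(0)     # 0 = default webcam; pass a file path for a recorded video

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if hand_side == "left":
            frame = cv2.flip(frame, 1)  # assumption: mirror left hands before prediction
        # prediction = process_frame(frame)   # hypothetical model call
        cv2.imshow("demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()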

Limitations

(Failure case illustration: mug)

Training

python traineval.py --atlas_predict_trans --atlas_predict_scale --atlas_mesh --mano_use_shape --mano_use_pca --freeze_batchnorm --atlas_separate_encoder
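
As a rough guide based on the flag names: the --atlas_* options configure the AtlasNet object branch (predicting a mesh together with its translation and scale, with an encoder separate from the hand branch), the --mano_* options configure the MANO hand branch (predicting shape parameters and using the PCA pose space), and --freeze_batchnorm keeps batch-normalization statistics frozen during training. Refer to the argument parser in traineval.py for the authoritative list of options.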

Citations

If you find this code useful for your research, consider citing:

@INPROCEEDINGS{hasson19_obman,
  title     = {Learning joint reconstruction of hands and manipulated objects},
  author    = {Hasson, Yana and Varol, G{\"u}l and Tzionas, Dimitris and Kalevatykh, Igor and Black, Michael J. and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {CVPR},
  year      = {2019}
}

Acknowledgements

AtlasNet code

Code related to AtlasNet is in large part adapted from the official AtlasNet repository. Thanks to Thibault for providing the code!

Hand evaluation code

Code for computing hand evaluation metrics was reused from hand3d, courtesy of Christian Zimmermann, who provided an easy-to-use interface!

Laplacian regularization loss

Code for the Laplacian regularization, along with precious advice, was provided by Angjoo Kanazawa!
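
For readers unfamiliar with the term, a Laplacian regularizer penalizes each mesh vertex for deviating from the centroid of its neighbors, which encourages smooth surfaces. A minimal uniform-weight PyTorch sketch of such a loss (illustrative only, not the exact code used in this repository) is:

    # Minimal uniform-Laplacian smoothness loss (illustrative, not the repo's exact code).
    import torch

    def laplacian_loss(verts, neighbor_idx, neighbor_mask):
        """verts: (V, 3) vertex positions.
        neighbor_idx: (V, K) indices of up to K neighbors per vertex (padded).
        neighbor_mask: (V, K) float mask, 1 for real neighbors, 0 for padding."""
        neighbors = verts[neighbor_idx]                                # (V, K, 3)
        counts = neighbor_mask.sum(dim=1, keepdim=True).clamp(min=1)   # (V, 1)
        centroids = (neighbors * neighbor_mask.unsqueeze(-1)).sum(dim=1) / counts
        return ((verts - centroids) ** 2).sum(dim=1).mean()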

First-Person Hand Action Benchmark dataset

Helpful advice on working with the dataset was provided by Guillermo Garcia-Hernando!