GrabNet

Generating realistic hand meshes grasping unseen 3D objects (ECCV 2020)


GRAB teaser image. [Paper Page] [Paper]

GrabNet is a generative model for 3D hand grasps. Given a 3D object mesh, GrabNet can predict several hand grasps for it. GrabNet consists of two successive models, CoarseNet (a cVAE) and RefineNet. It is trained on a subset (right hand and object only) of the GRAB dataset. For more details, please refer to the Paper or the project website.

Below you can see some generated results from GrabNet:

Generated grasps: Binoculars | Mug | Camera | Toothpaste

Check out the YouTube videos below for more details.

Long Video | Short Video

Table of Contents

- Description
- Requirements
- Installation
- Getting started
- Examples
- Citation
- License
- Acknowledgments
- Contact

Description

This implementation provides code to generate grasps for given 3D object meshes with the pre-trained models, to test GrabNet on the GRAB test objects, and to retrain the model.

Requirements

This package has the following requirements:

Installation

To install the dependencies, please follow the steps below:
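
As a rough sketch only (the repository URL and the presence of a requirements.txt are assumptions, not stated in this README), a typical setup looks like:

    # Clone the GrabNet repository (URL assumed; use the official one from the project website)
    git clone https://github.com/otaheri/GrabNet.git
    cd GrabNet

    # Install the Python dependencies (assumes a requirements.txt is shipped with the repo)
    pip install -r requirements.txt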

Getting started

For a quick demo of GrabNet, you can try it on Google Colab here.

In order to use the GrabNet model, please follow the steps below:

CoarseNet and RefineNet models

    GrabNet
    ├── grabnet
    │    │
    │    ├── models
    │    │     ├── coarsenet.pt
    │    │     └── refinenet.pt

MANO models

GrabNet data (only required for retraining the model or testing on the test objects)

    GRAB
    ├── data
    │    │
    │    ├── bps.npz
    │    ├── obj_info.npy
    │    ├── sbj_info.npy
    │    │
    │    └── [split_name] from (test, train, val)
    │          │
    │          ├── frame_names.npz
    │          ├── grabnet_[split_name].npz
    │          └── data
    │                ├── s1
    │                ├── ...
    │                └── s10
    └── tools
         │
         ├── object_meshes
         └── subject_meshes

Examples

After installing the GrabNet package and its dependencies, and downloading the data and the MANO models from the MANO website, you should be able to run the following examples:
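
The script names, paths, and flags below are assumptions rather than commands taken from this README; they sketch the usual entry points (generating grasps for a new object, testing on the GRAB test objects, and retraining):

    # Generate grasps for an arbitrary object mesh (script name and flags assumed)
    python grabnet/tests/grab_new_objects.py \
           --obj-path $PATH_TO_OBJECT_MESH \
           --rhm-path $PATH_TO_MANO_RIGHT_HAND_MODEL

    # Evaluate the pre-trained models on the GRAB test objects (requires the GrabNet data)
    python grabnet/tests/test.py \
           --rhm-path $PATH_TO_MANO_RIGHT_HAND_MODEL \
           --data-path $PATH_TO_GRABNET_DATA

    # Retrain GrabNet (requires the GrabNet data)
    python grabnet/train.py \
           --work-dir $PATH_TO_SAVE_RESULTS \
           --rhm-path $PATH_TO_MANO_RIGHT_HAND_MODEL \
           --data-path $PATH_TO_GRABNET_DATA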

Citation

@inproceedings{GRAB:2020,
  title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020},
  url = {https://grab.is.tue.mpg.de}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions in the LICENSE file and any accompanying documentation before you download and/or use the GRAB data, model and software, (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Acknowledgments

Special thanks to Mason Landry for his invaluable help with this project.

We thank:

Contact

The code of this repository was implemented by Omid Taheri and Nima Ghorbani.

For questions, please contact grab@tue.mpg.de.

For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.