
GRAB: A Dataset of Whole-Body Human Grasping of Objects (ECCV 2020)


[Paper Page] [ArXiv Paper]

GRAB is a dataset of full-body human motions interacting with and grasping 3D objects. It contains accurate finger and facial motions, as well as the contact between the objects and the body. The dataset includes 5 male and 5 female participants and 4 different motion intents.

Example sequences: Eat - Banana, Talk - Phone, Drink - Mug, See - Binoculars

The GRAB dataset also contains binary contact maps between the body and the objects. With our interacting meshes, one can integrate these contact maps over time to create "contact heatmaps", or even compute fine-grained contact annotations, as shown below:
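Integrating binary contact maps over time can be sketched as follows. This is a minimal illustration, not code from the repository; it assumes the per-frame contact maps for a sequence are available as a binary array of shape (num_frames, num_vertices):

```python
import numpy as np

def contact_heatmap(contact_frames):
    """Aggregate per-frame binary contact maps (num_frames x num_vertices)
    into a per-vertex contact heatmap in [0, 1].

    Each heatmap value is the fraction of frames in which that vertex
    was in contact with the object.
    """
    contact_frames = np.asarray(contact_frames, dtype=np.float64)
    return contact_frames.sum(axis=0) / len(contact_frames)

# Toy example: 4 frames, 3 vertices.
frames = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 0]])
heatmap = contact_heatmap(frames)  # → [1.0, 0.5, 0.0]
```

Vertices that stay in contact across the whole sequence get a value of 1, giving the time-integrated "heatmap" effect described above.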

Contact Heatmaps | Contact Annotation

Check out the YouTube video below for more details.

Long Video | Short Video

Table of Contents

Description

This repository contains:

Requirements

This package has the following requirements:

Installation

To install the repo, please follow these steps:

Getting started

To use the GRAB dataset, please carefully follow the steps below, in this exact order:

    GRAB
    ├── grab
    │   ├── s1
    │   ├── ...
    │   └── s10
    │
    ├── tools
    │   ├── object_meshes
    │   ├── object_settings
    │   ├── subject_meshes
    │   ├── subject_settings
    │   └── smplx_correspondence
    │
    └── mocap (optional)

Contents of each sequence

Each sequence name has the form object_action_*.npz, i.e. it indicates the object used and the action performed. The parent folder of each sequence gives the subject ID for that sequence. For example, "grab/s4/mug_pass_1.npz" shows that subject "s4" passes the mug. The data in each sequence is structured as a dictionary with several keys that hold the corresponding information. Below we explain each of them separately.
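A minimal sketch of loading a sequence and recovering its metadata from the path. This assumes each .npz file stores the dictionary described above (loading may require `allow_pickle=True`); the exact path is hypothetical and should point at a downloaded sequence:

```python
import os
import numpy as np

def load_sequence(seq_path):
    """Load a GRAB sequence and parse its metadata from the path.

    Returns the sequence data as a dict, plus the subject ID,
    object name, and action parsed from the file path.
    """
    # NpzFile behaves like a mapping; materialize it as a plain dict.
    data = dict(np.load(seq_path, allow_pickle=True))
    # The parent folder encodes the subject, e.g. 's4'.
    subject_id = os.path.basename(os.path.dirname(seq_path))
    # The filename encodes object and action, e.g. 'mug', 'pass'.
    obj, action = os.path.basename(seq_path).split('_')[:2]
    return data, subject_id, obj, action

# Usage (hypothetical local path):
# data, sid, obj, act = load_sequence('GRAB/grab/s4/mug_pass_1.npz')
# print(list(data.keys()))  # inspect the available keys
```

Listing `data.keys()` on a real sequence is a quick way to see which of the fields described below are present.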

Examples

Citation

@inproceedings{GRAB:2020,
  title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020},
  url = {https://grab.is.tue.mpg.de}
}

We kindly ask you to cite Brahmbhatt et al. (ContactDB website), whose object meshes are used in the GRAB dataset, as also described in our license.

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the LICENSE file for the terms and conditions and any accompanying documentation before you download and/or use the GRAB data, model and software, (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read and agreed to the LICENSE terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this LICENSE.

Acknowledgments

Special thanks to Mason Landry for his invaluable help with this project.

We thank:

Contact

The code of this repository was implemented by Omid Taheri.

For questions, please contact grab@tue.mpg.de.

For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.
