
This repo is the official implementation of the paper "FLEX: Full-Body Grasping Without Full-Body Grasps".

<p align="center">
  <h1 align="center">FLEX: Full-Body Grasping Without Full-Body Grasps</h1>
  <p align="center">
    <a href="https://purvaten.github.io/"><strong>Purva Tendulkar</strong></a> ·
    <a href="https://www.didacsuris.com/"><strong>Dídac Surís</strong></a> ·
    <a href="http://www.cs.columbia.edu/~vondrick/"><strong>Carl Vondrick</strong></a>
  </p>
  <a href="">
    <img src="./images/teaser.png" alt="Logo" width="100%">
  </a>
  <p align="center">
    <br>
    <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
    <a href='https://arxiv.org/abs/2211.11903'><img src='https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='Paper PDF'></a>
    <a href='https://flex.cs.columbia.edu/' style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'></a>
  </p>
</p>

FLEX is a generative model that synthesizes full-body avatars grasping 3D objects in a 3D environment. FLEX leverages the existence of pre-trained prior models for the following (a conceptual sketch of how these priors are composed follows the list):

  1. Full-Body Pose - VPoser (trained on the AMASS dataset)
  2. Right-Hand Grasping - GrabNet (trained on right-handed grasps of the GRAB dataset)
  3. Pose-Ground Relation - PGPrior (trained on the AMASS dataset)
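
To make this concrete, below is a deliberately simplified, hypothetical sketch of the core idea: freeze the pre-trained priors and optimize their latent codes so that the decoded body and hand agree with each other (and, in the full method, with the scene). The stub classes and the toy objective are illustrative placeholders only, not the actual VPoser/GrabNet interfaces or this repo's code.

```python
import torch
import torch.nn as nn

# HYPOTHETICAL sketch: compose frozen pre-trained priors by optimizing
# their latent codes, instead of training a full-body grasp model.
class StubPrior(nn.Module):
    """Placeholder for a frozen pre-trained prior (e.g. VPoser or GrabNet)."""
    def __init__(self, z_dim, out_dim):
        super().__init__()
        self.dec = nn.Linear(z_dim, out_dim)

    def decode(self, z):
        return self.dec(z)

body_prior = StubPrior(z_dim=32, out_dim=63).eval()   # VPoser uses a 32-D latent
hand_prior = StubPrior(z_dim=16, out_dim=24).eval()   # GrabNet uses a 16-D latent
for p in list(body_prior.parameters()) + list(hand_prior.parameters()):
    p.requires_grad_(False)                            # priors stay frozen

# Only the latent codes are optimized.
z_body = torch.zeros(1, 32, requires_grad=True)
z_hand = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([z_body, z_hand], lr=1e-2)

target_wrist = torch.randn(1, 3)  # placeholder for a desired grasp location
for step in range(200):
    opt.zero_grad()
    body = body_prior.decode(z_body)
    hand = hand_prior.decode(z_hand)
    # Toy objective standing in for FLEX's alignment / obstacle / ground losses:
    # pull both decoded outputs toward a common wrist location.
    loss = ((body[:, :3] - target_wrist) ** 2).mean() \
         + ((hand[:, :3] - target_wrist) ** 2).mean()
    loss.backward()
    opt.step()
```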

For more details, please refer to the [paper](https://arxiv.org/abs/2211.11903) or the [project website](https://flex.cs.columbia.edu/).


## Table of Contents

- [Description](#description)
- [Requirements](#requirements)
- [Installation](#installation)
- [Getting started](#getting-started)
- [Examples](#examples)
- [Citation](#citation)
- [Acknowledgments](#acknowledgments)
- [Contact](#contact)

## Description

This implementation provides the code, data, and pre-trained checkpoints needed to generate full-body grasps for the objects and receptacles of the ReplicaGrasp dataset.

## Requirements

This package has been tested with Python 3 and PyTorch.

## Installation

To install the dependencies, follow the steps below.
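
The exact setup commands live in the repository itself; as a rough sketch (the environment name, Python version, and `requirements.txt` below are assumptions, not guarantees):

  1. Clone this repository and `cd` into it.
  2. Create and activate a fresh environment, e.g. `conda create -n flex python=3.8` followed by `conda activate flex`.
  3. Install PyTorch for your CUDA setup, following https://pytorch.org/get-started/locally/.
  4. Install the remaining dependencies, e.g. `pip install -r requirements.txt` if the repository ships one.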

## Getting started

To run FLEX, create a `data/` directory and follow the steps below:

### ReplicaGrasp Dataset

Download the ReplicaGrasp dataset and place `dset_info.npz` and `receptacles.npz` under `data/replicagrasp/` (see the directory layout below).
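
As a quick sanity check after downloading, the two `.npz` files can be opened with NumPy (a minimal sketch; `allow_pickle=True` is only a precaution in case the arrays store Python objects):

```python
import numpy as np

# Sketch: verify the ReplicaGrasp files load, and list what they contain.
dset_info = np.load('data/replicagrasp/dset_info.npz', allow_pickle=True)
receptacles = np.load('data/replicagrasp/receptacles.npz', allow_pickle=True)

print(dset_info.files)     # names of the arrays stored in dset_info.npz
print(receptacles.files)   # names of the stored receptacle entries
```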

### Dependency Files

Download the following dependency files and arrange them under the `data/` directory as shown:

    FLEX
    ├── data
    │   │
    │   ├── smplx_models
    │   │       ├── mano
    │   │       │     ├── MANO_LEFT.pkl
    │   │       │     ├── MANO_RIGHT.pkl
    │   │       └── smplx
    │   │             ├── SMPLX_FEMALE.npz
    │   │             └── ...
    │   ├── obj
    │   │    ├── obj_info.npy
    │   │    ├── bps.npz
    │   │    └── contact_meshes
    │   │             ├── airplane.ply
    │   │             └── ...
    │   ├── sbj
    │   │    ├── adj_matrix_original.npy
    │   │    ├── adj_matrix_simplified.npy
    │   │    ├── faces_simplified.npy
    │   │    ├── interesting.npz
    │   │    ├── MANO_SMPLX_vertex_ids.npy
    │   │    ├── sbj_verts_region_mapping.npy
    │   │    └── vertices_simplified_correspondences.npy
    │   │
    │   └── replicagrasp
    │        ├── dset_info.npz
    │        └── receptacles.npz
    .
    .
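
Once the SMPL-X and MANO model files are in place, they can be sanity-checked with the `smplx` pip package (a sketch assuming `smplx` is installed; the repo's own loading code may differ):

```python
import smplx

# Sketch: load the SMPL-X body model from the layout above.
body_model = smplx.create(
    model_path='data/smplx_models',  # smplx.create finds smplx/SMPLX_FEMALE.npz
    model_type='smplx',
    gender='female',
    use_pca=False,                   # use full hand pose instead of PCA components
)
output = body_model()                # forward pass with default (zero) parameters
print(output.vertices.shape)         # (1, 10475, 3) for SMPL-X
```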

### Pre-trained Checkpoints

Download the pre-trained models and arrange them in a `ckpts/` directory as follows:

    ckpts
    ├── vposer_amass
    │   │
    │   ├── snapshots
    │   │       └── V02_05_epoch=13_val_loss=0.03
    │   ├── V02_05.log
    │   └── V02_05.yaml
    │
    ├── coarsenet.pt
    ├── refinenet.pt
    └── pgp.pth
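
To verify the downloads, the checkpoint files can be opened directly with PyTorch, and VPoser can be loaded through the `human_body_prior` package (a sketch; the structure of each checkpoint is specific to this repo):

```python
import torch

# Sketch: open the raw checkpoints to confirm they deserialize.
coarsenet = torch.load('ckpts/coarsenet.pt', map_location='cpu')
refinenet = torch.load('ckpts/refinenet.pt', map_location='cpu')
pgp = torch.load('ckpts/pgp.pth', map_location='cpu')

# VPoser (V02_05) is normally loaded via the human_body_prior package:
from human_body_prior.tools.model_loader import load_model
from human_body_prior.models.vposer_model import VPoser

vposer, _ = load_model('ckpts/vposer_amass', model_code=VPoser,
                       remove_words_in_model_weights='vp_model.',
                       disable_grad=True)
```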

## Examples

After installing the FLEX package and its dependencies, and downloading the data and pre-trained models, you should be able to run the example scripts provided in the repository.

## Citation

    @inproceedings{tendulkar2022flex,
        title = {FLEX: Full-Body Grasping Without Full-Body Grasps},
        author = {Tendulkar, Purva and Sur\'is, D\'idac and Vondrick, Carl},
        booktitle = {Conference on Computer Vision and Pattern Recognition ({CVPR})},
        year = {2023},
        url = {https://flex.cs.columbia.edu/}
    }

## Acknowledgments

This research is based on work partially supported by NSF NRI Award #2132519, and the DARPA MCS program under Federal Agreement No. N660011924032. Dídac Surís is supported by the Microsoft PhD fellowship. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.

We thank Alexander Clegg for help with Habitat-related questions, and Harsh Agrawal for helpful discussions and feedback.

This template was adapted from the GitHub repository of GOAL.

## Contact

The code of this repository was implemented by Purva Tendulkar and Dídac Surís.

For questions, please contact pt2578@columbia.edu.