DexYCB Toolkit

DexYCB Toolkit is a Python package that provides evaluation and visualization tools for the DexYCB dataset. The dataset and results were initially described in a CVPR 2021 paper:

DexYCB: A Benchmark for Capturing Hand Grasping of Objects
Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S. Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, Jan Kautz, Dieter Fox
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
[ paper ] [ supplementary ] [ video ] [ arXiv ] [ studio CAD model ] [ studio hardware ] [ RealSense calibration & recording guide ] [ project site ]

Citing DexYCB Toolkit

Please cite DexYCB Toolkit if it helps your research:

@INPROCEEDINGS{chao:cvpr2021,
  author    = {Yu-Wei Chao and Wei Yang and Yu Xiang and Pavlo Molchanov and Ankur Handa and Jonathan Tremblay and Yashraj S. Narang and Karl {Van Wyk} and Umar Iqbal and Stan Birchfield and Jan Kautz and Dieter Fox},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  title     = {{DexYCB}: A Benchmark for Capturing Hand Grasping of Objects},
  year      = {2021},
}

License

DexYCB Toolkit is released under the GNU General Public License v3.0.

Contents

  1. Prerequisites
  2. Installation
  3. Loading Dataset and Visualizing Samples
  4. Evaluation
    1. COCO Evaluation
    2. BOP Evaluation
    3. HPE Evaluation
    4. Grasp Evaluation
  5. Reproducing CVPR 2021 Results
  6. Visualizing Sequences
    1. Interactive 3D Viewer
    2. Offline Renderer

Prerequisites

This code is tested with Python 3.7 on Linux.

Installation

As good practice for Python package management, we recommend using a virtual environment (e.g., virtualenv or conda) so that packages from different projects do not interfere with each other.

  1. Clone the repo with --recursive and cd into it:

    git clone --recursive git@github.com:NVlabs/dex-ycb-toolkit.git
    cd dex-ycb-toolkit
    
  2. Install the dex-ycb-toolkit package and dependencies:

    # Install dex-ycb-toolkit
    pip install -e .
    
    # Install bop_toolkit dependencies
    cd bop_toolkit
    pip install -r requirements.txt
    cd ..
    
    # Install manopth
    cd manopth
    pip install -e .
    cd ..
    
  3. Download the DexYCB dataset from the project site.

  4. Set the environment variable for dataset path:

    export DEX_YCB_DIR=/path/to/dex-ycb
    

    $DEX_YCB_DIR should be a folder with the following structure:

    ├── 20200709-subject-01/
    ├── 20200813-subject-02/
    ├── ...
    ├── calibration/
    └── models/
    
  5. Download the MANO models and code (mano_v1_2.zip) from the MANO website and place the file under manopth. Unzip the file and create a symlink (a quick sanity-check sketch follows the installation steps):

    cd manopth
    unzip mano_v1_2.zip
    cd mano
    ln -s ../mano_v1_2/models models
    cd ../..
    
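After completing these steps, a quick sanity check can confirm that the dataset and MANO models are in place. This is a minimal sketch using only the paths described above; run it from the repository root:

    import os

    # DEX_YCB_DIR must point at the dataset root from step 4.
    dex_ycb_dir = os.environ['DEX_YCB_DIR']
    for d in ['calibration', 'models']:
        assert os.path.isdir(os.path.join(dex_ycb_dir, d)), f'missing {d}/ under $DEX_YCB_DIR'

    # The MANO models should be reachable through the symlink created in step 5.
    assert os.path.exists('manopth/mano/models'), 'MANO models symlink not found'
    print('Setup looks good.')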

Loading Dataset and Visualizing Samples

  1. The example below shows how to create a DexYCB dataset given a setup name (e.g., s0) and a split name (e.g., train). Once created, you can use the dataset to fetch image samples.

    python examples/create_dataset.py
    
    <details> <summary>You should see the following output (click to expand):</summary>
    Dataset name: s0_train
    Dataset size: 465504
    1000th sample:
    {
        "color_file": "/datasets/dex-ycb-20201205/20200709-subject-01/20200709_141841/932122060861/color_000053.jpg",
        "depth_file": "/datasets/dex-ycb-20201205/20200709-subject-01/20200709_141841/932122060861/aligned_depth_to_color_000053.png",
        "label_file": "/datasets/dex-ycb-20201205/20200709-subject-01/20200709_141841/932122060861/labels_000053.npz",
        "intrinsics": {
            "fx": 613.0762329101562,
            "fy": 611.9989624023438,
            "ppx": 313.0279846191406,
            "ppy": 245.00865173339844
        },
        "ycb_ids": [
            1,
            11,
            12,
            20
        ],
        "ycb_grasp_ind": 0,
        "mano_side": "right",
        "mano_betas": [
            0.6993994116783142,
            -0.16909725964069366,
            -0.8955091834068298,
            -0.09764610230922699,
            0.07754238694906235,
            0.336286723613739,
            -0.05547792464494705,
            0.5248727798461914,
            -0.38668063282966614,
            -0.00133091164752841
        ]
    }
    .
    .
    .
    
    </details>

    Each sample includes the paths to the color image, depth image, and label file, the camera intrinsics, the IDs of the YCB objects present in the scene, the index of the grasped object, whether the hand is right or left, and the hand's MANO shape parameters.

    Each label file contains the following annotations packed in a dictionary (a short loading sketch follows this list):

    • seg: A uint8 numpy array of shape [H, W] containing the segmentation map. The label of each pixel can be 0 (background), 1-21 (YCB object), or 255 (hand).
    • pose_y: A float32 numpy array of shape [num_obj, 3, 4] holding the 6D pose of each object. Each 6D pose is represented by [R; t], where R is the 3x3 rotation matrix and t is the 3x1 translation.
    • pose_m: A float32 numpy array of shape [1, 51] holding the pose of the hand. pose_m[:, 0:48] stores the MANO pose coefficients in PCA representation, and pose_m[0, 48:51] stores the translation. If the image does not have a visible hand or the annotation does not exist, pose_m will be all 0.
    • joint_3d: A float32 numpy array of shape [1, 21, 3] holding the 3D joint position of the hand in the camera coordinates. The joint order is specified here. If the image does not have a visible hand or the annotation does not exist, joint_3d will be all -1.
    • joint_2d: A float32 numpy array of shape [1, 21, 2] holding the 2D joint position of the hand in the image space. The joint order follows joint_3d. If the image does not have a visible hand or the annotation does not exist, joint_2d will be all -1.
  2. The example below shows how to visualize ground-truth object and hand pose of one image sample.

    python examples/visualize_pose.py
    
    <img src="./docs/visualize_pose_1.jpg" height=300 width=400> <img src="./docs/visualize_pose_2.jpg" height=300 width=400>

Evaluation

DexYCB provides a benchmark to evaluate four tasks: (1) 2D object and keypoint detection (COCO), (2) 6D object pose estimation (BOP), (3) 3D hand pose estimation (HPE), and (4) safe human-to-robot object handover (Grasp).

Below we provide instructions and examples to run these evaluations. To run the examples, you need to first download the example results.

./results/fetch_example_results.sh

COCO Evaluation

BOP Evaluation

HPE Evaluation
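
For intuition about what this benchmark measures, below is a self-contained sketch (not the toolkit's evaluator) of the mean per-joint position error, one of the standard HPE metrics: the average 3D distance between predicted and ground-truth hand joints, commonly reported in millimeters:

    import numpy as np

    def mean_per_joint_error(pred_joints, gt_joints):
        """Mean Euclidean distance over the 21 hand joints (same unit as the inputs)."""
        # pred_joints, gt_joints: float arrays of shape (21, 3).
        return np.linalg.norm(pred_joints - gt_joints, axis=1).mean()

    # Example: compare a (hypothetical) prediction against the ground truth of one sample.
    # gt = np.load(label_file)['joint_3d'][0]
    # print(mean_per_joint_error(pred, gt) * 1000, 'mm')  # scale to mm if joints are in meters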

Grasp Evaluation

Reproducing CVPR 2021 Results

We provide the result files of the benchmarks reported in the CVPR 2021 paper. Below we show how you can run evaluation on these files and reproduce the exact numbers in the paper.

To run the evaluation, you need to first download the CVPR 2021 results.

./results/fetch_cvpr2021_results.sh

The full set of evaluation scripts can be found in examples/all_cvpr2021_results_eval_scripts.sh. Below we show some examples.

Finally, you can reproduce the grasp precision-coverage curves for object handover on s1 with:

python examples/plot_grasp_curve.py
<img src="./docs/plot_grasp_curve_1.png" height=300>

This will save the precision-coverage curves on s1 to results/grasp_precision_coverage_s1_test.pdf.

The precision-coverage curves on setup s0, s2, and s3 can be generated with:

python examples/plot_grasp_curve.py --name s0_test
python examples/plot_grasp_curve.py --name s2_test
python examples/plot_grasp_curve.py --name s3_test

Visualizing Sequences

Besides visualizing the ground truth of a single image sample, we also provide tools to visualize the captured hand and object motion over a full sequence. The tools include (1) an interactive 3D viewer and (2) an offline renderer.

Interactive 3D Viewer

Offline Renderer