# ARCTIC 🥶: A Dataset for Dexterous Bimanual Hand-Object Manipulation
👉I plan to enter the job market in Summer/Fall 2025. If you have an opening, feel free to email!👈
<p align="center"> <img src="docs/static/arctic-logo.svg" alt="Image" width="600" height="100" /> </p>

[ Project Page ] [ Paper ] [ Video ] [ Register ARCTIC Account ] [ ECCV'24 Competition ] [ Leaderboard ]
<p align="center"> <img src="docs/static/teaser.jpeg" alt="Image" width="100%"/> </p>

This is a repository for preprocessing, splitting, visualizing, and rendering (RGB, depth, segmentation masks) the ARCTIC dataset. It also provides code to reproduce the baseline models from our CVPR 2023 paper (Vancouver, British Columbia 🇨🇦) and to develop custom models.
Our dataset contains highly dexterous motion:
<p align="center"> <img src="./docs/static/dexterous.gif" alt="Image" width="100%"/> </p>

## News
✨CVPR 2024 Highlight: HOLD is the first method that jointly reconstructs articulated hands and objects from monocular videos without assuming a pre-scanned object template or 3D hand-object training data. See our project page for details.
<p align="center"> <img src="./docs/static/hold/mug_ours.gif" alt="HOLD Reconstruction Example" width="300"/> <!-- Adjust width as needed --> </p> <p align="center"> <img src="./docs/static/hold/mug_ref.png" alt="Reference for HOLD Reconstruction" width="300"/> <!-- Adjust width as needed --> </p>
- 2024.07.07: We are hosting the HANDS workshop at ECCV'24, with a challenge on reconstructing hands and objects in ARCTIC without object templates. Join us here!
- 2023.12.20: MoCap data can now be downloaded! See the download instructions and visualization.
- 2023.09.11: ARCTIC leaderboard online!
- 2023.06.16: ICCV ARCTIC challenge starts!
- 2023.05.04: The ARCTIC dataset, with code for dataloaders, visualizers, and models, is officially announced (version 1.0)!
- 2023.03.25: ARCTIC ☃️ dataset (version 0.1) is available! 🎉
Invited talks/posters at CVPR 2023:
- 4D-HOI workshop: Keynote
- Ego4D + EPIC workshop: Oral presentation
- Rhobin workshop: Poster
- 3D scene understanding: Oral presentation
## Why use ARCTIC?
Summary of the dataset (a minimal loading sketch follows this list):
- It contains 2.1M high-resolution images paired with annotated frames, enabling large-scale machine learning.
- Images are from 8x 3rd-person views and 1x egocentric view (for the mixed-reality setting).
- It includes 3D ground truth for SMPL-X, MANO, and articulated objects.
- It is captured in a MoCap setup using 54 high-end Vicon cameras.
- It features highly dexterous bimanual manipulation motion (beyond quasi-static grasping).
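
To make the annotation structure concrete, here is a minimal, hypothetical sketch of iterating over per-frame ground truth for one sequence. The file path and dictionary keys (`params`, `pose_r`, `obj_arti`) are illustrative assumptions, not the repository's actual format; see `docs/data/README.md` for the real layout.

```python
# Hypothetical sketch of consuming per-frame ARCTIC ground truth.
# NOTE: the file path and all dictionary keys below are assumed placeholders;
# docs/data/README.md documents the actual annotation format.
import numpy as np

seq = np.load("data/processed/s01_box_grab_01.npy", allow_pickle=True).item()
params = seq["params"]  # assumed: per-frame MANO/SMPL-X/object parameters

num_frames = params["pose_r"].shape[0]
for t in range(num_frames):
    mano_pose_right = params["pose_r"][t]     # right-hand MANO pose at frame t
    obj_articulation = params["obj_arti"][t]  # articulation angle of the object
    # ... feed the frame-level ground truth to a model, renderer, or viewer
```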
Potential tasks with ARCTIC:
- Template-free bimanual hand-object reconstruction
- Generating hand grasp or motion with articulated objects
- Generating full-body grasp or motion with articulated objects
- Benchmarking performance of articulated object pose estimators from depth images with humans in the scene
- Studying our NEW tasks of consistent motion reconstruction and interaction field estimation
- Studying egocentric hand-object reconstruction
- Reconstructing full-body with hands and articulated objects from RGB images
Check out our project page for more details.
## Third-party ARCTIC resources
- URDFs for ARCTIC objects
- Text description for ARCTIC motions
- Stable grasp labels on ARCTIC motions
## Projects that use ARCTIC
Reconstruction:
- Get a Grip: Reconstructing Hand-Object Stable Grasps in Egocentric Videos
- 3D Hand Pose Estimation in Egocentric Images in the Wild
- Mitigating Perspective Distortion-induced Shape Ambiguity in Image Crops
- SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
- Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
Generation:
- ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation
- GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion
- Text2HOI: Text-guided 3D Motion Generation for Hand-Object Interaction
- QuasiSim: Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer
- InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion
Create a pull request for missing projects.
## Features

<p align="center"> <img src="./docs/static/viewer_demo.gif" alt="Image" width="80%"/> </p>

- Instructions to download the ARCTIC dataset.
- Scripts to process our dataset and to build data splits.
- Rendering scripts to render our 3D data into RGB, depth, and segmentation masks.
- A viewer to interact with our dataset.
- Instructions to set up the data, code, and environment to train our baselines.
- A generalized codebase to train, visualize, and evaluate the results of ArcticNet and InterField for the ARCTIC benchmark.
- A viewer to interact with the predictions (see the aitviewer sketch after this list).
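
The interactive viewers in this repo are built on aitviewer. As a rough, self-contained illustration (not the repo's actual viewer entry point), the snippet below shows how an animated mesh can be displayed with aitviewer; the toy triangle stands in for hand/object meshes.

```python
# Rough illustration of displaying an animated mesh with aitviewer
# (the library our viewer builds on). The toy triangle below is a stand-in
# for MANO/object meshes; the repo's own viewers are documented in the docs.
import numpy as np
from aitviewer.renderables.meshes import Meshes
from aitviewer.viewer import Viewer

faces = np.array([[0, 1, 2]])  # one triangle
verts = np.zeros((100, 3, 3))  # 100 frames, 3 vertices, xyz
verts[:, 1, 0] = 1.0
verts[:, 2, 1] = 1.0
verts[:, 2, 2] = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 100))  # wobble over time

viewer = Viewer()
viewer.scene.add(Meshes(verts, faces, name="toy-sequence"))
viewer.run()
```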
## Getting started

Get a copy of the code:

```bash
git clone https://github.com/zc-alexfan/arctic.git
```
- Setup environment: see `docs/setup.md`
- Download and visualize the ARCTIC dataset: see `docs/data/README.md`
- Training and evaluating our ARCTIC baselines: see `docs/model/README.md`
- Evaluation on the test set: see `docs/leaderboard.md`
- FAQ: see `docs/faq.md`
## License
See LICENSE.
## Citation
```bibtex
@inproceedings{fan2023arctic,
  title = {{ARCTIC}: A Dataset for Dexterous Bimanual Hand-Object Manipulation},
  author = {Fan, Zicong and Taheri, Omid and Tzionas, Dimitrios and Kocabas, Muhammed and Kaufmann, Manuel and Black, Michael J. and Hilliges, Otmar},
  booktitle = {Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2023}
}
```
Our paper benefits a lot from aitviewer. If you find our viewer useful, please consider citing aitviewer to appreciate their hard work:
```bibtex
@software{kaufmann_vechev_aitviewer_2022,
  author = {Kaufmann, Manuel and Vechev, Velko and Mylonopoulos, Dario},
  doi = {10.5281/zenodo.1234},
  month = {7},
  title = {{aitviewer}},
  url = {https://github.com/eth-ait/aitviewer},
  year = {2022}
}
```
## Acknowledgments
Constructing the ARCTIC dataset is a huge effort. The authors deeply thank: Tsvetelina Alexiadis (TA) for trial coordination; Markus Höschle (MH), Senya Polikovsky, Matvey Safroshkin, Tobias Bauch (TB) for the capture setup; MH, TA and Galina Henz for data capture; Priyanka Patel for alignment; Giorgio Becherini and Nima Ghorbani for MoSh++; Leyre Sánchez Vinuela, Andres Camilo Mendoza Patino, Mustafa Alperen Ekinci for data cleaning; TB for Vicon support; MH and Jakob Reinhardt for object scanning; Taylor McConnell for Vicon support, and data cleaning coordination; Benjamin Pellkofer for IT/web support; Neelay Shah, Jean-Claude Passy, Valkyrie Felso for evaluation server. We also thank Adrian Spurr and Xu Chen for insightful discussion. OT and DT were supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B.
## Contact
For technical questions, please create an issue. For other questions, please contact arctic@tue.mpg.de.

For commercial licensing, please contact ps-licensing@tue.mpg.de.