Dexterous Imitation Made Easy

Authors: Sridhar Pandian Arunachalam*, Sneha Silwal*, Ben Evans, Lerrel Pinto

This is the official implementation of the paper Dexterous Imitation Made Easy.

Policies on Real Robot

<p align="center"> <img width="30%" src="https://github.com/NYU-robot-learning/dime/blob/gh-pages/figs/block-8x-optimized.gif"> <img width="30%" src="https://github.com/NYU-robot-learning/dime/blob/gh-pages/figs/fidget-8x-optimzed.gif"> <img width="30%" src="https://github.com/NYU-robot-learning/dime/blob/gh-pages/figs/flip-2x-optimized.gif"> </p>

Method

DIME consists of two phases: demonstration collection, which is performed in real time with visual feedback, and demonstration-based policy learning, which can solve dexterous tasks from a limited number of demonstrations.

Setup

The code base is split into five separate packages for convenience, and this is one of the five repositories. You can clone and set up each package by following the instructions in its respective repository.

You need to set up the controller packages and the IK-TeleOp package before using this one. To install this package's dependencies with pip:

pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

Then install this package with:

pip3 install -e .
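After both installs finish, a quick sanity check can confirm that the pinned CUDA build of PyTorch was picked up (this assumes the pip commands above completed without errors; the CUDA check will print `False` on a machine without a compatible GPU and driver):

```shell
# Print the installed PyTorch version and whether CUDA is visible.
# The version is expected to start with 1.10.0+cu113.
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```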

Data

All our data can be found at this URL: https://drive.google.com/drive/folders/1nunGHB2EK9xvlmepNNziDDbt-pH8OAhi
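If you prefer to fetch the data from the command line instead of the browser, one option is the third-party `gdown` tool, which can mirror a shared Google Drive folder (this tool is not part of this repo and is shown only as an assumed convenience):

```shell
# Install gdown, then download the shared Drive folder into the current directory.
pip3 install gdown
gdown --folder https://drive.google.com/drive/folders/1nunGHB2EK9xvlmepNNziDDbt-pH8OAhi
```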

Citation

If you use this repo in your research, please consider citing the paper as follows:

@article{arunachalam2022dime,
  title={Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation},
  author={Sridhar Pandian Arunachalam and Sneha Silwal and Ben Evans and Lerrel Pinto},
  journal={arXiv preprint arXiv:2203.13251},
  year={2022}
}