DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects

[Project Page] [arXiv] [Paper]

DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects,

Chen Bao*, Helin Xu*, Yuzhe Qin, Xiaolong Wang, CVPR 2023.

DexArt is a novel benchmark and pipeline for learning multiple dexterous manipulation tasks. This repo contains the simulated environment and training code for DexArt.

DexArt Teaser

News

[2023.11.21] All the RL checkpoints are available now! 🎈 They are included in the assets. See Main Results to reproduce the results in the paper!

[2023.4.18] Code and vision pre-trained models are available now!

[2023.3.24] DexArt is accepted by CVPR 2023! 🎉

Installation

  1. Clone the repo and create a conda env with all the Python dependencies.
git clone git@github.com:Kami-code/dexart-release.git
cd dexart-release
conda create --name dexart python=3.8
conda activate dexart
pip install -e .    # for simulation environment
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 -c pytorch    # for visualizing trained policy and training 
  2. Download the assets from Google Drive and place the assets directory at the project root.
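
You can quickly sanity-check the installation and asset placement with a few lines of Python (a sketch of ours, not a script shipped with the repo):

```python
# Post-install sanity check: the package should import and the downloaded
# assets directory should sit at the project root.
import os
import dexart  # installed above via `pip install -e .`

assert os.path.isdir("assets"), "download the assets from Google Drive first"
print("dexart imported from:", os.path.dirname(dexart.__file__))
print("asset folders:", sorted(os.listdir("assets")))
```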

File Structure

The file structure is listed as follows:

dexart/env/: environments

assets/: task annotations, object and robot URDFs, and RL checkpoints

examples/: example code to try DexArt

stable_baselines3/: RL training code modified from stable_baselines3

Quick Start

Example of Random Action

python examples/random_action.py --task_name=laptop

task_name: name of the environment [faucet, laptop, bucket, toilet]
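
For reference, the script boils down to a standard gym-style control loop. The sketch below is illustrative only: the create_env helper and its arguments are assumptions about this repo's API, so consult examples/random_action.py for the exact calls.

```python
# Illustrative sketch of a random-action rollout; create_env and its
# signature are assumptions -- see examples/random_action.py for the real API.
from dexart.env.create_env import create_env  # assumed helper location

env = create_env(task_name="laptop", use_visual_obs=True)  # hypothetical args
obs = env.reset()
for _ in range(200):
    action = env.action_space.sample()          # uniformly random action
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```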

Example for Visualizing Point Cloud Observation

python examples/visualize_observation.py --task_name=laptop

task_name: name of the environment [faucet, laptop, bucket, toilet]

Example for Visualizing Policy

python examples/visualize_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop/laptop_nopretrain_0.zip

task_name: name of the environment [faucet, laptop, bucket, toilet]

use_test_set: flag to evaluate on unseen (test) instances instead of seen (train) instances
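
The checkpoints are stable_baselines3 model archives. If you want to load one outside the example script, a minimal sketch (assuming the policies were trained with the PPO implementation bundled in this repo's stable_baselines3/, which must be importable for unpickling):

```python
# Sketch: load a trained policy for manual rollouts. Assumes the checkpoint
# was produced by the PPO implementation in this repo's stable_baselines3/.
from stable_baselines3 import PPO

model = PPO.load("assets/rl_checkpoints/laptop/laptop_nopretrain_0.zip")
# `obs` must come from a matching DexArt environment (see the examples above):
# action, _states = model.predict(obs, deterministic=True)
```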

Example for Training RL Agent

python3 examples/train.py --n 100 --workers 10 --iter 5000 --lr 0.0001 \
    --seed 100 --bs 500 --task_name laptop --extractor_name smallpn \
    --pretrain_path ./assets/vision_pretrain/laptop_smallpn_fulldata.pth

n: the number of rollouts to be collected in a single episode

workers: the number of parallel simulation processes

iter: the total number of training episodes

lr: learning rate of RL

seed: seed of RL

bs: batch size of RL update

task_name: name of training environment [faucet, laptop, bucket, toilet]

extractor_name: different PointNet architectures [smallpn, mediumpn, largepn]

pretrain_path: path to downloaded pre-trained model. [Default: None]

save_freq: save the model every save_freq episodes. [Default: 1]

save_path: path to save the model. [Default: ./examples]
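
The paper's tables average over three seeds; a small launcher (our sketch, using only the flags documented above) sweeps them:

```python
# Sketch: one training run per seed, mirroring the flags documented above.
import subprocess

for seed in (0, 1, 2):  # the paper reports seeds 0-2
    subprocess.run([
        "python3", "examples/train.py",
        "--n", "100", "--workers", "10", "--iter", "5000", "--lr", "0.0001",
        "--seed", str(seed), "--bs", "500",
        "--task_name", "laptop", "--extractor_name", "smallpn",
        "--pretrain_path", "./assets/vision_pretrain/laptop_smallpn_fulldata.pth",
    ], check=True)
```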

Main Results

python examples/evaluate_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop/laptop_nopretrain_0.zip --eval_per_instance 100
python examples/evaluate_policy.py --task_name=laptop --use_test_set --checkpoint_path assets/rl_checkpoints/laptop/laptop_nopretrain_0.zip --eval_per_instance 100

task_name: name of the environment [faucet, laptop, bucket, toilet]

use_test_set: flag to evaluate on unseen (test) instances instead of seen (train) instances
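
To cover every task on both splits in one go, you can loop over the commands above. This sketch assumes the per-task checkpoints follow the same naming pattern as the laptop one:

```python
# Sketch: evaluate one no-pretrain checkpoint per task on seen (train) and
# unseen (test) instances. The checkpoint naming per task is an assumption.
import subprocess

for task in ("faucet", "laptop", "bucket", "toilet"):
    ckpt = f"assets/rl_checkpoints/{task}/{task}_nopretrain_0.zip"  # assumed pattern
    for extra in ([], ["--use_test_set"]):
        subprocess.run([
            "python", "examples/evaluate_policy.py",
            "--task_name", task,
            "--checkpoint_path", ckpt,
            "--eval_per_instance", "100",
            *extra,
        ], check=True)
```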

Faucet

| Method | Split | Seed 0 | Seed 1 | Seed 2 | Avg | Std |
| --- | --- | --- | --- | --- | --- | --- |
| No Pre-train | train/test | 0.52/0.34 | 0.00/0.00 | 0.44/0.43 | 0.32/0.26 | 0.23/0.18 |
| Segmentation on PMM | train/test | 0.42/0.38 | 0.25/0.15 | 0.14/0.11 | 0.27/0.21 | 0.11/0.12 |
| Classification on PMM | train/test | 0.40/0.33 | 0.19/0.14 | 0.07/0.09 | 0.22/0.18 | 0.14/0.10 |
| Reconstruction on DAM | train/test | 0.27/0.17 | 0.37/0.30 | 0.36/0.21 | 0.33/0.22 | 0.05/0.05 |
| SimSiam on DAM | train/test | 0.80/0.60 | 0.40/0.24 | 0.72/0.53 | 0.64/0.46 | 0.17/0.16 |
| Segmentation on DAM | train/test | 0.80/0.56 | 0.76/0.53 | 0.82/0.66 | 0.79/0.59 | 0.02/0.05 |

Laptop

| Method | Split | Seed 0 | Seed 1 | Seed 2 | Avg | Std |
| --- | --- | --- | --- | --- | --- | --- |
| No Pre-train | train/test | 0.78/0.41 | 0.78/0.31 | 0.81/0.50 | 0.79/0.41 | 0.02/0.08 |
| Segmentation on PMM | train/test | 0.91/0.62 | 0.90/0.53 | 0.77/0.48 | 0.86/0.54 | 0.06/0.08 |
| Classification on PMM | train/test | 0.96/0.51 | 0.58/0.35 | 0.96/0.62 | 0.83/0.49 | 0.18/0.11 |
| Reconstruction on DAM | train/test | 0.85/0.56 | 0.91/0.63 | 0.80/0.43 | 0.85/0.54 | 0.05/0.08 |
| SimSiam on DAM | train/test | 0.84/0.59 | 0.83/0.34 | 0.89/0.51 | 0.86/0.48 | 0.03/0.10 |
| Segmentation on DAM | train/test | 0.89/0.57 | 0.94/0.67 | 0.89/0.58 | 0.91/0.60 | 0.02/0.04 |

Bucket

| Method | Split | Seed 0 | Seed 1 | Seed 2 | Avg | Std |
| --- | --- | --- | --- | --- | --- | --- |
| No Pre-train | train/test | 0.36/0.55 | 0.58/0.69 | 0.52/0.49 | 0.49/0.57 | 0.09/0.08 |
| Segmentation on PMM | train/test | 0.62/0.62 | 0.00/0.00 | 0.40/0.41 | 0.34/0.34 | 0.26/0.26 |
| Classification on PMM | train/test | 0.55/0.47 | 0.50/0.51 | 0.67/0.73 | 0.57/0.57 | 0.07/0.11 |
| Reconstruction on DAM | train/test | 0.49/0.49 | 0.58/0.46 | 0.40/0.59 | 0.49/0.51 | 0.07/0.05 |
| SimSiam on DAM | train/test | 0.00/0.00 | 0.53/0.38 | 0.73/0.78 | 0.42/0.39 | 0.30/0.32 |
| Segmentation on DAM | train/test | 0.70/0.68 | 0.70/0.74 | 0.79/0.85 | 0.73/0.75 | 0.04/0.07 |

Toilet

| Method | Split | Seed 0 | Seed 1 | Seed 2 | Avg | Std |
| --- | --- | --- | --- | --- | --- | --- |
| No Pre-train | train/test | 0.80/0.47 | 0.75/0.51 | 0.63/0.43 | 0.72/0.47 | 0.07/0.03 |
| Segmentation on PMM | train/test | 0.78/0.42 | 0.62/0.46 | 0.64/0.47 | 0.68/0.45 | 0.07/0.02 |
| Classification on PMM | train/test | 0.78/0.33 | 0.65/0.43 | 0.66/0.44 | 0.69/0.40 | 0.06/0.05 |
| Reconstruction on DAM | train/test | 0.78/0.58 | 0.73/0.48 | 0.75/0.49 | 0.75/0.52 | 0.02/0.05 |
| SimSiam on DAM | train/test | 0.84/0.54 | 0.81/0.49 | 0.84/0.45 | 0.83/0.50 | 0.01/0.04 |
| Segmentation on DAM | train/test | 0.86/0.54 | 0.84/0.53 | 0.86/0.56 | 0.85/0.54 | 0.01/0.01 |
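
In each table, Avg and Std are taken over the three seeds (reported as train/test success rates). The Std values are consistent with the population standard deviation (ddof=0); for example, the Faucet No Pre-train train-split row:

```python
# Reproduce the Avg/Std columns from per-seed success rates, using the
# Faucet "No Pre-train" train-split row as a worked example.
import numpy as np

per_seed = np.array([0.52, 0.00, 0.44])
print(f"Avg = {per_seed.mean():.2f}")  # 0.32, as in the table
print(f"Std = {per_seed.std():.2f}")   # 0.23 (population std, ddof=0)
```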

Visual Pretraining

We have uploaded the code to generate the dataset and pre-train our models in examples/pretrain. Refer to examples/pretrain/run.sh for detailed usage.

Bibtex

@inproceedings{bao2023dexart,
  title={DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects},
  author={Bao, Chen and Xu, Helin and Qin, Yuzhe and Wang, Xiaolong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21190--21200},
  year={2023}
}

Acknowledgements

This repository uses the same code structure for the simulation environment and training code as DexPoint.