# GA-DDPG

## Installation

```Shell
git clone https://github.com/liruiw/GA-DDPG.git --recursive
```

- Setup: Ubuntu 16.04 or above, CUDA 10.0 or above, python 2.7 / 3.6
- (Required for Training) Install the OMG submodule and reuse its conda environment.
- (Docker) See OMG Docker for details.
- (Demo) Install GA-DDPG inside a new conda environment:
  ```Shell
  conda create --name gaddpg python=3.6.9
  conda activate gaddpg
  pip install -r requirements.txt
  ```
- Install PointNet++.
- Download environment data:
  ```Shell
  bash experiments/scripts/download_data.sh
  ```
## Pretrained Model Demo

- Download pretrained models:
  ```Shell
  bash experiments/scripts/download_model.sh
  ```
- Demo model test:
  ```Shell
  bash experiments/scripts/test_demo.sh
  ```

| Example 1 | Example 2 |
|---|---|
| <img src="assets/demo.gif" width="224" height="224"/> | <img src="assets/demo3.gif" width="224" height="224"/> |
## Save Data and Offline Training

- Download example offline data:
  ```Shell
  bash experiments/scripts/download_offline_data.sh
  ```
  The `.npz` dataset (saved replay buffer) can be found in `data/offline_data` and can be loaded for training (a few of its attributes are deprecated); a quick way to inspect a buffer file is sketched after this list. The image version of the offline buffer can be found here.
- To save extra GPUs for online rollouts, use the offline training script:
  ```Shell
  bash ./experiments/scripts/train_offline.sh bc_aux_dagger.yaml BC
  ```
- To save a dataset:
  ```Shell
  bash ./experiments/scripts/train_online_save_buffer.sh bc_save_data.yaml BC
  ```
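As a sanity check, a downloaded buffer can be opened directly with NumPy. This is a minimal sketch only: the file name `example_buffer.npz` is hypothetical, and the stored keys vary, so list `buffer.files` to see the actual attributes.

```python
import numpy as np

# Load a saved replay buffer; the file name below is hypothetical --
# use whichever .npz file download_offline_data.sh placed in data/offline_data.
buffer = np.load("data/offline_data/example_buffer.npz", allow_pickle=True)

# Print every stored attribute with its shape and dtype.
for key in buffer.files:
    arr = buffer[key]
    print(key, getattr(arr, "shape", None), getattr(arr, "dtype", None))
```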
## Online Training and Testing

- We use ray for parallel rollout and training. The training scripts might require adjustment for your local machine; see `config.py` for some notes. A minimal sketch of the rollout pattern follows this list.
- Train online (use visdom and tensorboard to monitor):
  ```Shell
  bash ./experiments/scripts/train_online_visdom.sh td3_critic_aux_policy_aux.yaml DDPG
  ```
- Test on YCB objects (replace `demo_model` with your trained model; logs and videos are saved to `output_misc`):
  ```Shell
  bash ./experiments/scripts/test_ycb.sh demo_model
  ```
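For orientation, the snippet below shows the generic ray actor-pool pattern that parallel rollout setups like this typically follow. It is a sketch under stated assumptions: `RolloutWorker` and its methods are hypothetical and do not match the repo's actual classes (see `core/trainer` in the file structure below for the real setup).

```python
import ray

# Start ray locally; resource arguments usually need tuning per machine,
# which is why the training scripts may require adjustment.
ray.init()

@ray.remote
class RolloutWorker:
    """Hypothetical worker class; the repo's real ray actors live in core/trainer."""

    def __init__(self, worker_id):
        self.worker_id = worker_id

    def rollout(self):
        # Collect one episode and return its transitions for the replay buffer.
        return {"worker": self.worker_id, "transitions": []}

# Launch several workers and gather their episodes in parallel.
workers = [RolloutWorker.remote(i) for i in range(4)]
episodes = ray.get([worker.rollout.remote() for worker in workers])
print(len(episodes), "episodes collected")
```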
## Note

- Check out `core/test_realworld_ros_final.py` for an example of real-world usage.
- Related works: OMG, ACRONYM, 6DGraspNet, 6DGraspNet-Pytorch, ContactGraspNet, Unseen-Clustering.
- To use the full ACRONYM dataset with ShapeNet meshes, please follow ACRONYM to download the meshes and grasps, and follow OMG-Planner to process and save them in `/data`. `filter_shapenet.json` can then be used for training; a loading sketch follows this list.
- Please use the GitHub issue tracker to report bugs. For other questions, please contact Lirui Wang.
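If it helps, here is a minimal sketch of peeking at the object-index file. The path under `experiments/object_index/` is an assumption based on the file structure below, and the JSON layout should be verified against the actual file.

```python
import json

# Load the ShapeNet object index used for training; the exact path is an
# assumption -- adjust it to wherever filter_shapenet.json lives in your checkout.
with open("experiments/object_index/filter_shapenet.json") as f:
    object_index = json.load(f)

print(type(object_index).__name__, "with", len(object_index), "entries")
```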
## File Structure

```
├── ...
├── GADDPG
| |── data # training data
| | |── grasps # grasps from the ACRONYM dataset
| | |── objects # object meshes, sdf, urdf, etc
| | |── robots # robot meshes, urdf, etc
| | └── gaddpg_scenes # test scenes
| |── env # environment-related code
| | |── panda_scene # environment and task
| | └── panda_gripper_hand_camera # franka panda with gripper and camera
| |── OMG # expert planner submodule
| |── experiments # experiment scripts
| | |── config # hyperparameters for training, testing and environment
| | |── scripts # main running scripts
| | |── model_spec # network architecture spec
| | |── cfgs # experiment config and hyperparameters
| | └── object_index # object indexes
| |── core # agents and learning
| | |── train_online # online training
| | |── train_test_offline # testing and offline training
| | |── network # network architecture
| | |── test_realworld_ros_final # real-world script example
| | |── agent # main agent code
| | |── replay_memory # replay buffer
| | |── trainer # ray-related training setup
| | └── ...
| |── output # trained model
| |── output_misc # log and videos
| └── ...
└── ...
```
## Citation

If you find GA-DDPG useful in your research, please consider citing:

```
@inproceedings{wang2021goal,
  author    = {Lirui Wang and Yu Xiang and Wei Yang and Arsalan Mousavian and Dieter Fox},
  title     = {Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds},
  booktitle = {The Conference on Robot Learning (CoRL)},
  year      = {2021}
}
```
## License

GA-DDPG is licensed under the MIT License.