# PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking
## Citing PoseRBPF

If you find the PoseRBPF code useful, please consider citing:
```bibtex
@inproceedings{deng2019pose,
    author    = {Xinke Deng and Arsalan Mousavian and Yu Xiang and Fei Xia and Timothy Bretl and Dieter Fox},
    title     = {PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking},
    booktitle = {Robotics: Science and Systems (RSS)},
    year      = {2019}
}
```

```bibtex
@inproceedings{deng2020self,
    author    = {Xinke Deng and Yu Xiang and Arsalan Mousavian and Clemens Eppner and Timothy Bretl and Dieter Fox},
    title     = {Self-supervised 6D Object Pose Estimation for Robot Manipulation},
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    year      = {2020}
}
```
## Installation

```bash
git clone https://github.com/NVlabs/PoseRBPF.git --recursive
```

Install dependencies:
- Install Anaconda according to the official website.
- Create the virtual environment with `pose_rbpf_env.yml`:
```bash
conda env create -f pose_rbpf_env.yml
conda activate pose_rbpf_env
```
- Compile the YCB Renderer according to its instructions.
- Compile the utility functions with:
```bash
sh build.sh
```
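Before moving on, it can be worth confirming that the new environment sees the GPU. The snippet below is a minimal sanity check of my own, not part of the repository:

```python
# Minimal sanity check (not part of the repo): confirm that PyTorch
# inside the conda environment can see CUDA and the GPU.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```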
## Download

- CAD models for YCB objects: 743 MB
- CAD models for T-LESS objects: 502 MB
- PoseCNN weights for 20 YCB objects: 76 MB
- RGB auto-encoder weights (YCB): 6 GB
- RGB-D auto-encoder weights (YCB): 9 GB
- Self-supervised trained RGB auto-encoder weights (YCB): 5 GB
- RGB auto-encoder weights (T-LESS): 8 GB
- RGB-D auto-encoder weights (T-LESS): 12 GB

Download files as needed. Extract the CAD models under the `cad_models` directory, and extract the model weights under the `checkpoints` directory.
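If you prefer to script the extraction, a minimal sketch follows; the archive names below are assumptions, so substitute the file names you actually downloaded:

```python
# Sketch only: the archive names below are assumptions -- replace them
# with the files you actually downloaded from the links above.
import pathlib
import tarfile

targets = {
    "ycb_models.tar.gz": "cad_models",           # CAD models -> cad_models/
    "ycb_ckpts_roi_rgbd.tar.gz": "checkpoints",  # weights    -> checkpoints/
}
for archive, dest in targets.items():
    pathlib.Path(dest).mkdir(exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)
```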
## A quick demo on the YCB Video Dataset

- The demo shows tracking `003_cracker_box` on the YCB Video Dataset.
- Run the script `download_demo.sh` to download the checkpoint (434 MB), CAD models (743 MB), 2D detections (13 MB), and the necessary data (3 GB) for the demo:
```bash
./scripts/download_demo.sh
```
- Then you should have files organized like:
```
├── ...
├── PoseRBPF
| |── cad_models
| | |── ycb_models
| | └── ...
| |── checkpoints
| | |── ycb_ckpts_roi_rgbd
| | |── ycb_codebooks_roi_rgbd
| | |── ycb_configs_roi_rgbd
| | └── ...
| |── detections
| | |── posecnn_detections
| | |── tless_retina_detections
| |── config      # configuration files for training and DPF
| |── networks    # auto-encoder networks
| |── pose_rbpf   # particle filters
| └── ...
|── YCB_Video_Dataset # to store ycb data
| |── cameras
| |── data
| |── image_sets
| |── keyframes
| |── poses
| └── ...
└── ...
```
- Run the demo with `003_cracker_box`. The results will be stored in `./results/`:
```bash
./scripts/run_demo.sh
```
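For orientation, the sketch below illustrates the Rao-Blackwellized idea behind the tracker in a loose, self-contained form; it is not the repository's API, and the sizes and likelihood model are placeholders. Each particle samples a translation hypothesis, while its rotation is kept as a full discrete distribution that is updated analytically from codebook similarity scores:

```python
# Loose conceptual sketch of one Rao-Blackwellized particle filter step
# (illustrative only; not the repo's implementation).
import numpy as np

n_particles, n_rotations = 100, 192  # illustrative sizes
rng = np.random.default_rng(0)
translations = rng.normal(0.0, 0.1, (n_particles, 3))
rot_dists = np.full((n_particles, n_rotations), 1.0 / n_rotations)

def step(codebook_sims, motion_noise=0.01):
    """codebook_sims: (n_particles, n_rotations) rotation similarity scores."""
    global translations, rot_dists
    # 1. Propagate the sampled state (translation) with a motion model.
    translations = translations + rng.normal(0.0, motion_noise, translations.shape)
    # 2. Update the analytic state (rotation distribution) with Bayes' rule.
    likelihood = np.exp(codebook_sims)
    rot_dists = rot_dists * likelihood
    rot_dists /= rot_dists.sum(axis=1, keepdims=True)
    # 3. Weight each particle by its marginal likelihood over rotations,
    #    then resample.
    weights = likelihood.sum(axis=1)
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    translations, rot_dists = translations[idx], rot_dists[idx]
```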
## Online Real-world Pose Estimation using ROS

- Due to the incompatibility between ROS Kinetic and Python 3, the ROS node only runs with Python 2.7. First create the virtual environment with `pose_rbpf_env_py2.yml`:
```bash
conda env create -f pose_rbpf_env_py2.yml
conda activate pose_rbpf_env_py2
```
- Compile the YCB Renderer according to its instructions.
- Compile the utility functions with:
```bash
sh build.sh
```
- Make sure you can run the demo above first.
- Install ROS if it is not already installed:
```bash
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
```
- Update the Python packages:
```bash
conda install -c auto catkin_pkg
pip install -U rosdep rosinstall_generator wstool rosinstall six vcstools
pip install msgpack
pip install empy
```
- Source ROS (every time before launching the node):
```bash
source /opt/ros/kinetic/setup.bash
```
- Initialize rosdep:
```bash
sudo rosdep init
rosdep update
```
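Before launching the nodes on a real camera, it can also help to verify that the image topic is actually publishing. A minimal probe sketch follows; the topic name is an assumption, so check `rostopic list` for yours:

```python
# Minimal topic probe (not part of the repo); the topic name below is an
# assumption -- use `rostopic list` to find your camera's actual topic.
import rospy
from sensor_msgs.msg import Image

def callback(msg):
    rospy.loginfo("image %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("topic_probe")
rospy.Subscriber("/camera/color/image_raw", Image, callback)
rospy.spin()
```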
### Single object tracking demo

- Download the demo rosbag:
```bash
./scripts/download_ros_demo.sh
```
- Run the PoseCNN node (with roscore running in another terminal; download the PoseCNN weights first):
```bash
./scripts/run_ros_demo_posecnn.sh
```
- Run the PoseRBPF node for RGB-D tracking (with roscore running in another terminal):
```bash
./scripts/run_ros_demo.sh
```
- (Optional) For RGB tracking, run this command instead:
```bash
./scripts/run_ros_demo_rgb.sh
```
- Run RVIZ in the PoseRBPF directory:
```bash
rosrun rviz rviz -d ./ros/tracking.rviz
```
- Once you see `*** PoseRBPF Ready ...` in the PoseRBPF terminal, play the rosbag in another terminal, and you should see the tracking demo:
```bash
rosbag play ./ros_data/demo_single.bag
```
### Multiple object tracking demo

- Download the demo rosbag:
```bash
./scripts/download_ros_demo_multiple.sh
```
- Run the PoseCNN node (with roscore running in another terminal; download the PoseCNN weights first):
```bash
./scripts/run_ros_demo_posecnn.sh
```
- Run the PoseRBPF node with the self-supervised trained RGB auto-encoder weights:
```bash
./scripts/run_ros_demo_rgb_multiple_ssv.sh
```
- (Optional) Run the PoseRBPF node with the RGB-D auto-encoder weights instead:
```bash
./scripts/run_ros_demo_multiple.sh
```
- (Optional) Run the PoseRBPF node with the RGB auto-encoder weights instead:
```bash
./scripts/run_ros_demo_rgb_multiple.sh
```
- Run RVIZ in the PoseRBPF directory:
```bash
rosrun rviz rviz -d ./ros/tracking.rviz
```
- Once you see `*** PoseRBPF Ready ...` in the PoseRBPF terminal, play the rosbag in another terminal, and you should see the tracking demo:
```bash
rosbag play ./ros_data/demo_multiple.bag
```

Note that PoseRBPF takes some time to initialize each object before tracking. You can pause the rosbag by pressing the space bar during initialization, then press space again to resume tracking.
## Testing on the YCB Video Dataset

- Download checkpoints from the google drive folder (`ycb_rgbd_full.tar.gz` or `ycb_rgb_full.tar.gz`) and unzip them to the checkpoint directory.
- Download all the data in the YCB Video Dataset so that the `../YCB_Video_Dataset/data` folder contains all the sequences.
- Run RGB-D tracking (using `002_master_chef_can` as an example here):
```bash
sh scripts/test_ycb_rgbd/val_ycb_002_rgbd.sh 0 1
```
- Run RGB tracking (using `002_master_chef_can` as an example here):
```bash
sh scripts/test_ycb_rgb/val_ycb_002_rgb.sh 0 1
```
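For reference, accuracy on the YCB Video Dataset is commonly reported with the average distance (ADD) metric. The sketch below is my own illustration of that metric, not the repository's evaluation code:

```python
# ADD metric sketch (illustrative; not the repo's evaluation script).
import numpy as np

def add_error(model_pts, R_gt, t_gt, R_est, t_est):
    """Mean distance between corresponding model points under two poses.

    model_pts: (N, 3); R_gt, R_est: (3, 3); t_gt, t_est: (3,).
    """
    gt = model_pts @ R_gt.T + t_gt
    est = model_pts @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean()
```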
## Testing on the T-LESS Dataset

- Download checkpoints from the google drive folder (`tless_rgbd_full.tar.gz` or `tless_rgb_full.tar.gz`) and unzip them to the checkpoint directory.
- Download all the data in the T-LESS Dataset so that the `../TLess/` folder contains all the sequences.
- Download all the models for T-LESS objects from the google drive folder.
- Then you should have files organized like:
```
├── ...
├── PoseRBPF
| |── cad_models
| | |── ycb_models
| | |── tless_models
| | └── ...
| |── checkpoints
| | |── tless_ckpts_roi_rgbd
| | |── tless_codebooks_roi_rgbd
| | |── tless_configs_roi_rgbd
| | └── ...
| |── detections
| | |── posecnn_detections
| | |── tless_retina_detections
| |── config      # configuration files for training and DPF
| |── networks    # auto-encoder networks
| |── pose_rbpf   # particle filters
| └── ...
|── YCB_Video_Dataset # to store ycb data
| |── cameras
| |── data
| |── image_sets
| |── keyframes
| |── poses
| └── ...
|── TLess # to store tless data
| |── t-less_v2
| | |── test_primesense
| | └── ...
| └── ...
└── ...
```
- Run RGB-D tracking (using `obj_01` as an example here):
```bash
sh scripts/test_tless_rgbd/val_tless_01_rgbd.sh 0 1
```
- Run RGB tracking (using `obj_01` as an example here):
```bash
sh scripts/test_tless_rgb/val_tless_01_rgb.sh 0 1
```
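Since most T-LESS objects are texture-less and symmetric, a symmetry-aware variant such as ADD-S (average distance to the nearest model point) is often more informative than plain ADD; a sketch under the same caveats as above:

```python
# ADD-S sketch for symmetric objects (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def adds_error(model_pts, R_gt, t_gt, R_est, t_est):
    """Mean distance from each estimated point to its nearest ground-truth point."""
    gt = model_pts @ R_gt.T + t_gt
    est = model_pts @ R_est.T + t_est
    return cKDTree(gt).query(est)[0].mean()
```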
## Testing on the DexYCB Dataset

- Download checkpoints from the google drive folder (`ycb_rgbd_full.tar.gz` or `ycb_rgb_full.tar.gz`) and unzip them to the checkpoint directory.
- Download the DexYCB dataset from here.
- Download the PoseCNN results on the DexYCB dataset from here.
- Create symlinks for the DexYCB dataset and the PoseCNN results:
```bash
cd $ROOT/data/DEX_YCB
ln -s $dex_ycb_data data
ln -s $results_posecnn_data results_posecnn
```
- Install the PyTorch PoseCNN layers according to the instructions here.
- Run RGB-D tracking:
```bash
./scripts/test_dex_rgbd/dex_ycb_test_rgbd_s0.sh $GPU_ID
```
- Run RGB tracking:
```bash
./scripts/test_dex_rgb/dex_ycb_test_rgb_s0.sh $GPU_ID
```
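A quick way to confirm that the symlinks from the setup step resolve correctly (run from `$ROOT`; the paths follow the commands above):

```python
# Verify the DexYCB symlinks created above point at real directories.
import os

for path in ["data/DEX_YCB/data", "data/DEX_YCB/results_posecnn"]:
    print(path, "->", os.path.realpath(path), "exists:", os.path.isdir(path))
```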
## Training

- Download the Microsoft COCO 2017 val images from here for data augmentation.
- Store the folder `val2017` in `../coco/`.
- Run the training example for `002_master_chef_can` from the YCB objects. The training should run on a single NVIDIA TITAN Xp GPU:
```bash
sh scripts/train_ycb_rgbd/train_script_ycb_002.sh
```
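For context, the trained auto-encoder is used to build a rotation codebook: the object is rendered at discretized rotations, each rendering is encoded, and at test time the cosine similarity between the encoded observation and the codebook entries scores the discretized rotations. The sketch below illustrates that idea; the `encoder` and `render_fn` callables are placeholders, not the repository's interfaces:

```python
# Codebook sketch (illustrative; `encoder` and `render_fn` are placeholders).
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_codebook(encoder, render_fn, rotations):
    """Encode one rendering per discretized rotation into a unit-norm code."""
    codes = torch.stack([encoder(render_fn(R)) for R in rotations])
    return F.normalize(codes, dim=1)  # unit norm: dot product = cosine

@torch.no_grad()
def rotation_scores(encoder, crop, codebook):
    """Cosine similarity of an observed crop against every codebook entry."""
    code = F.normalize(encoder(crop), dim=0)
    return codebook @ code  # (n_rotations,) similarity scores
```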
## Acknowledgements

We have referred to part of the RoI align code from maskrcnn-benchmark.
## License

PoseRBPF is licensed under the NVIDIA Source Code License - Non-commercial.