
DenseFusion

<p align="center"> <img src ="assets/pullfig.png" width="1000" /> </p>

News

We have released the code and arXiv preprint of our new project, 6-PACK, which builds on this work and performs category-level 6D pose tracking.

Table of Contents

Overview

This repository is the implementation code of the paper "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion" (arXiv, Project, Video) by Wang et al. at the Stanford Vision and Learning Lab and the Stanford People, AI & Robots Group. The model takes an RGB-D image as input and predicts the 6D pose of each object in the frame. The network is implemented in PyTorch and the rest of the framework is in Python. Since this project focuses on the 6D pose estimation process, we do not restrict the choice of segmentation model; you can choose your preferred semantic-segmentation or instance-segmentation method according to your needs. In this repo, we provide the full implementation code of the DenseFusion model, the Iterative Refinement model, and the vanilla SegNet semantic-segmentation model used in our real-robot grasping experiment. The ROS code of the real-robot grasping experiment is not included.

Requirements

Code Structure

Datasets

This work is tested on two 6D object pose estimation datasets: the YCB_Video dataset and the LineMOD dataset.

Download the YCB_Video dataset, the preprocessed LineMOD dataset, and the trained checkpoints (you can modify this script according to your needs):

./download.sh

Training

./experiments/scripts/train_ycb.sh
./experiments/scripts/train_linemod.sh

Training Process: The training contains two stages: (i) training of the DenseFusion model and (ii) training of the Iterative Refinement model. In this code, the DenseFusion model is trained first. Once the average testing distance (ADD for non-symmetric objects, ADD-S for symmetric objects) drops below a certain margin, training of the Iterative Refinement model starts automatically and the DenseFusion model is fixed from then on. You can change this margin to obtain a better DenseFusion result without refinement, but it will be inferior to the final result after iterative refinement.

Checkpoints and Resuming: After every 1000 training batches, a pose_model_current.pth / pose_refine_model_current.pth checkpoint is saved; you can use it to resume training. After each testing epoch, if the average distance is the best so far, a pose_model_(epoch)_(best_score).pth / pose_model_refiner_(epoch)_(best_score).pth checkpoint is saved; you can use it for evaluation.
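The minimal sketch below summarizes this schedule and the checkpoint naming. All names (train_one_epoch, evaluate, the margin value, the save directory) are illustrative placeholders, not the exact identifiers or defaults of the training script.

```python
import torch

def train_schedule(estimator, refiner, train_one_epoch, evaluate,
                   max_epochs=500, refine_margin=0.013, save_dir='trained_models'):
    """Illustrative two-stage schedule: train DenseFusion until the average test
    distance (ADD / ADD-S, in meters) drops below refine_margin, then freeze it
    and train only the Iterative Refinement model."""
    best_test, refine_started = float('inf'), False
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(estimator, refiner if refine_started else None)
        avg_dis = evaluate(estimator, refiner if refine_started else None)

        if avg_dis < best_test:
            best_test = avg_dis
            # best-so-far checkpoints, named with the epoch and the score
            torch.save(estimator.state_dict(), f'{save_dir}/pose_model_{epoch}_{avg_dis}.pth')
            if refine_started:
                torch.save(refiner.state_dict(), f'{save_dir}/pose_model_refiner_{epoch}_{avg_dis}.pth')

        if not refine_started and best_test < refine_margin:
            refine_started = True                 # switch to stage (ii)
            for p in estimator.parameters():      # DenseFusion weights stay fixed from here on
                p.requires_grad = False
    return best_test
```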

Notice: The training of the Iterative Refinement model takes some time. Please be patient; the improvement typically appears after about 30 epochs.

To train the vanilla SegNet semantic-segmentation model used in the real-robot grasping experiment:

cd vanilla_segmentation/
python train.py --dataset_root=./datasets/ycb/YCB_Video_Dataset

To make the best use of the training set, several data augmentation techniques are used in this code (a minimal sketch of the first two follows the list):

(1) Random noise is added to the brightness, contrast, saturation and hue of the input RGB image with the torchvision.transforms.ColorJitter function, which we configure as torchvision.transforms.ColorJitter(0.2, 0.2, 0.2, 0.05).

(2) Random translation noise is added to the training samples of the pose estimator; we set the range of the translation noise to 3 cm for both datasets.

(3) For the YCB_Video dataset, since the synthetic data does not contain backgrounds, we randomly select real training images as backgrounds. In each frame, we also randomly select two instance-segmentation clips from another synthetic training image and paste them in front of the input RGB-D image, so that more occlusion situations are generated.
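A minimal sketch of augmentations (1) and (2), assuming the RGB input is a PIL image and the ground-truth translation is in meters; the function and variable names are illustrative, not the repo's exact dataloader code:

```python
import numpy as np
import torchvision.transforms as transforms

# (1) color jitter with the parameters quoted above (brightness, contrast, saturation, hue)
color_jitter = transforms.ColorJitter(0.2, 0.2, 0.2, 0.05)

def augment_sample(rgb_pil, target_translation_m):
    """Jitter the RGB image and add up to +/- 3 cm of uniform noise to the
    ground-truth translation that supervises the pose estimator."""
    rgb_aug = color_jitter(rgb_pil)

    # (2) random translation noise within a 3 cm range on each axis
    noise = np.random.uniform(-0.03, 0.03, size=3)
    return rgb_aug, np.asarray(target_translation_m) + noise
```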

Evaluation

Evaluation on YCB_Video Dataset

For a fair comparison, we use the same segmentation results as PoseCNN and compare with their results after ICP refinement. Please run:

./experiments/scripts/eval_ycb.sh

This script will first download the YCB_Video_toolbox to the root folder of this repo and then test the selected DenseFusion and Iterative Refinement models on the 2949 keyframes of the 10 testing videos in the YCB_Video dataset, using the same segmentation results as PoseCNN. The results without refinement are stored in experiments/eval_result/ycb/Densefusion_wo_refine_result and the refined results in experiments/eval_result/ycb/Densefusion_iterative_result.

After that, add the paths of experiments/eval_result/ycb/Densefusion_wo_refine_result/ and experiments/eval_result/ycb/Densefusion_iterative_result/ to YCB_Video_toolbox/evaluate_poses_keyframe.m and run it with MATLAB. YCB_Video_toolbox/plot_accuracy_keyframe.m then produces the comparison plot. The easiest way to do this is to copy the adapted scripts from the replace_ycb_toolbox/ folder over the corresponding files in the YCB_Video_toolbox/ folder. You will still need to set the path of your YCB_Video Dataset/ in globals.m and copy the two result folders (Densefusion_wo_refine_result/ and Densefusion_iterative_result/) into the YCB_Video_toolbox/ folder.
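If you prefer to script the copy steps, the hypothetical helper below (run from the repo root; Python 3.8+ for dirs_exist_ok) mirrors the instructions above. You still need to set the dataset path in globals.m by hand.

```python
import glob
import os
import shutil

# copy the adapted MATLAB scripts over the originals in YCB_Video_toolbox/
for m_file in glob.glob('replace_ycb_toolbox/*.m'):
    shutil.copy(m_file, 'YCB_Video_toolbox/')

# copy the two result folders into YCB_Video_toolbox/
for result_dir in ('Densefusion_wo_refine_result', 'Densefusion_iterative_result'):
    src = os.path.join('experiments/eval_result/ycb', result_dir)
    dst = os.path.join('YCB_Video_toolbox', result_dir)
    shutil.copytree(src, dst, dirs_exist_ok=True)  # requires Python 3.8+
```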

Evaluation on LineMOD Dataset

Just run:

./experiments/scripts/eval_linemod.sh

This script will test the models on the testing set of the LineMOD dataset with the masks output by the trained vanilla SegNet model. The results are printed at the end of the run and saved as a log in experiments/eval_result/linemod/.

Results

Quantitative evaluation results with the ADD-S metric, compared to other RGB-D methods. Ours (per-pixel) is the DenseFusion model without refinement and Ours (iterative) is the result with iterative refinement.

<p align="center"> <img src ="assets/result_ycb.png" width="600" /> </p>

Important! Before you use these numbers to compare against your method, please be aware of one issue: a difficulty when testing on the YCB_Video dataset is letting the network tell the difference between the objects 051_large_clamp and 052_extra_large_clamp. All approaches in this table use the same segmentation masks released by PoseCNN without any detection priors, so all of them suffer a performance drop on these two objects because of the poor detection results, and this drop is also reflected in the final overall score. If you have added detection priors to your detector to distinguish these two objects, please clarify this, or do not copy the overall score for comparison experiments.

Quantitative evaluation results with the ADD metric for non-symmetric objects and ADD-S for the symmetric objects (eggbox, glue), compared to other RGB-D methods. High-performing RGB methods are also listed for reference.

<p align="center"> <img src ="assets/result_linemod.png" width="500" /> </p>

Qualitative results on the YCB_Video dataset.

<p align="center"> <img src ="assets/compare.png" width="600" /> </p>

Trained Checkpoints

You can download the trained DenseFusion and Iterative Refinement checkpoints for both datasets from Link.

Tips for your own dataset

As you can see in this repo, the network code and the hyperparameters (lr and w) remain the same for both datasets, which means you might not need to adjust the network structure or hyperparameters much when you use this repo on your own dataset. Please make sure that the distance metric in your dataset is converted to meters; otherwise the hyperparameter w needs to be adjusted. Several useful tools, including LabelFusion and sixd_toolkit, have been tested and work well. (Please make sure to turn on the depth image collection in LabelFusion when you use it.)
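For example, if your dataset stores object models and poses in millimeters, a conversion along these lines (illustrative names only) keeps the distance-based loss, and therefore w, on the same scale as in this repo:

```python
import numpy as np

MM_TO_M = 1.0 / 1000.0

def to_meters(model_points_mm, translation_mm):
    """Convert millimeter-scale model points and translations to meters so that
    the distance-based training objective matches the scale assumed by w."""
    return np.asarray(model_points_mm) * MM_TO_M, np.asarray(translation_mm) * MM_TO_M
```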

Citations

Please cite DenseFusion if you use this repository in your publications:

@inproceedings{wang2019densefusion,
  title={DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion},
  author={Wang, Chen and Xu, Danfei and Zhu, Yuke and Mart{\'\i}n-Mart{\'\i}n, Roberto and Lu, Cewu and Fei-Fei, Li and Savarese, Silvio},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

License

Licensed under the MIT License