SGPA: Structure-Guided Prior Adaptation for Category-Level 6D Object Pose Estimation
This is the PyTorch implementation of the ICCV'21 paper SGPA: Structure-Guided Prior Adaptation for Category-Level 6D Object Pose Estimation by Kai Chen and Qi Dou.
<p align="center"> <img src="images/teaser.png" alt="intro" width="100%"/> </p>
Abstract
Category-level 6D object pose estimation aims to predict the position and orientation for unseen objects, which plays a pillar role in many scenarios such as robotics and augmented reality. The significant intra-class variation is the bottleneck challenge in this task yet remains unsolved so far. In this paper, we take advantage of category prior to overcome this problem by innovating a structure-guided prior adaptation scheme to accurately estimate 6D pose for individual objects. Different from existing prior based methods, given one object and its corresponding category prior, we propose to leverage their structure similarity to dynamically adapt the prior to the observed object. The prior adaptation intrinsically associates the adopted prior with different objects, from which we can accurately reconstruct the 3D canonical model of the specific object for pose estimation. To further enhance the structure characteristic of objects, we extract low-rank structure points from the dense object point cloud, therefore more efficiently incorporating sparse structural information during prior adaptation. Extensive experiments on CAMERA25 and REAL275 benchmarks demonstrate significant performance improvement.
Requirements
- Linux (tested on Ubuntu 18.04)
- Python 3.6+
- CUDA 10.0
- PyTorch 1.1.0
Installation
Conda virtual environment
We recommend using conda to set up the environment.
If you have already installed conda, please use the following commands.
conda create -n sgpa python=3.6
conda activate sgpa
pip install -r requirements.txt
Build PointNet++
cd SGPA/pointnet2/pointnet2
python setup.py install
Build nn_distance
cd SGPA/lib/nn_distance
python setup.py install
Dataset
Download camera_train, camera_val, real_train, real_test, ground-truth annotations and mesh models provided by NOCS.
Then, organize and preprocess these files following SPD. For a quick evaluation, we provide the processed testing data for REAL275. You can download it here and organize the testing data as follows:
SGPA
├── data
│ └── Real
│ ├──test
│ └──test_list.txt
└── results
└── mrcnn_results
└──real_test
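To confirm the testing data is placed correctly, a quick sanity check like the one below can be run from the directory containing `SGPA`. This is only an illustrative sketch based on the directory tree above; `check_layout` and `EXPECTED_PATHS` are hypothetical helpers, not part of the repository.

```python
import os

# Expected layout after extracting the processed REAL275 testing data,
# taken from the directory tree above (hypothetical helper, not in the repo).
EXPECTED_PATHS = [
    "data/Real/test",
    "data/Real/test_list.txt",
    "results/mrcnn_results/real_test",
]

def check_layout(root):
    """Return the expected paths that are missing under `root`."""
    return [p for p in EXPECTED_PATHS if not os.path.exists(os.path.join(root, p))]

if __name__ == "__main__":
    missing = check_layout("SGPA")
    if missing:
        print("Missing paths:", missing)
    else:
        print("Dataset layout looks correct.")
```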
Evaluation
Please download our trained model for REAL275 here and put it in the 'SGPA/model' directory. You can then run a quick evaluation on the REAL275 dataset with the following command. Our trained model for the CAMERA dataset can be downloaded here.
bash eval.sh
Train
To train the model, first download the complete dataset, then organize and preprocess it as described above.
train.py is the main file for training. You can start training with the following command.
bash train.sh
Citation
If you find the code useful, please cite our paper.
@inproceedings{chen2021sgpa,
title={Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation},
author={Chen, Kai and Dou, Qi},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2773--2782},
year={2021}
}
For any questions, please feel free to contact Kai Chen (kaichen@cse.cuhk.edu.hk).
Acknowledgment
The dataset is provided by NOCS. Our code is developed based on SPD and Pointnet2.PyTorch.