CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation

License: MIT PWC<img src="demo/Pytorch_logo.png" width="10%">

This repository is the PyTorch implementation of our paper: <a href="https://www.tri.global/" target="_blank"> <img align="right" src="demo/tri-logo.png" width="25%"/> </a>

CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation<br> Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone, Zsolt Kira <br> International Conference on Robotics and Automation (ICRA), 2022<br>

[Project Page] [arXiv] [PDF] [Video] [Poster]

Explore CenterSnap in Colab<br>

Follow-up ECCV'22 work:

ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization<br> Muhammad Zubair Irshad, Sergey Zakharov, Rares Ambrus, Thomas Kollar, Zsolt Kira, Adrien Gaidon <br> European Conference on Computer Vision (ECCV), 2022<br>

[Project Page] [arXiv] [PDF] [Video] [Poster]

<p align="center"> <img src="demo/Pose_CS.gif" width="100%"> </p> <p align="center"> <img src="demo/method.gif" width="100%"> </p>

Citation

If you find this repository useful, please consider citing:

@inproceedings{irshad2022centersnap,
     title = {CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation},
     author = {Muhammad Zubair Irshad and Thomas Kollar and Michael Laskey and Kevin Stone and Zsolt Kira},
     journal = {IEEE International Conference on Robotics and Automation (ICRA)},
     year = {2022}
     }


@inproceedings{irshad2022shapo,
     title = {ShAPO: Implicit Representations for Multi-Object Shape Appearance and Pose Optimization},
     author = {Muhammad Zubair Irshad and Sergey Zakharov and Rares Ambrus and Thomas Kollar and Zsolt Kira and Adrien Gaidon},
     journal = {European Conference on Computer Vision (ECCV)},
     year = {2022}
     }

Contents

💻 Environment

Create a python 3.8 virtual environment and install requirements:

cd $CenterSnap_Repo
conda create -y --prefix ./env python=3.8
conda activate ./env/
./env/bin/python -m pip install --upgrade pip
./env/bin/python -m pip install -r requirements.txt 

Install torch==1.7.1 and torchvision==0.8.2 for your CUDA version. The code was built and tested on CUDA 10.2. A sample command to install torch for CUDA 10.2 is as follows:

pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2
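To confirm the pinned builds actually got installed, a quick sanity check along these lines can help (the `parse_version`/`require` helpers below are our own illustration, not part of this repo):

```python
def parse_version(v):
    """Turn a version string like '1.7.1+cu102' into a comparable tuple (1, 7, 1)."""
    core = v.split("+")[0]  # drop local build tags such as '+cu102'
    return tuple(int(part) for part in core.split(".")[:3])

def require(installed, pinned):
    """Return True if the installed version matches the pinned one exactly."""
    return parse_version(installed) == parse_version(pinned)

if __name__ == "__main__":
    try:
        import torch
        import torchvision
        print("torch OK:", require(torch.__version__, "1.7.1"))
        print("torchvision OK:", require(torchvision.__version__, "0.8.2"))
        print("CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("torch/torchvision not installed yet")
```

On a correctly configured GPU machine, `torch.cuda.is_available()` should print `True`.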

📊 Dataset

New Update: Please check out the distributed script of our new ECCV'22 work ShAPO if you'd like to collect your own data from scratch in a couple of hours. That distributed script collects data in the same format CenterSnap requires, with a few minor modifications as described in that repo.

  1. Download pre-processed dataset

We recommend downloading the preprocessed dataset to train and evaluate the CenterSnap model. Download and untar the Synthetic (868GB) and Real (70GB) datasets. These files contain all the training and validation data you need to replicate our results.

cd $CenterSnap_REPO/data
wget https://tri-robotics-public.s3.amazonaws.com/centersnap/CAMERA.tar.gz
tar -xzvf CAMERA.tar.gz

wget https://tri-robotics-public.s3.amazonaws.com/centersnap/Real.tar.gz
tar -xzvf Real.tar.gz

The data directory structure should follow:

data
├── CAMERA
│   ├── train
│   └── val_subset
└── Real
    ├── train
    └── test
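Before training, you can sanity-check that the extracted data matches the layout above with a short script (the `data` root path and the `check_layout` helper are our own illustration, not part of this repo; adjust the root to wherever you untarred):

```python
from pathlib import Path

# Expected subdirectories relative to the data root, per the tree above.
EXPECTED = [
    "CAMERA/train",
    "CAMERA/val_subset",
    "Real/train",
    "Real/test",
]

def check_layout(root):
    """Return the list of expected subdirectories missing under `root`."""
    root = Path(root)
    return [sub for sub in EXPECTED if not (root / sub).is_dir()]

if __name__ == "__main__":
    missing = check_layout("data")
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("dataset layout looks good")
```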
  2. To prepare your own dataset, we provide additional scripts under prepare_data.

✨ Training and Inference

  1. Train on NOCS Synthetic (requires 13GB GPU memory):
./runner.sh net_train.py @configs/net_config.txt

Note that runner.sh is equivalent to using python to run the script. Additionally, it sets up the PYTHONPATH and CenterSnap environment paths automatically.

  2. Finetune on NOCS Real Train (Note that good results can be obtained after finetuning on the Real train set for only a few epochs, i.e. 1-5):
./runner.sh net_train.py @configs/net_config_real_resume.txt --checkpoint /path/to/best/checkpoint
  3. Inference on a NOCS Real Test Subset
<p align="center"> <img src="demo/reconstruction.gif" width="100%"> </p>

Download a small NOCS Real subset from [here]

./runner.sh inference/inference_real.py @configs/net_config.txt --data_dir path_to_nocs_test_subset --checkpoint checkpoint_path_here

You should see the visualizations saved in results/CenterSnap. Change the --output_path in *config.txt to save them to a different folder.

  4. Optional (Shape Auto-Encoder Pre-training)

We provide a pretrained model for the shape auto-encoder to be used for data collection and inference. Although our codebase doesn't require separately training the shape auto-encoder, if you would like to do so, we provide additional scripts under external/shape_pretraining.

šŸ“ FAQ

1. I am not getting good performance on my custom camera images, e.g. RealSense, OAK-D or others.

2. How to generate good zero-shot results on HSR robot camera:

3. I am getting no cuda GPUs available while running colab.

Make sure that you have enabled the GPU under Runtime -> Change runtime type!

4. I am getting RuntimeError: received 0 items of ancdata
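This error typically means the DataLoader workers ran out of open file descriptors. One common general-purpose workaround (not something this repo ships) is to raise the process's soft file-descriptor limit early in your script:

```python
import resource

# Current limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Target a higher soft limit, but never exceed the hard limit.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)

# Raising the soft limit up to the hard limit needs no special privileges.
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))
print("open-file soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

Alternatively, calling torch.multiprocessing.set_sharing_strategy('file_system') switches PyTorch's tensor-sharing strategy away from file descriptors, which also avoids this error.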

5. I am getting RuntimeError: CUDA error: no kernel image is available for execution on the device or You requested GPUs: [0] But your machine only has: []

  1. Installing CUDA 10.2 and running the same script in requirements.txt

  2. Installing the PyTorch build matching your CUDA version, i.e. changing these lines in requirements.txt:

torch==1.7.1
torchvision==0.8.2

6. I am seeing zero val metrics in wandb

Follow-up works

Acknowledgments

Licenses