ShAPO:tophat:: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization
<img src="demo/Pytorch_logo.png" width="10%">
This repository is the pytorch implementation of our paper: <a href="https://www.tri.global/" target="_blank"> <img align="right" src="demo/tri-logo.png" width="25%"/> </a>
ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization<br> Muhammad Zubair Irshad, Sergey Zakharov, Rares Ambrus, Thomas Kollar, Zsolt Kira, Adrien Gaidon <br> European Conference on Computer Vision (ECCV), 2022<br>
[Project Page] [arXiv] [PDF] [Video] [Poster]
Previous ICRA'22 work:
CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation<br> Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone, Zsolt Kira <br> International Conference on Robotics and Automation (ICRA), 2022<br>
[Project Page] [arXiv] [PDF] [Video] [Poster]
<p align="center"> <img src="demo/mesh_models.png" width="100%"> </p> <p align="center"> <img src="demo/architecture.jpg" width="100%"> </p>

Citation
If you find this repository useful, please consider citing:
@inproceedings{irshad2022shapo,
title = {ShAPO: Implicit Representations for Multi-Object Shape Appearance and Pose Optimization},
author = {Muhammad Zubair Irshad and Sergey Zakharov and Rares Ambrus and Thomas Kollar and Zsolt Kira and Adrien Gaidon},
journal = {European Conference on Computer Vision (ECCV)},
year = {2022}
}
@inproceedings{irshad2022centersnap,
title = {CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation},
author = {Muhammad Zubair Irshad and Thomas Kollar and Michael Laskey and Kevin Stone and Zsolt Kira},
journal = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2022}
}
Contents
🤖 Google Colab
If you want to experiment with ShAPO, we have written a Colab. It's quite comprehensive and easy to set up. It goes through the following experiments / ShAPO properties:
- Single Shot inference
- Visualize peak and depth output
- Decode shape with predicted textures
- Project 3D Pointclouds and 3D bounding boxes on 2D image
- Shape, Appearance and Pose Optimization
- Core optimization loop
- Visualizing optimized 3D output (i.e. textured asset creation)
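One of the Colab steps above projects 3D pointclouds and 3D bounding boxes onto the 2D image; the projection itself is just the standard pinhole camera model. A minimal sketch (the intrinsics and points below are made-up illustrative values, not taken from this repo):

```python
import numpy as np

def project_points(points_3d, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates (pinhole model)."""
    # u = fx * X/Z + cx, v = fy * Y/Z + cy, via homogeneous multiply then divide by depth
    uv = (K @ points_3d.T).T          # Nx3 homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]     # perspective divide by Z

# Hypothetical intrinsics matrix (focal lengths and principal point are placeholders)
K = np.array([[577.5, 0.0, 319.5],
              [0.0, 577.5, 239.5],
              [0.0, 0.0, 1.0]])

pts = np.array([[0.0, 0.0, 1.0],      # on the optical axis -> lands on the principal point
                [0.1, -0.05, 0.8]])
print(project_points(pts, K))
```

The same call works for the 8 corners of a 3D bounding box; connecting the projected corners with line segments gives the overlays shown in the Colab.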
💻 Environment
Create a python 3.8 virtual environment and install requirements:
cd $ShAPO_Repo
conda create -y --prefix ./env python=3.8
conda activate ./env/
./env/bin/python -m pip install --upgrade pip
./env/bin/python -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
The code was built and tested on CUDA 10.2.
📊 Dataset
Download camera_train, camera_val, real_train, real_test, ground-truth annotations, camera_composed_depth, mesh models and eval_results provided by NOCS, as well as the NOCS preprocessed data.<br/> Also download sdf_rgb_pretrained_weights. Unzip and organize these files in $ShAPO_Repo/data as follows:
data
├── CAMERA
│   ├── train
│   └── val
├── Real
│   ├── train
│   └── test
├── camera_full_depths
│   ├── train
│   └── val
├── gts
│   ├── val
│   └── real_test
├── results
│   ├── camera
│   ├── mrcnn_results
│   ├── nocs_results
│   └── real
├── sdf_rgb_pretrained
│   ├── LatentCodes
│   ├── Reconstructions
│   ├── ModelParameters
│   ├── OptimizerParameters
│   └── rgb_net_weights
└── obj_models
    ├── train
    ├── val
    ├── real_train
    ├── real_test
    ├── camera_train.pkl
    ├── camera_val.pkl
    ├── real_train.pkl
    ├── real_test.pkl
    └── mug_meta.pkl
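Because the downstream scripts fail with confusing errors when a folder is missing, it can help to sanity-check the layout before running anything. A small sketch that mirrors the tree above (`check_layout` is a hypothetical helper, not part of this repo):

```python
from pathlib import Path

# Sub-paths expected under $ShAPO_Repo/data, taken from the tree above.
EXPECTED = [
    "CAMERA/train", "CAMERA/val",
    "Real/train", "Real/test",
    "camera_full_depths/train", "camera_full_depths/val",
    "gts/val", "gts/real_test",
    "results/camera", "results/mrcnn_results", "results/nocs_results", "results/real",
    "sdf_rgb_pretrained/LatentCodes", "sdf_rgb_pretrained/Reconstructions",
    "sdf_rgb_pretrained/ModelParameters", "sdf_rgb_pretrained/OptimizerParameters",
    "sdf_rgb_pretrained/rgb_net_weights",
    "obj_models",
]

def check_layout(root):
    """Return the expected sub-paths that are missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

missing = check_layout("data")
if missing:
    print("missing under data/:", *missing, sep="\n  ")
```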
Create image lists
./runner.sh prepare_data/generate_training_data.py --data_dir /home/ubuntu/shapo/data/nocs_data/
Now run the distributed script to collect data locally in a few hours. The data will be saved under data/NOCS_data.
Note: The script is multi-GPU and runs 8 workers per GPU on a 16GB GPU. Change the worker_per_gpu variable depending on your GPU memory.
python prepare_data/distributed_generate_data.py --data_dir /home/ubuntu/shapoplusplus/data/nocs_data --type camera_train
--type: choose from 'camera_train', 'camera_val', 'real_train', 'real_val'
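Since --type takes a single split, processing the whole dataset means invoking the script once per split. A small wrapper sketch (the data path is a placeholder and the command simply mirrors the line above; `build_cmd` is a hypothetical helper):

```python
import subprocess
import sys

SPLITS = ["camera_train", "camera_val", "real_train", "real_val"]

def build_cmd(data_dir, split):
    """Assemble one per-split invocation of the generation script shown above."""
    return [sys.executable, "prepare_data/distributed_generate_data.py",
            "--data_dir", data_dir, "--type", split]

if __name__ == "__main__":
    for split in SPLITS:
        cmd = build_cmd("/path/to/data/nocs_data", split)
        print("would run:", " ".join(cmd))   # dry run
        # subprocess.run(cmd, check=True)    # uncomment to actually launch each split
```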
✨ Training and Inference
ShAPO is a two-stage approach. First, a single-shot network predicts 3D shape, pose and size codes along with segmentation masks in a per-pixel manner. Second, the shape, pose and size codes are jointly optimized at test time given a single-view RGB-D observation of a new instance.
<p align="center"> <img src="demo/ShAPO_teaser.gif" width="100%"> </p>

- Train on NOCS Synthetic (requires 13GB GPU memory):
./runner.sh net_train.py @configs/net_config.txt
Note that runner.sh is equivalent to using python to run the script; it additionally sets up the PYTHONPATH and the ShAPO environment path automatically. Note also that this part of the code is similar to CenterSnap, except that we predict implicit shapes as an SDF MLP instead of pointclouds, and additionally predict an appearance embedding and object masks in this stage.
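For intuition on the "implicit shapes as SDF MLP" part: a DeepSDF-style decoder is just an MLP that maps a shape latent code plus a 3D query point to a signed distance. A toy numpy sketch with random weights (layer sizes and weights are illustrative only, not the trained decoder from this repo):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, HIDDEN = 64, 128   # illustrative sizes, not the paper's

# Random weights stand in for a trained SDF decoder.
W1 = rng.standard_normal((LATENT_DIM + 3, HIDDEN)) * 0.05
W2 = rng.standard_normal((HIDDEN, 1)) * 0.05

def sdf_decoder(latent, xyz):
    """Map (one latent shape code, Nx3 query points) -> N signed distances."""
    # Concatenate the shared latent code with each 3D query point.
    x = np.concatenate([np.tile(latent, (len(xyz), 1)), xyz], axis=1)
    h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
    return (h @ W2).ravel()            # one scalar SDF value per query point

z = rng.standard_normal(LATENT_DIM)            # one object's shape code
queries = rng.uniform(-1, 1, size=(5, 3))      # 3D points in the unit cube
print(sdf_decoder(z, queries).shape)
```

Querying such a decoder on a dense grid and extracting the zero level set is what turns the predicted code into an explicit mesh.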
- Finetune on NOCS Real Train (note that good results can be obtained after finetuning on the Real train set for only a few epochs, i.e. 1-5):
./runner.sh net_train.py @configs/net_config_real_resume.txt --checkpoint /path/to/best/checkpoint
- Inference on a NOCS Real Test Subset
Download a small Real test subset from here, our shape and texture decoder pretrained checkpoints from here, and the ShAPO checkpoints pretrained on the real dataset here. Unzip and organize these files in $ShAPO_Repo/data as follows:
test_data
├── Real
│   └── test
ckpts
└── sdf_rgb_pretrained
    ├── LatentCodes
    ├── Reconstructions
    ├── ModelParameters
    ├── OptimizerParameters
    └── rgb_net_weights
Now run the inference script to visualize the single-shot predictions as follows:
./runner.sh inference/inference_real.py @configs/net_config.txt --test_data_dir path_to_nocs_test_subset --checkpoint checkpoint_path_here
You should see the visualizations saved in results/ShAPO_real. Change the --output_path in *config.txt to save them to a different folder.
- Optimization
This is the core optimization script to update the latent shape and appearance codes, along with the 6D pose and size, to better fit the unseen single-view RGB-D observation. For a quick run of the core optimization loop along with visualization, see this notebook here:
./runner.sh opt/optimize.py @configs/net_config.txt --data_dir /path/to/test_data_dir/ --checkpoint checkpoint_path_here
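The test-time optimization above boils down to gradient descent on the latent codes (and pose) against a reconstruction loss. A toy sketch with a linear stand-in "decoder", where the gradient of the squared error is available in closed form (everything here is illustrative, not the repo's actual decoder or loss):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 8))      # toy linear "decoder": latent code -> observation
z_true = rng.standard_normal(8)
obs = A @ z_true                       # the (noise-free) single-view "observation"

z = np.zeros(8)                        # start from a neutral latent code
lr = 0.005
losses = []
for _ in range(400):
    residual = A @ z - obs             # decode current code, compare to observation
    losses.append(float(residual @ residual))
    z -= lr * 2.0 * A.T @ residual     # gradient of the squared error w.r.t. z

print(f"loss: {losses[0]:.2f} -> {losses[-1]:.2e}")
```

In ShAPO the decoder is the SDF/appearance network and the gradient comes from autodiff, but the loop structure is the same: decode, compare to the observation, step the codes.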
📝 FAQ
Please see FAQs from CenterSnap here
Acknowledgments
- This code is built upon the implementation from CenterSnap
Related Work
<p align="center"> <img src="demo/reconstruction.gif" width="100%"> </p>

Licenses
- This repository is released under the CC BY-NC 4.0 license.