PoseFromShape

(BMVC 2019) PyTorch implementation of the paper "Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects" [PDF] [Project webpage]

<p align="center"> <img src="https://github.com/YoungXIAO13/PoseFromShape/blob/master/img/teaser_test.png" width="400px" alt="teaser"> </p>

If our project is helpful for your research, please consider citing:

@INPROCEEDINGS{Xiao2019PoseFromShape,
    author    = {Yang Xiao and Xuchong Qiu and Pierre{-}Alain Langlois and Mathieu Aubry and Renaud Marlet},
    title     = {Pose from Shape: Deep Pose Estimation for Arbitrary {3D} Objects},
    booktitle = {British Machine Vision Conference (BMVC)},
    year      = {2019}}

Updates (July 2020)

The generated point clouds for Pascal3D and ObjectNet3D can be downloaded directly from this repo: see ./data/Pascal3D/pointcloud and ./data/ObjectNet3D/pointcloud

Table of Contents

Installation

Dependencies

The code runs on Linux with the following dependencies: Python 3.6, PyTorch 1.0.1, Python-Blender 2.77, meshlabserver

We recommend using a conda environment to install all dependencies and test the code.

## Download the repository
git clone 'https://github.com/YoungXIAO13/PoseFromShape'
cd PoseFromShape

## Create python env with relevant packages
conda create --name PoseFromShape --file auxiliary/spec-file.txt
source activate PoseFromShape
conda install -c conda-forge matplotlib

## Install blender as a python module
conda install auxiliary/python-blender-2.77-py36_0.tar.bz2
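
After installation, a quick way to verify that the two key packages import correctly (a minimal sketch; assumes the PoseFromShape env created above is active):

```python
# Sanity check: both PyTorch and the Blender Python module should import.
import torch
import bpy  # provided by the python-blender package installed above

print(torch.__version__)       # expect 1.0.1
print(bpy.app.version_string)  # expect a 2.77 version string
```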

Data

To download and prepare the datasets for training and testing (Pascal3D, ObjectNet3D, ShapeNetCore, SUN397, Pix3D, LineMod):

cd data
bash prepare_data.sh

To generate point clouds from the .obj files for Pascal3D and ObjectNet3D, see the data folder.
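
For illustration only, here is a minimal surface-sampling sketch using trimesh. This is an assumption on my part, not the repo's tooling: the actual pipeline in ./data relies on meshlabserver.

```python
# Illustrative only: uniformly sample an .obj surface into a point cloud.
# trimesh is NOT a dependency of this repo; see ./data for the real pipeline.
import numpy as np
import trimesh

mesh = trimesh.load("model.obj", force="mesh")
points, _ = trimesh.sample.sample_surface(mesh, count=2500)  # (2500, 3) array
np.save("pointcloud.npy", points.astype(np.float32))
```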

Pre-trained Models

To download the pretrained models (Pascal3D, ObjectNet3D, ShapeNetCore):

cd model
bash download_models.sh

Training

To train on the ObjectNet3D dataset with real images and coarse alignment:

bash run/train_ObjectNet3D.sh

To train on the Pascal3D dataset with real images and coarse alignment:

bash run/train_Pascal3D.sh

To train on the ShapeNetCore dataset with synthetic images and precise alignment:

bash run/train_ShapeNetCore.sh

Testing

While the network is trained on either real or synthetic images, all testing is done on real images.

ObjectNet3D

bash run/test_ObjectNet3D.sh

You should obtain the results reported in Table 1 of the paper (* indicates testing on novel categories):

| Method | Average | bed | bookcase | calculator | cellphone | computer | door | cabinet | guitar | iron | knife | microwave | pen | pot | rifle | shoe | slipper | stove | toilet | tub | wheelchair |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| StarMap | 56 | 73 | 78 | 91 | 57 | 82 | - | 84 | 73 | 3 | 18 | 94 | 13 | 56 | 4 | - | 12 | 87 | 71 | 51 | 60 |
| StarMap* | 42 | 37 | 69 | 19 | 52 | 73 | - | 78 | 61 | 2 | 9 | 88 | 12 | 51 | 0 | - | 11 | 82 | 41 | 49 | 14 |
| Ours(MV) | 73 | 82 | 90 | 95 | 65 | 93 | 97 | 89 | 75 | 52 | 32 | 95 | 54 | 82 | 45 | 67 | 46 | 95 | 82 | 67 | 66 |
| Ours(MV)* | 62 | 65 | 90 | 88 | 65 | 84 | 93 | 84 | 67 | 2 | 29 | 94 | 47 | 79 | 15 | 54 | 32 | 89 | 61 | 68 | 39 |
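
For reference, the accuracies in this table and the ones below follow the standard viewpoint-estimation protocol: accuracy is the fraction of test images whose geodesic rotation error is below pi/6, and MedErr is the median of that error in degrees. A minimal sketch of these metrics (a common formulation, not this repo's evaluation code):

```python
# Standard viewpoint metrics (a sketch; not this repo's evaluation code).
import numpy as np

def geodesic_error(R_gt, R_pred):
    """Rotation angle of R_gt^T R_pred, i.e. ||log(R_gt^T R_pred)||_F / sqrt(2)."""
    R = R_gt.T @ R_pred
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos)  # radians

def accuracy_and_mederr(gt_rotations, pred_rotations):
    errors = np.array([geodesic_error(g, p)
                       for g, p in zip(gt_rotations, pred_rotations)])
    acc = float(np.mean(errors < np.pi / 6))       # Acc@pi/6
    mederr = float(np.degrees(np.median(errors)))  # MedErr, in degrees
    return acc, mederr
```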

Pascal3D+

To test on the Pascal3D dataset with real images:

bash run/test_Pascal3D.sh

You should obtain the results reported in Table 2 of the paper (* indicates a category-agnostic method):

| Method | Accuracy | Median Error |
|---|---|---|
| Keypoints and Viewpoints | 80.75 | 13.6 |
| Render for CNN | 82.00 | 11.7 |
| Mousavian | 81.03 | 11.1 |
| Grabner | 83.92 | 10.9 |
| Grabner* | 81.33 | 11.5 |
| StarMap* | 81.67 | 12.8 |
| Ours(MV)* | 82.66 | 10.0 |

Pix3D

bash run/test_Pix3D.sh

You should obtain the results reported in Table 3 of the paper (Accuracy / MedErr):

| Method | Bed | Chair | Desk |
|---|---|---|---|
| Georgakis | 50.8 / 28.6 | 31.2 / 57.3 | 34.9 / 51.6 |
| Ours(MV) | 59.8 / 20.0 | 52.4 / 26.6 | 56.6 / 26.6 |

Demo

To test on another 3D model, first generate multi-views from the .obj file by running python ./data/render_utils.py with the correct paths, and save the testing images picturing this model in a folder.
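
For orientation, a minimal sketch of what multi-view rendering with the Blender Python module looks like. The paths, camera radius, and view count here are illustrative assumptions; ./data/render_utils.py is the script actually used, and its camera setup may differ.

```python
# Illustrative multi-view rendering with bpy (Blender 2.77 as a Python module);
# the repo's actual script is ./data/render_utils.py, which may differ.
import math
import bpy

bpy.ops.import_scene.obj(filepath="model.obj")  # load the shape to render

scene = bpy.context.scene
cam = scene.camera
for i in range(12):  # 12 evenly spaced azimuths around the object
    theta = 2.0 * math.pi * i / 12
    cam.location = (3.0 * math.cos(theta), 3.0 * math.sin(theta), 1.0)
    # aim the camera at the origin
    look_dir = -cam.location
    cam.rotation_euler = look_dir.to_track_quat('-Z', 'Y').to_euler()
    scene.render.filepath = "renders/view_%02d.png" % i
    bpy.ops.render.render(write_still=True)
```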

Then run bash ./demo/inference.sh with the right model_path, image_path, render_path, and obj_path to get predictions and images rendered under the predicted pose.

Some examples of applying our model, trained on ObjectNet3D objects with keypoint annotations, to armadillo images are shown below:

(Five armadillo input images and the corresponding renderings under the predicted poses.)

Further Reading

License

MIT