# 3D-CODED [Project Page] [Paper] [Talk] + Learning Elementary Structure [Project Page] [Paper] [Code]
3D-CODED: 3D Correspondences by Deep Deformation <br>Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry <br> In ECCV, 2018.
Learning elementary structures for 3D shape generation and matching <br>Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry <br> In NeurIPS, 2019. Official Code
## A note on data

Data download should be automatic. However, due to Google Drive traffic caps, you may have to download it manually. If you run into an error during the data download, refer to https://github.com/ThibaultGROUEIX/AtlasNet/issues/61.
You can manually download the data from these sources:
- Google drive datas_surreal_test.pth: https://drive.google.com/file/d/1VGax9j64AvCVORtiQ3ZSPecI0bfZrEVe/view?usp=sharing
- Google drive datas_surreal_train.pth: https://drive.google.com/file/d/1HVReM43YtJqhGfbmE58dc1-edI_oz9YG/view?usp=sharing
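If you download the files manually, you can sanity-check them by loading them with `torch.load` (assuming, as the filenames suggest, that they go under `data/`; adjust the paths if your setup differs):

```python
import torch

# Quick integrity check for the manually downloaded files.
# The `data/` destination is an assumption based on the file names.
for name in ["data/datas_surreal_train.pth", "data/datas_surreal_test.pth"]:
    blob = torch.load(name, map_location="cpu")
    print(name, type(blob))
```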
<details open><summary>Learned templates</summary>
<details><summary>FAUST results</summary>

Method | L2 train (SURREAL) | L2 val (SURREAL) | FAUST intra | FAUST inter |
---|---|---|---|---|
3D-CODED | 1.098 | 1.315 | 1.747 | 2.641 |
Points Translation 3D | 9.980 | 1.263 | 1.626 | 2.714 |
Patch Deformation 3D | 1.028 | 1.436 | 1.742 | 2.578 |
Points Translation + Patch Deformation 3D | 0.969 | 1.173 | 1.676 | 2.779 |
Points Translation 2D | 1.09 | 1.54 | 2.054 | 3.005 |
Patch Deformation 2D | 6.354 | 6.767 | 4.46 | 5.420 |
Points Translation 10D | 0.906 | 1.064 | 1.799 | 2.707 |
Patch Deformation 10D | 0.952 | 1.183 | 1.683 | 2.83 |
<img src="README/mesh25.ply.gif" style="zoom:80%" /><img src="README/25RecBestRotReg.ply.gif" style="zoom:80%" />
<img src="README/mesh8.ply.gif" style="zoom:80%" /><img src="README/8RecBestRotReg.ply.gif" style="zoom:80%" />
</details></details>

## Install :construction_worker: [Pytorch, Conda]
This implementation uses PyTorch.

```shell
git clone https://github.com/ThibaultGROUEIX/3D-CODED.git ## Download the repo
cd 3D-CODED; git submodule update --init
conda env create -f 3D-CODED-ENV.yml ## Create python env
source activate pytorch-3D-CODED
pip install http://imagine.enpc.fr/~langloip/data/pymesh2-0.2.1-cp37-cp37m-linux_x86_64.whl
cd extension; python setup.py install; cd ..
```
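A quick sanity check of the environment (plain PyTorch/PyMesh calls, nothing repo-specific):

```python
import torch
import pymesh  # module name provided by the pymesh2 wheel installed above

print(torch.__version__)          # PyTorch version inside pytorch-3D-CODED
print(torch.cuda.is_available())  # True if the CUDA setup is usable
```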
## Demo :train2: and inference on trained models

```shell
python inference/correspondences.py --dir_name learning_elementary_structure_trained_models/1patch_deformation
```
This script takes two meshes from `data` as input, computes correspondences, and saves them in `results`. Reconstructions are saved in `dir_name`.
<details><summary>Parameters</summary>

```python
# Key parameters
'--dir_name', type=str, default="", help='name of the directory containing the trained model'
'--inputA', type=str, default="data/example_0.ply", help='your path to mesh 0'
'--inputB', type=str, default="data/example_1.ply", help='your path to mesh 1'
# Secondary parameters
'--HR', type=int, default=1, help='use the high-resolution template for better precision in the nearest-neighbor step'
'--reg_num_steps', type=int, default=3000, help='number of epochs to train for during the regression step'
'--num_points', type=int, default=6890, help='number of points fed to PointNet'
'--num_angles', type=int, default=100, help='number of angles in the search for the optimal reconstruction. Set to 1 if your meshes already face the canonical direction, as in data/example_1.ply'
'--env', type=str, default="CODED", help='visdom environment'
'--clean', type=int, default=0, help='if 1, remove points that do not belong to any edge'
'--scale', type=int, default=0, help='if 1, scale the input mesh to have the same volume as the template'
'--project_on_target', type=int, default=0, help='if 1, project the predicted correspondence points on the target mesh'
'--randomize', type=int, default=0, help='if 1, randomize the input points'
'--LR_input', type=int, default=1, help='use the low-resolution input'
```
</details>
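For instance, a run on the bundled example meshes with an explicit model directory, using only the flags documented above (a sketch; swap in your own paths as needed):

```python
import subprocess

# Same call as the demo above, spelled out with the documented flags.
# Add e.g. "--num_angles", "1" if your meshes already face the canonical direction.
subprocess.run([
    "python", "inference/correspondences.py",
    "--dir_name", "learning_elementary_structure_trained_models/1patch_deformation",
    "--inputA", "data/example_0.ply",
    "--inputB", "data/example_1.ply",
], check=True)
```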
<details><summary>Results</summary>

- Initial guesses for example_0 and example_1:

<img src="README/example_0InitialGuess.ply.gif" style="zoom:80%" /><img src="README/example_1InitialGuess.ply.gif" style="zoom:80%" />

- Final reconstructions for example_0 and example_1:

<img src="README/example_0FinalReconstruction.ply.gif" style="zoom:80%" /><img src="README/example_1FinalReconstruction.ply.gif" style="zoom:80%" />
</details>
<details><summary>On your own meshes</summary>

You need to make sure your meshes are preprocessed correctly (a sketch of these checks follows the list):

- The meshes are loaded with Trimesh, which should support many formats, but only `.ply` files have been tested. Good converters include Assimp and Pymesh.
- The trunk axis is the Y axis (visualize your mesh against the meshes in `data` to make sure they are normalized the same way).
- The scale should be about 1.7 for a standing human (i.e. the unit of the point cloud is the meter). You can scale meshes automatically with the flag `--scale 1`.
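Below is a minimal sketch of these checks with Trimesh (illustrative only, not part of the repo; the centering step is my assumption, the Y trunk axis and ~1.7 scale follow the bullets above):

```python
import trimesh

def normalize_for_3dcoded(path_in, path_out, target_height=1.7):
    """Illustrative preprocessing: Y as the trunk axis, ~1.7 units tall."""
    mesh = trimesh.load(path_in, force='mesh')
    mesh.vertices -= mesh.vertices.mean(axis=0)       # center at the origin (my assumption)
    mesh.vertices *= target_height / mesh.extents[1]  # Y extent ~ 1.7 (meters)
    mesh.export(path_out)                             # export as .ply, the tested format

normalize_for_3dcoded("my_scan.obj", "my_scan_normalized.ply")
```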
Failure modes :warning::

- Sometimes the reconstruction is flipped, which breaks the correspondences. In the easiest case, where your meshes are registered in the same orientation, you can simply fix the angle in `inference/correspondences.py`, line 240, to avoid the flipping problem. Also note from that line that the angle search only covers [-90°, +90°].
- Check for lonely outliers that break the PointNet encoder. You can try to remove them with the `--clean` flag.
- If you want to use `inference/correspondences.py` to process a whole dataset, like the FAUST test set, you can use `./inference/script.py` for the FAUST inter challenge (a batch-processing sketch follows below). Good luck :-)
</details>
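If `./inference/script.py` does not fit your dataset layout, a hedged alternative is a small driver that loops over mesh pairs and calls the script with the documented `--inputA`/`--inputB` flags (the pairing logic here is only an example):

```python
import subprocess
from pathlib import Path

# Example only: process consecutive pairs of .ply files from a folder.
meshes = sorted(Path("my_dataset").glob("*.ply"))
for mesh_a, mesh_b in zip(meshes[0::2], meshes[1::2]):
    subprocess.run([
        "python", "inference/correspondences.py",
        "--inputA", str(mesh_a),
        "--inputB", str(mesh_b),
    ], check=True)
```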
## Training

```shell
python ./training/train.py
```
<details><summary>Trainer's options</summary>

```python
'--point_translation', type=int, default=0, help='if 1, learn a per-point translation of the template'
'--dim_template', type=int, default=3, help='dimension of the template'
'--patch_deformation', type=int, default=0, help='if 1, learn an MLP deformation of the template'
'--dim_out_patch', type=int, default=3, help='output dimension of the patch deformation'
'--start_from', type=str, default="TEMPLATE", choices=["TEMPLATE", "SOUP", "TRAINDATA"], help='initialization of the learned template'
```
</details>
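To make the two template parameterizations concrete, here is an illustrative PyTorch sketch (not the repo's actual classes) of what `--point_translation` and `--patch_deformation` learn, following the Learning Elementary Structures paper:

```python
import torch
import torch.nn as nn

class PointTranslation(nn.Module):
    """--point_translation 1: learn a free offset for every template point."""
    def __init__(self, template_points):              # (N, 3) fixed template
        super().__init__()
        self.register_buffer("template", template_points)
        self.offsets = nn.Parameter(torch.zeros_like(template_points))

    def forward(self):
        return self.template + self.offsets           # the learned template

class PatchDeformation(nn.Module):
    """--patch_deformation 1: learn an MLP that deforms the template,
    with --dim_out_patch controlling the output dimension."""
    def __init__(self, dim_in=3, dim_out=3, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_out),
        )

    def forward(self, template_points):               # (N, dim_in)
        return self.mlp(template_points)
```

The 2D, 3D, and 10D variants in the results table above presumably come from setting these dimension flags accordingly.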
Monitor your training on http://localhost:8888/ (Visdom).
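The `--env` flag above names the Visdom environment. A minimal connection check, assuming a Visdom server is running on port 8888 as in the URL above (Visdom's stock default is 8097, so start it with `-port 8888` if needed):

```python
import visdom

# Assumes a server started with: python -m visdom.server -port 8888
vis = visdom.Visdom(port=8888, env="CODED")
print(vis.check_connection())  # True when the server is reachable
```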
<details><summary>Note on data preprocessing</summary>

The generation process for the dataset is quite heavy, so we provide our processed data. Should you want to reproduce the preprocessing, go to `data/README.md`. Brace yourself :-)
</details>
## Reproduce the paper :train2:

```shell
python script/launch.py --mode training  # Launch 4 trainings with different parameters.
python script/launch.py --mode inference # Eval the 4 pre-trained models.
```
## Citing this work

If you find this work useful in your research, please consider citing:
```bibtex
@inproceedings{deprelle2019learning,
  title={Learning elementary structures for 3D shape generation and matching},
  author={Deprelle, Theo and Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G and Russell, Bryan C and Aubry, Mathieu},
  booktitle={NeurIPS},
  year={2019}
}

@inproceedings{groueix2018b,
  title={3D-CODED: 3D Correspondences by Deep Deformation},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={ECCV},
  year={2018}
}
```
## License
## Cool Contributions

- Zhongshi Jiang applied the trained model to a monster model :japanese_ogre: (left: original, right: reconstruction).
## Acknowledgements

- The code for the Chamfer loss was adapted from Fei Xia's repo: PointGAN. Many thanks to him!
- The code for the Laplacian regularization comes from Angjoo Kanazawa and Shubham Tulsiani. This was so helpful, thanks!
- Part of the SMPL parameters used in the training data comes from Gül Varol's repo: https://github.com/gulvarol/surreal. But most of all, thanks for all the advice :)
- The FAUST team, for their prompt reaction in resolving a benchmark issue the week of the deadline, especially Federica Bogo and Jonathan Williams.
- The efficient code to compute geodesic errors comes from https://github.com/zorah/KernelMatching. Thanks!
- The SMAL and SCAPE teams, for their help in generating the training data.
- The Deep Functional Maps authors, for their fast reply the week of the rebuttal! Many thanks.
- Hiroharu Kato, for his very clean neural renderer code, which I used for the gifs :-)
- The PyTorch developers, for making DL code so easy.
- This work was funded by the Ecole Doctorale MSTIC. Thanks!
- And last but not least, my great co-authors: Theo Deprelle, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry.