Home

Awesome

3D-CODED [Project Page] [Paper] [Talk] + Learning Elementary Structure [Project Page] [Paper] [Code]

3D-CODED: 3D Correspondences by Deep Deformation <br>Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry <br> In ECCV, 2018.

Learning elementary structures for 3D shape generation and matching <br>Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry <br> In NeurIPS, 2019. Official Code

A note on data.

Data download should be automatic. However, due to the new Google Drive traffic caps, you may have to download the data manually. If you run into an error during the download, refer to https://github.com/ThibaultGROUEIX/AtlasNet/issues/61.

You can manually download the data from these sources:


<details open><summary>Learned templates</summary>

(figure: learned templates)

<details><summary>Faust results</summary>

| Method | L2 Train SURREAL | L2 Val SURREAL | Faust Intra results | Faust Inter results |
| --- | --- | --- | --- | --- |
| 3D-CODED | 1.098 | 1.315 | 1.747 | 2.641 |
| Points Translation 3D | 9.980 | 1.263 | 1.626 | 2.714 |
| Patch Deformation 3D | 1.028 | 1.436 | 1.742 | 2.578 |
| Points Translation + Patch Deformation 3D | 0.969 | 1.173 | 1.676 | 2.779 |
| Points Translation 2D | 1.09 | 1.54 | 2.054 | 3.005 |
| Patch Deformation 2D | 6.354 | 6.767 | 4.46 | 5.420 |
| Points Translation 10D | 0.906 | 1.064 | 1.799 | 2.707 |
| Patch Deformation 10D | 0.952 | 1.183 | 1.683 | 2.83 |

</details> </details> <details><summary>Sample results</summary> Input: 2 meshes<br> Task: put them in point-wise correspondence (suggested by color).

<img src="README/mesh25.ply.gif" style="zoom:80%" /><img src="README/25RecBestRotReg.ply.gif" style="zoom:80%" />

<img src="README/mesh8.ply.gif" style="zoom:80%" /><img src="README/8RecBestRotReg.ply.gif" style="zoom:80%" />

</details>

Install :construction_worker: [PyTorch, Conda]

This implementation uses PyTorch.

```shell
git clone https://github.com/ThibaultGROUEIX/3D-CODED.git ## Download the repo
cd 3D-CODED; git submodule update --init
conda env create -f 3D-CODED-ENV.yml ## Create python env
source activate pytorch-3D-CODED
pip install http://imagine.enpc.fr/~langloip/data/pymesh2-0.2.1-cp37-cp37m-linux_x86_64.whl
cd extension; python setup.py install; cd ..
```

Demo :train2: and Inference with Trained Models

```shell
python inference/correspondences.py --dir_name learning_elementary_structure_trained_models/1patch_deformation
```

This script takes two meshes from data as input and computes correspondences in results. Reconstructions are saved in dir_name.
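The final matching relies on a nearest-neighbor step between the deformed template and the target mesh (see the `--HR` option below). A minimal NumPy sketch of that step, using toy point sets rather than the repo's actual meshes or implementation:

```python
import numpy as np

def nearest_neighbor_correspondences(deformed_template, target):
    """For each target vertex, return the index of the closest vertex
    on the deformed template (brute force, fine for toy sizes)."""
    # pairwise squared distances via broadcasting: (n_target, n_template)
    d2 = ((target[:, None, :] - deformed_template[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# toy example: a 4-point "template" and a permuted, slightly noisy "scan"
template = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
scan = template[[2, 0, 3, 1]] + 0.01
correspondences = nearest_neighbor_correspondences(template, scan)
print(correspondences)  # -> [2 0 3 1]
```

Because every target vertex is matched to a template vertex, two scans processed this way are automatically in point-wise correspondence through the shared template.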

<details><summary>Options (usually the default parameters are good)</summary>

```python
# Key parameters
'--dir_name', type=str, default="", help='dirname'
'--inputA', type=str, default="data/example_0.ply", help='your path to mesh 0'
'--inputB', type=str, default="data/example_1.ply", help='your path to mesh 1'

# Secondary parameters
'--HR', type=int, default=1, help='use the high-resolution template for better precision in the nearest-neighbor step'
'--reg_num_steps', type=int, default=3000, help='number of epochs to train for during the regression step'
'--num_points', type=int, default=6890, help='number of points fed to PointNet'
'--num_angles', type=int, default=100, help='number of angles in the search for the optimal reconstruction; set to 1 if your meshes already face the canonical direction, as in data/example_1.ply'
'--env', type=str, default="CODED", help='visdom environment'
'--clean', type=int, default=0, help="if 1, remove points that don't belong to any edge"
'--scale', type=int, default=0, help='if 1, scale the input mesh to have the same volume as the template'
'--project_on_target', type=int, default=0, help='if 1, project the predicted correspondence points onto the target mesh'
'--randomize', type=int, default=0, help='if 1, project the predicted correspondence points onto the target mesh'
'--LR_input', type=int, default=1, help='use low-resolution input'
```
</details> <details><summary>Results </summary>

<img src="README/example_0InitialGuess.ply.gif" style="zoom:80%" /><img src="README/example_1InitialGuess.ply.gif" style="zoom:80%" />

<img src="README/example_0FinalReconstruction.ply.gif" style="zoom:80%" /><img src="README/example_1FinalReconstruction.ply.gif" style="zoom:80%" />

</details> <details><summary>On your own meshes </summary>

You need to make sure your meshes are preprocessed correctly:

--> Failure mode instructions: :warning:

</details> <details><summary>FAUST </summary> </details>
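For your own meshes, the `--scale` and `--clean` options above hint at what the network expects: a roughly centered input at a scale comparable to the template. A minimal NumPy sketch of such a normalization; `preprocess_vertices` and the bounding-box-diagonal convention are illustrative assumptions, not the repo's actual preprocessing code:

```python
import numpy as np

def preprocess_vertices(vertices, target_diag=1.0):
    """Center a vertex array on its centroid and rescale it so that
    its bounding-box diagonal equals target_diag."""
    v = vertices - vertices.mean(axis=0)                  # center on the centroid
    diag = np.linalg.norm(v.max(axis=0) - v.min(axis=0))  # bounding-box diagonal
    return v * (target_diag / diag)

verts = np.array([[0.0, 0, 0], [2, 0, 0], [0, 2, 0]])
out = preprocess_vertices(verts)
```

Orientation matters too: if your meshes are not facing the canonical direction, leave `--num_angles` at its default so the script searches over rotations.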

Training

```shell
python ./training/train.py
```
<details><summary> Trainer's Options</summary>

```python
'--point_translation', type=int, default=0, help='point_translation'
'--dim_template', type=int, default=3, help='dim_template'
'--patch_deformation', type=int, default=0, help='patch_deformation'
'--dim_out_patch', type=int, default=3, help='dim_out_patch'
'--start_from', type=str, default="TEMPLATE", choices=["TEMPLATE", "SOUP", "TRAINDATA"], help='start_from'
```
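The `--point_translation` and `--patch_deformation` flags correspond to the two template-learning strategies compared in the results table above. A toy NumPy illustration of the difference; the arrays and the single linear layer here are stand-ins, not the repo's actual PyTorch modules:

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.normal(size=(100, 3))   # stand-in for the template vertices

# --point_translation 1: learn one free 3D offset per template point
offsets = np.zeros_like(template)      # learnable parameters in the real model
translated_template = template + offsets

# --patch_deformation 1: learn a function mapping template coords to new coords
# (a single linear layer stands in for the learned deformation network)
W = 0.01 * rng.normal(size=(3, 3))
deformed_template = template + template @ W
```

In both cases the result is a new template that the reconstruction network deforms toward each input shape.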
</details> <details><summary> Monitor your training on http://localhost:8888/</summary>

visdom

</details> <details><summary> Note on data preprocessing </summary>

The generation process for the dataset is quite heavy, so we provide our preprocessed data. Should you want to reproduce the preprocessing, see data/README.md. Brace yourself :-)

</details>

Reproduce the paper :train2:

```shell
python script/launch.py --mode training  # Launch 4 trainings with different parameters.
python script/launch.py --mode inference # Eval the 4 pre-trained models.
```

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{deprelle2019learning,
  title={Learning elementary structures for 3D shape generation and matching},
  author={Deprelle, Theo and Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan C. and Aubry, Mathieu},
  booktitle={NeurIPS},
  year={2019}
}

@inproceedings{groueix2018b,
  title={3D-CODED: 3D Correspondences by Deep Deformation},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={ECCV},
  year={2018}
}

License

MIT

Cool Contributions

visdom


Acknowledgement