NPMs: Neural Parametric Models
Project Page | Paper | ArXiv | Video
<p align="center"> <img width="100%" src="resources/teaser.gif"/> </p>

NPMs: Neural Parametric Models for 3D Deformable Shapes
Pablo Palafox, Aljaž Božič, Justus Thies, Matthias Nießner, Angela Dai
Citation
@article{palafox2021npms,
author = {Palafox, Pablo and Bo{\v{z}}i{\v{c}}, Alja{\v{z}} and Thies, Justus and Nie{\ss}ner, Matthias and Dai, Angela},
title = {NPMs: Neural Parametric Models for 3D Deformable Shapes},
journal = {arXiv preprint arXiv:2104.00702},
year = {2021},
}
Install
You can either pull our docker image, build it yourself with the provided Dockerfile, or build the project from source.
Pull Docker Image
docker pull ppalafox/npms:latest
You can now run an interactive container of the image you just pulled (before that, navigate to npms):
cd npms
docker run --ipc=host -it --name npms --gpus=all -v $PWD:/app -v /cluster:/cluster ppalafox/npms:latest bash
Build Docker Image
Run the following from within the root of this project (where the Dockerfile lives) to build a docker image with all required dependencies.
docker build . -t npms
You can now run an interactive container of the image you just built (before that, navigate to npms):
cd npms
docker run --ipc=host -it --name npms --gpus=all -v $PWD:/app -v /cluster:/cluster npms:latest bash
Of course, you'll have to specify your own paths to the volumes you'd like to mount using the -v flag.
Build from source
A Linux system with CUDA is required for the project.
The npms_env.yml file contains (hopefully) all necessary Python dependencies for the project. To conveniently install them automatically with Anaconda, you can use:
conda env create -f npms_env.yml
conda activate npms
Other dependencies
We need some other dependencies. Starting from the root folder of this project, we'll do the following...
- Compile the csrc folder:
cd external/csrc
python setup.py install
cd ..
- We need some libraries from IFNet. In particular, we need libmesh and libvoxelize from that repo. They are already placed within external (check the corresponding LICENSE). To build these, proceed as follows:
cd libmesh/
python setup.py build_ext --inplace
cd ../libvoxelize/
python setup.py build_ext --inplace
cd ..
- Install gaps. For this, we are using a couple of scripts from LDIF, namely external/build_gaps.sh and external/gaps_is_installed.sh. We also need the folder external/qview, which also belongs to the LDIF project (it's already placed within our external folder). To build gaps, make sure you're within external and run:
chmod +x build_gaps.sh
./build_gaps.sh
You can make sure it's built properly by running:
chmod +x gaps_is_installed.sh
./gaps_is_installed.sh
You should get a "Ready to go!" as output.
- We already have npms/data_processing/implicit_waterproofing.py, which belongs to the IFNet project, so nothing to do here (the same IFNet LICENSE applies to this file).
- We also need some helper functions from LDIF, namely base_util.py and file_util.py. We have placed them already under npms/utils.
You can now navigate back to the root folder: cd ..
Data Preparation
As an example, let's have a quick overview of what the process would look like in order to generate training data from the CAPE dataset.
Download their dataset by registering and accepting their terms. Once you've followed their steps to download the dataset, you should have a folder named cape_release.
In npms/configs_train/config_train_HUMAN.py, set the variable ROOT to point to the folder where you want your data to live. Then:
cd <ROOT>
mkdir data
And place cape_release within data.
Download SMPL models
Register here to get access to SMPL body models. Then, under the downloads tab, download the models. Refer to https://github.com/vchoutas/smplx#model-loading for more details.
From within the root folder of this project, run:
cd npms/body_model
mkdir smpl
And place the .pkl files you just downloaded under npms/body_model/smpl. Now change their names, such that you have something like:
body_model
│── smpl
│   │── smpl
│   │   └── SMPL_FEMALE.pkl
│   │   └── SMPL_MALE.pkl
│   │   └── SMPL_NEUTRAL.pkl
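If you want to double-check that the files ended up in the right place, a minimal, optional sanity check could look like the sketch below. This assumes you have the smplx package installed; it is not part of the repository's documented steps.

```python
# Minimal sanity check (assumes the `smplx` package is installed and the folder
# layout shown above). A forward pass with default parameters returns the
# template mesh vertices.
import smplx

body_model = smplx.create(
    model_path="npms/body_model/smpl",  # the folder containing the inner `smpl/` directory
    model_type="smpl",
    gender="neutral",
)
output = body_model(return_verts=True)
print(output.vertices.shape)  # expected: torch.Size([1, 6890, 3])
```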
Preprocess the raw CAPE
Now let's process the raw data in order to generate training samples for our NPM.
cd npms/data_processing
python prepare_cape_data.py
Then, we normalize the preprocessed dataset, such that the meshes reside within a bounding box with boundaries bbox_min=-0.5 and bbox_max=0.5.
# We're within npms/data_processing
python normalize_dataset.py
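For intuition, the normalization boils down to centering each mesh and scaling it uniformly into that box. Here is a rough, illustrative sketch of the idea using trimesh; the repository's normalize_dataset.py operates on the whole preprocessed dataset and may differ in details.

```python
# Illustrative sketch only (not the repo's normalize_dataset.py): center a mesh
# and scale it uniformly so it fits inside the [-0.5, 0.5]^3 bounding box.
import trimesh

mesh = trimesh.load("mesh.ply", process=False)

bbox_min, bbox_max = mesh.bounds
center = (bbox_min + bbox_max) / 2.0
scale = (bbox_max - bbox_min).max()  # longest side of the bounding box

mesh.apply_translation(-center)
mesh.apply_scale(1.0 / scale)

mesh.export("mesh_normalized.ply")
```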
At this point, we can generate training samples for both the shape and the pose MLP. An extra step would be required if our t-poses (<ROOT>/datasets/cape/a_t_pose/000000/mesh_normalized.ply) were not watertight: we'd need to run multiview_to_watertight_mesh.py. Since CAPE is already watertight, we don't need to worry about this.
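If you are working with your own data and want to verify whether a t-pose mesh is watertight before deciding whether multiview_to_watertight_mesh.py is needed, a quick check (using trimesh, purely as an illustration) is:

```python
# Quick watertightness check with trimesh (CAPE t-poses should report True).
import trimesh

mesh = trimesh.load("<ROOT>/datasets/cape/a_t_pose/000000/mesh_normalized.ply", process=False)
print(mesh.is_watertight)
```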
About labels.json and labels_tpose.json
One last thing before actually generating the samples is to create some "labels" files that specify the paths to the dataset we want to create. Under the folder ZSPLITS_HUMAN we have copied some examples.
Within it, you can find other folders containing datasets in the form of the paths to the actual data. For example, CAPE-SHAPE-TRAIN-35id, which in turn contains two files: labels_tpose and labels. They define datasets in a flexible way, by means of a list of dictionaries, where each dictionary holds the paths to a particular sample. You'll get a feeling for why we have a labels.json and a labels_tpose.json by running the following sections to generate data, as well as when you dive into actually training a new NPM from scratch.
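To make the idea concrete, here is a hypothetical sketch of what writing such a labels file could look like. The key names below are illustrative assumptions, not the repository's exact schema; inspect the files under ZSPLITS_HUMAN for the real format.

```python
# Hypothetical example of writing a labels.json: a list of dictionaries, one per
# sample. The field names here are illustrative; check ZSPLITS_HUMAN for the
# exact keys the training code expects.
import json

labels = [
    {
        "dataset": "cape",
        "identity_name": "00032",
        "animation_name": "shortlong_hips",
        "sample_id": "000001",
    },
    # ... one dictionary per sample
]

with open("labels.json", "w") as f:
    json.dump(labels, f, indent=4)
```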
Go ahead and copy the folder ZSPLITS_HUMAN into <ROOT>/datasets, where ROOT is a path to your datasets that you can specify in npms/configs_train/config_train_HUMAN.py. If you followed along until now, within <ROOT>/datasets you should already have the preprocessed <ROOT>/datasets/cape dataset.
# Assuming you're in the root folder of the project
cp -r ZSPLITS_HUMAN <ROOT>/datasets
Note: within data_scripts you can find helpful scripts to generate your own labels.json and labels_tpose.json from a dataset. Check out the npms/data_scripts/README.md for a brief overview of these scripts.
SDF samples
Generate SDF samples around our identities in their t-pose in order to train the shape latent space.
# We're within npms/data_processing
python sample_boundary_sdf_gaps.py
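The repository does this with gaps (hence the script name). Conceptually, the step boils down to sampling points close to the watertight, normalized t-pose surface and storing their signed distances. Below is an illustrative sketch of that idea using trimesh; it is not what the script actually runs, and the output format is an assumption.

```python
# Conceptual sketch of boundary SDF sampling (the actual script uses gaps):
# sample points near the surface of the normalized, watertight t-pose mesh and
# record their signed distances.
import numpy as np
import trimesh

mesh = trimesh.load("mesh_normalized.ply", process=False)

n_points, sigma = 100_000, 0.01
surface_points, _ = trimesh.sample.sample_surface(mesh, n_points)
points = surface_points + np.random.normal(scale=sigma, size=surface_points.shape)

# Note: trimesh's convention is positive inside the mesh, negative outside.
sdf = trimesh.proximity.signed_distance(mesh, points)

np.savez("sdf_samples.npz", points=points, sdf=sdf)
```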
Flow samples
Generate correspondences from an identity in its t-pose to its posed instances.
# We're within npms/data_processing
python sample_flow.py -sigma 0.01
python sample_flow.py -sigma 0.002
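Conceptually, flow samples exploit the fact that all meshes of one CAPE identity share the same topology: a point sampled on a t-pose triangle corresponds to the same barycentric location on the posed triangle. The sketch below illustrates this idea with trimesh; the file paths and the output format are placeholders, and the repository's sample_flow.py is the reference implementation.

```python
# Illustrative sketch of correspondence/flow sampling between a t-pose mesh and
# a posed mesh of the same identity (same topology). Paths and output format
# are placeholders.
import numpy as np
import trimesh

tpose = trimesh.load("a_t_pose/000000/mesh_normalized.ply", process=False)
posed = trimesh.load("some_pose/000050/mesh_normalized.ply", process=False)

n_points, sigma = 100_000, 0.01

# Sample points on the t-pose surface and remember which triangle they came from.
points_tpose, face_ids = trimesh.sample.sample_surface(tpose, n_points)

# Barycentric coordinates within the t-pose triangles...
bary = trimesh.triangles.points_to_barycentric(tpose.triangles[face_ids], points_tpose)

# ...re-expressed on the corresponding posed triangles give the target locations.
points_posed = (posed.triangles[face_ids] * bary[:, :, None]).sum(axis=1)

# Perturb the canonical samples with the given sigma so the flow is also
# supervised slightly off the surface.
points_noisy = points_tpose + np.random.normal(scale=sigma, size=points_tpose.shape)

np.savez("flow_samples.npz", points=points_noisy, flow=points_posed - points_tpose)
```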
We're done with generating data for CAPE! This was just an example using CAPE, but as you've seen, the only thing you need to have is a dataset of meshes:
- we need t-pose meshes for each identity in the dataset, and we can use multiview_to_watertight_mesh.py to make these t-pose meshes watertight, to then sample points and their SDF values.
- for a given identity, we need to have surface correspondences between the t-pose and the posed meshes (but note that these posed meshes don't need to be watertight).
Training an NPM
Shape Latent Space
Set only_shape=True in config_train_HUMAN.py. Then, from within the npms folder, start the training:
python train.py
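For orientation, training the shape latent space follows the auto-decoder idea (as in DeepSDF): one latent code per identity is optimized jointly with an MLP that maps a shape code and a 3D point to an SDF value. The sketch below is a highly simplified, hypothetical training step; the network, dimensions, losses and schedules are assumptions, and the real logic lives in train.py and the config.

```python
# Highly simplified auto-decoder sketch for the shape space; dimensions, network
# and loss weights are illustrative, not the repository's actual settings.
import torch

num_identities, code_dim = 100, 256

shape_codes = torch.nn.Embedding(num_identities, code_dim)
torch.nn.init.normal_(shape_codes.weight, std=0.01)

shape_mlp = torch.nn.Sequential(  # stand-in for the shape MLP
    torch.nn.Linear(code_dim + 3, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
)

optimizer = torch.optim.Adam(
    list(shape_mlp.parameters()) + list(shape_codes.parameters()), lr=5e-4
)

def shape_train_step(identity_ids, points, sdf_gt, lambda_reg=1e-4):
    # identity_ids: (B,) long tensor; points: (B, 3); sdf_gt: (B,)
    codes = shape_codes(identity_ids)
    sdf_pred = shape_mlp(torch.cat([codes, points], dim=-1)).squeeze(-1)
    # SDF regression plus a small regularizer on the latent codes.
    loss = torch.nn.functional.l1_loss(sdf_pred, sdf_gt) + lambda_reg * codes.pow(2).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```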
Pose Latent Space
Set only_shape=False in config_train_HUMAN.py. We now need to load the best checkpoint from training the shape MLP. For that, go to config_train_HUMAN.py, make sure init_from = True where it first appears in the file, and then set this same variable to your pretrained model name later in the file:
init_from = "<model_name>"
checkpoint = <the_epoch_number_you_want_to_load>
Then, from within the npms folder, start the training:
python train.py
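For intuition, the pose MLP is conditioned on both a shape code and a pose code and predicts a flow (offset) that deforms points from the canonical t-pose space to the posed space, while the shape codes and shape MLP come from the checkpoint loaded above. The sketch below is a hypothetical illustration of that conditioning; the architecture and dimensions are assumptions, not the repository's implementation.

```python
# Conceptual sketch of how the pose MLP is conditioned; the network and
# dimensions are illustrative stand-ins.
import torch

shape_dim, pose_dim = 256, 256

pose_mlp = torch.nn.Sequential(
    torch.nn.Linear(shape_dim + pose_dim + 3, 512), torch.nn.ReLU(), torch.nn.Linear(512, 3)
)

def deform(points_tpose, shape_code, pose_code):
    # points_tpose: (B, N, 3); shape_code, pose_code: (B, code_dim)
    B, N, _ = points_tpose.shape
    codes = torch.cat([shape_code, pose_code], dim=-1)[:, None, :].expand(B, N, -1)
    flow = pose_mlp(torch.cat([codes, points_tpose], dim=-1))  # per-point 3D offset
    return points_tpose + flow                                 # points in posed space
```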
Once we reach convergence, you're done. You now have latent spaces of shape and pose that you can play with.
You could:
- fit your learned model to a monocular depth sequence (Fitting an NPM to a Monocular Depth Sequence)
- interpolate between two shape codes, or between two pose codes (Latent-space Interpolation)
- transfer poses from one identity to another (Shape and Pose Transfer)
Fitting an NPM to a Monocular Depth Sequence
Code Initialization
When fitting an NPM to a monocular depth sequence, it is recommended to have a relatively good initialization of our shape and pose codes to avoid falling into local minima. To this end, we are going to learn a shape and a pose encoder that map an input depth map to a shape and pose code, respectively.
We basically use the shape and pose codes that we've learned during training time as targets for training the shape and pose encoders. You can use prepare_labels_shape_encoder.py and prepare_labels_pose_encoder.py to generate the dataset labels for this encoder training.
You basically have to train them like so:
python encode_shape_codes.py
python encode_pose_codes.py
And regarding the data you need for training the encoder...
Data preparation: Take a look at the script voxelize_multiview.py to prepare the single-view voxel grids that we require to train our encoders.
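For reference, the encoders can be thought of as small 3D CNNs that map a single-view voxel grid to a latent code, trained with a regression loss against the codes obtained during training. The sketch below is a hypothetical illustration; the architecture, grid resolution and code size are assumptions, not the repository's exact encoder.

```python
# Hypothetical encoder sketch: a small 3D CNN regressing a latent code from a
# single-view voxel grid. Architecture, resolution and code size are illustrative.
import torch

class VoxelEncoder(torch.nn.Module):
    def __init__(self, code_dim=256, grid_res=64):
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv3d(1, 16, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv3d(16, 32, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv3d(32, 64, 4, stride=2, padding=1), torch.nn.ReLU(),
        )
        self.fc = torch.nn.Linear(64 * (grid_res // 8) ** 3, code_dim)

    def forward(self, voxels):  # voxels: (B, 1, R, R, R)
        return self.fc(self.conv(voxels).flatten(1))

encoder = VoxelEncoder()
# Training target: the shape (or pose) codes learned during NPM training, e.g.
# loss = torch.nn.functional.mse_loss(encoder(voxel_grid), target_codes)
```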
Test-time Optimization
Now you can fit NPMs to an input monocular depth sequence:
python fit_npm.py -o -d HUMAN -e <EXTRA_NAME_IF_YOU_WANT>
The -o flag stands for optimize; the -d flag selects the kind of dataset (HUMAN, MANO); and the -e flag appends a string to the name of the current optimization run.
You'll have to take a look at config_eval_HUMAN.py and set the name of your trained model (exp_model) and its hyperparameters, as well as the name of the dataset (dataset_name) you want to evaluate on.
It's definitely not the cleanest and easiest config file, sorry for that!
Data preparation: Take a look at the script compute_partial_sdf_grid.py to prepare the single-view SDF grid that we assume as input at test time.
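To give a rough mental model of what fit_npm.py does: the trained shape and pose MLPs stay frozen, and only a shape code plus one pose code per frame are optimized so that the model agrees with the observed partial SDF grids. The sketch below is a much-simplified, self-contained illustration of such a loop; the stand-in networks, dummy observations, dimensions, loss and regularization weights are all assumptions, not the repository's actual implementation.

```python
# Simplified, self-contained sketch of test-time code optimization.
import torch
import torch.nn.functional as F

shape_dim, pose_dim, num_frames, res = 256, 256, 10, 64

# Frozen stand-ins for the trained shape and pose MLPs.
shape_mlp = torch.nn.Sequential(
    torch.nn.Linear(shape_dim + 3, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
)
pose_mlp = torch.nn.Sequential(
    torch.nn.Linear(shape_dim + pose_dim + 3, 512), torch.nn.ReLU(), torch.nn.Linear(512, 3)
)
for p in list(shape_mlp.parameters()) + list(pose_mlp.parameters()):
    p.requires_grad_(False)

# Dummy stand-ins for the partial SDF grids produced by compute_partial_sdf_grid.py.
observed_sdf_grids = torch.zeros(num_frames, 1, res, res, res)

# Latent codes to optimize (ideally initialized with the encoders from above).
shape_code = torch.zeros(1, shape_dim, requires_grad=True)
pose_codes = torch.zeros(num_frames, pose_dim, requires_grad=True)
optimizer = torch.optim.Adam([shape_code, pose_codes], lr=1e-3)

points = torch.rand(4096, 3) - 0.5  # canonical-space queries inside [-0.5, 0.5]^3

for it in range(100):
    optimizer.zero_grad()
    loss = 0.0
    for t in range(num_frames):
        s = shape_code.expand(points.shape[0], -1)
        p = pose_codes[t].unsqueeze(0).expand(points.shape[0], -1)

        # Canonical SDF predicted by the (frozen) shape MLP.
        sdf_pred = shape_mlp(torch.cat([s, points], dim=-1)).squeeze(-1)

        # Warp canonical points into the posed space with the (frozen) pose MLP.
        warped = points + pose_mlp(torch.cat([s, p, points], dim=-1))

        # Look up the observed partial SDF at the warped locations
        # (grid_sample expects coordinates in [-1, 1]).
        grid = (warped * 2.0).view(1, 1, 1, -1, 3)
        sdf_obs = F.grid_sample(observed_sdf_grids[t:t + 1], grid, align_corners=True).view(-1)

        loss = loss + F.l1_loss(sdf_pred, sdf_obs)

    loss = loss + 1e-4 * (shape_code.pow(2).sum() + pose_codes.pow(2).sum())
    loss.backward()
    optimizer.step()
```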
Visualization
With the following script you can visualize your fitting. Have a look at config_viz_OURS.py and set the name of your trained model (exp_model) as well as the name of the optimization run (run_name) of the test-time fitting you just computed.
python viz_all_methods.py -m NPM -d HUMAN
There are a bunch of other scripts for visualization. They're definitely not cleaned-up, but I kept them here anyways in case they might be useful for you as a starting point.
Compute metrics
python compute_errors.py -n <name_of_optimization_run>
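As an example of the kind of metric such a script can compute, a symmetric Chamfer distance between a reconstruction and the ground-truth mesh can be sketched as follows (using trimesh and SciPy; compute_errors.py may compute different or additional metrics):

```python
# Illustrative metric sketch: symmetric (squared) Chamfer distance between two
# meshes, computed over sampled surface points.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def chamfer_l2(mesh_a_path, mesh_b_path, n_points=100_000):
    a = trimesh.sample.sample_surface(trimesh.load(mesh_a_path, process=False), n_points)[0]
    b = trimesh.sample.sample_surface(trimesh.load(mesh_b_path, process=False), n_points)[0]
    d_ab, _ = cKDTree(b).query(a)  # for each point in a, distance to closest point in b
    d_ba, _ = cKDTree(a).query(b)
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)
```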
Latent-space Interpolation
Check out the files:
Shape and Pose Transfer
Check out the files:
Pretrained Models
Download pre-trained models here
License
NPMs is released under the MIT License. See the LICENSE file for more details.
Check the corresponding LICENSES of the projects under the external folder.
For instance, we make use of libmesh and libvoxelize, which come from IFNet. Please check their LICENSE.
We also use some helper functions from LDIF, namely base_util.py and file_util.py, which should already be under npms/utils. Check the license and copyright in those files.