PyTorch PPF-FoldNet
This repo is an unofficial PyTorch implementation of PPF-FoldNet (https://arxiv.org/abs/1808.10322v1).
Project Structure
- `models/`: directory that saves the PPF-FoldNet models. PPF-FoldNet is an auto-encoder for the point pair features of a local patch. The input is a batch of local patches from the point cloud fragments, with shape [bs (num_patches), num_points_per_patch, 4]; the output of the encoder is the descriptor of each local patch, with shape [bs (num_patches), 1, 512], where 512 is the default codeword length.
  - `models_conv1d.py`: PPF-FoldNet model using nn.Conv1d layers.
  - `models_linear.py`: PPF-FoldNet model using nn.Linear layers. Theoretically, nn.Conv1d and nn.Linear should behave identically when (kernel_size=1, stride=1, padding=0, dilation=1). You can try `misc/linear_conv1d.py` for this experiment; a quick sanity check is also sketched after this list.
- `input_preparation.py`: used before training to prepare the inputs, including:
  - read the point cloud and voxel-downsample it.
  - choose reference points (interest points) from the point cloud.
  - collect the neighboring points around each reference point.
  - build a local patch from each reference point and its neighbors.
  - save the local patches as numpy arrays for later use.
  - I also wrote a function that prepares the PPF input on the fly (a rough sketch is shown after this list).
- `dataset.py`: defines the Dataset, which reads either the files generated in the input preparation stage or the input prepared on the fly.
- `dataloader.py`: defines the dataloader.
- `loss.py`: defines the Chamfer loss. (The Earth Mover's Distance loss is worth trying.) A minimal Chamfer-distance sketch is shown after this list.
- `trainer.py`: the trainer class; it handles the training process, including snapshots.
- `train.py`: the entry file; every time I start training, this file is copied to the snapshot directory.
- `geometric_registration/`: directory for evaluating the model through the task of geometric registration.
  - `gt_result/`: the ground-truth information provided by the 3DMatch benchmark.
  - `preparation.py`: calculates the descriptor for each interest point provided by the 3DMatch benchmark. (The PPF representation of each interest point needs to be calculated first.)
  - `evaluate_ppfnet.py`: uses the evaluation metric proposed in the PPF-FoldNet paper to evaluate the performance of the descriptors (a toy matching sketch is shown under "Evaluate the model" below):
    - get the point cloud from the `.ply` file and the interest point coordinates from the `.keypts.bin` file.
    - use the descriptors generated by `preparation.py` to register each pair of point cloud fragments and save the results in `pred_result/`.
    - after registering each pair of fragments, we can get the final `recall` of the descriptors.
  - `utils.py`
- `misc/`
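
The nn.Conv1d vs nn.Linear point mentioned for `models_linear.py` is easy to verify. Below is a minimal sketch (not the actual `misc/linear_conv1d.py`) that copies a Linear layer's weights into a Conv1d layer with kernel_size=1 and compares the outputs on a patch-shaped input, following the [bs, num_points_per_patch, 4] convention described above:

```python
import torch
import torch.nn as nn

# A batch of PPF patches: [num_patches, num_points_per_patch, 4]
x = torch.randn(32, 1024, 4)

linear = nn.Linear(4, 64)
conv = nn.Conv1d(4, 64, kernel_size=1, stride=1, padding=0, dilation=1)

# Conv1d weights have shape [out, in, kernel]; Linear weights have shape [out, in].
with torch.no_grad():
    conv.weight.copy_(linear.weight.unsqueeze(-1))
    conv.bias.copy_(linear.bias)

out_linear = linear(x)                                   # [32, 1024, 64]
out_conv = conv(x.transpose(1, 2)).transpose(1, 2)       # Conv1d expects [bs, channels, length]
print(torch.allclose(out_linear, out_conv, atol=1e-6))   # should print True
```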
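For the `input_preparation.py` steps, a rough sketch of the on-the-fly variant could look like the following. It assumes open3d is available for reading, voxel downsampling, normal estimation, and radius search; the function names, patch sizes, and radii here are illustrative rather than the actual API of this repo, and the patch features are the standard four-component PPF (two point-to-line angles, the normal-to-normal angle, and the distance):

```python
import numpy as np
import open3d as o3d

def compute_ppf(ref_pt, ref_n, pts, normals):
    """Four-dimensional point pair feature between one reference point and its neighbors."""
    d = pts - ref_pt                                      # [k, 3]
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    d_unit = d / np.maximum(dist, 1e-8)
    angle = lambda a, b: np.arccos(np.clip((a * b).sum(axis=1), -1.0, 1.0))
    ref_n = np.tile(ref_n, (len(pts), 1))
    f = [angle(ref_n, d_unit), angle(normals, d_unit), angle(ref_n, normals), dist[:, 0]]
    return np.stack(f, axis=1)                            # [k, 4]

def build_patches(ply_path, voxel_size=0.03, num_patches=2048, points_per_patch=1024, radius=0.3):
    pcd = o3d.io.read_point_cloud(ply_path)
    pcd = pcd.voxel_down_sample(voxel_size)               # voxel downsample
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
    pts, normals = np.asarray(pcd.points), np.asarray(pcd.normals)
    tree = o3d.geometry.KDTreeFlann(pcd)

    # choose reference (interest) points at random
    ref_idx = np.random.choice(len(pts), min(num_patches, len(pts)), replace=False)
    patches = []
    for i in ref_idx:
        # collect the neighbors of each reference point and build its local patch
        _, nn_idx, _ = tree.search_radius_vector_3d(pcd.points[i], radius)
        nn_idx = np.random.choice(np.asarray(nn_idx), points_per_patch, replace=True)
        patches.append(compute_ppf(pts[i], normals[i], pts[nn_idx], normals[nn_idx]))
    return np.stack(patches)  # [num_patches, points_per_patch, 4]; save with np.save for later use
```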
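`loss.py` provides the Chamfer loss. For reference, one common symmetric form of the Chamfer distance between the input patches and their reconstructions can be written in a few lines of PyTorch (an unoptimized, O(N²)-memory sketch; the exact formulation in `loss.py` may differ):

```python
import torch

def chamfer_loss(recon, target):
    """Symmetric Chamfer distance between two point sets of shape [bs, num_points, dims]."""
    # Pairwise squared distances: [bs, n_recon, n_target]
    dist = ((recon.unsqueeze(2) - target.unsqueeze(1)) ** 2).sum(-1)
    # Nearest target for every reconstructed point, and nearest reconstruction for every target point
    return dist.min(dim=2)[0].mean() + dist.min(dim=1)[0].mean()
```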
Data
- `rgbd_fragments/`: fragments of the training set.
- `intermediate-files-real/`: directory that saves the keypoint coordinates for each scene.
- `fragments/`: fragments of the test set.
Prepare the data
Use `script/download.sh` to download the training set from 3DMatch, and `script/fuse_fragments_3DMatch.py` to fuse the RGB-D frames into fragments.
The intermediate files are downloaded from this link.
Train the model
python train.py
All the configuration is in `train.py`. When you start training, `train.py` and `model.py` are saved to the `snapshot/` folder, and the TensorBoard file is saved in `tensorboard/`.
Evaluate the model
See the `geometric_registration/` files described above for details.
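
At a high level, the evaluation matches descriptors between every fragment pair and counts a pair as correctly matched when enough mutually nearest matches are within a distance threshold under the ground-truth pose; recall is the fraction of ground-truth overlapping pairs that pass. The snippet below only illustrates that matching/inlier-ratio step with scipy; the function names and helper structure are made up for illustration and are not the actual code in `geometric_registration/`:

```python
import numpy as np
from scipy.spatial import cKDTree

def mutual_matches(desc_src, desc_tgt):
    """Mutually nearest-neighbor matches between two descriptor sets of shape [N, 512]."""
    _, nn_in_tgt = cKDTree(desc_tgt).query(desc_src)   # nearest target for each source descriptor
    _, nn_in_src = cKDTree(desc_src).query(desc_tgt)   # nearest source for each target descriptor
    pairs = [(i, j) for i, j in enumerate(nn_in_tgt) if nn_in_src[j] == i]
    return np.array(pairs, dtype=int).reshape(-1, 2)

def inlier_ratio(matches, kpts_src, kpts_tgt, gt_transform, tau=0.1):
    """Fraction of matched keypoint pairs closer than tau (metres) under the ground-truth pose."""
    src = kpts_src[matches[:, 0]]
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    src_in_tgt = (gt_transform @ src_h.T).T[:, :3]
    dist = np.linalg.norm(src_in_tgt - kpts_tgt[matches[:, 1]], axis=1)
    return float((dist < tau).mean()) if len(dist) else 0.0

# A fragment pair counts as correctly matched when inlier_ratio exceeds a small threshold
# (the paper uses 5% with tau = 10 cm); recall averages this over all ground-truth pairs.
```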
Performance
| Model | Average Recall |
|---|---|
| PPF-FoldNet (this implementation) | 69.3% |
| PPF-FoldNet (original paper) | 71.8% |
| 3DMatch | 57.3% |
The model with the best performance is in the `pretrained/` folder.