# :space_invader: VADER :space_invader:

PyTorch code for "Generalizable Local Feature Pre-training for Deformable Shape Analysis" (CVPR 2023).

*You underestimate the power of the local side!*
## :construction_worker: Installation
- **Install Dependencies**: This implementation requires Python 3.7 or newer. Install the dependencies using pip:

  ```bash
  pip install -r requirements.txt
  ```
- **Install DiffVoxel**: Navigate to the `diffvoxel` folder and execute:

  ```bash
  python setup.py bdist_wheel
  pip install --upgrade dist/diffvoxel-0.0.1-*.whl
  ```
- **Install PointNet2**: Navigate to the `Pointnet2_PyTorch/pointnet2_ops_lib` folder and execute:

  ```bash
  python setup.py bdist_wheel
  pip install --upgrade dist/pointnet2_ops-3.0.0-*.whl  # or the version you have
  ```
## :book: Usage
In this repository, we provide the code for pre-training our network to learn local features that generalize across different shape categories, as well as the code for extracting the VADER features used in downstream tasks.
Our paper presents new insights into the transferability of features from networks trained on non-deformable shapes. Once the network is pretrained (we provide pretrained weights), VADER features can be extracted and used as replacements for traditional input features (like XYZ or HKS) in any downstream task.
For all experiments, we adapted the code from Diffusion-Net by substituting their input features with our VADER features. Visit their repository for detailed usage instructions.
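In practice, the substitution amounts to changing the per-vertex input tensor fed to the downstream network. The sketch below illustrates the idea with NumPy; the feature file name and the fallback behavior are our own illustration, not an API of this repository:

```python
# Illustration only: swap raw XYZ input features for precomputed VADER
# descriptors before feeding a DiffusionNet-style downstream model.
# The file name passed as `vader_path` is a hypothetical placeholder.
import numpy as np

def build_input_features(verts, vader_path=None):
    """Return per-vertex input features.

    If `vader_path` points to a saved (n_verts, d) VADER descriptor
    array, use it; otherwise fall back to raw XYZ coordinates.
    """
    verts = np.asarray(verts)
    if vader_path is not None:
        feats = np.load(vader_path)
        # One descriptor per vertex is required by the downstream net.
        assert feats.shape[0] == verts.shape[0]
    else:
        feats = verts  # baseline: (n_verts, 3) raw coordinates
    return feats.astype(np.float32)

# Toy usage: 5 vertices, no descriptor file -> XYZ fallback.
verts = np.random.rand(5, 3)
feats = build_input_features(verts)
print(feats.shape)  # (5, 3)
```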
- **Architecture Code**: Located in the `UPDesc` folder.
- **Pretrained Models**: Two models pretrained on the 3DMatch dataset are provided in the `UPDesc/demo/trained_models` folder, one using the supervised NCE loss and the other using the unsupervised cycle loss.
- **Extracting VADER Features**: Use the `extract_vader.py` script in `UPDesc/demo/` as follows:

  ```bash
  python3 extract_vader.py --model UPDescUniScale \
    --ckpt ./trained_models/name_of_pretrained_model/weights.ckpt \
    --hparams ./trained_models/name_of_pretrained_model/hparams.yaml \
    --data_root ./path/to/data --scale 6.0 --out_root ./path/to/save
  ```
  Here the `--scale` parameter is the factor by which the receptive field of the network is multiplied. It can be found either with the MMD-loss optimization described in the paper, or empirically (we found that scales between 5 and 6.5 work better for area-normalized human shapes, and scales between 4 and 6 work better for L2-normalized RNA shapes).
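For the empirical route, a simple loop over candidate scales is enough. The sketch below is a dry run (each command is `echo`ed rather than executed, and the paths are placeholders); drop the `echo` to actually launch the extractions:

```shell
# Dry run: print one extract_vader.py invocation per candidate scale.
# Replace the placeholder paths with your own before removing the echo.
for scale in 5.0 5.5 6.0 6.5; do
  echo python3 extract_vader.py --model UPDescUniScale \
    --ckpt ./trained_models/name_of_pretrained_model/weights.ckpt \
    --hparams ./trained_models/name_of_pretrained_model/hparams.yaml \
    --data_root ./path/to/data --scale "$scale" \
    --out_root "./path/to/save/scale_${scale}"
done
```

You can then pick the scale whose extracted features perform best on a small validation split of your downstream task.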
## :chart_with_upwards_trend: Results
If you wish to report our results, we have summarized them below. Our method is referred to as VADER. "X on Y" indicates that the method was trained on dataset X and tested on dataset Y.
- **Near-Isometric Shape Matching**: We provide results on the FAUST (F), SCAPE (S), and SHREC (SH) datasets, using their remeshed versions. We report the mean geodesic error, following the protocol used in all deep functional map papers. Our method is unsupervised.

  | Method | F on F | S on S | F on S | S on F | F on SH | S on SH |
  | ------ | ------ | ------ | ------ | ------ | ------- | ------- |
  | VADER  | 3.9    | 4.2    | 4.1    | 3.9    | 6.4     | 6.9     |
- **Molecular Surface Segmentation**: We provide results on the RNA molecules dataset. We report the mean accuracy, following the same protocol as the original paper. Our method is supervised. We provide results for training on the full dataset, on only 50 shapes, and on only 100 shapes.

  | Method | Full Dataset  | 50 Shapes     | 100 Shapes    |
  | ------ | ------------- | ------------- | ------------- |
  | VADER  | 92.6 ± 0.02%  | 83.2 ± 0.20%  | 86.8 ± 0.09%  |
- **Partial Animal Matching**: We provide results on the SHREC'16 Cuts dataset. We report the mean geodesic error, following the same protocol as all deep functional map papers. Our method is supervised.

  | Method | SHREC'16 Cuts |
  | ------ | ------------- |
  | VADER  | 3.7           |
## :mortar_board: Citation
If you find this work useful in your research, please consider citing:
```bibtex
@inproceedings{attaiki2023vader,
  title={Generalizable Local Feature Pre-training for Deformable Shape Analysis},
  author={Souhaib Attaiki and Lei Li and Maks Ovsjanikov},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```