point2vec
Self-Supervised Representation Learning on Point Clouds
Installation
1. Dependencies
- Python 3.10.4
- CUDA 11.6
- cuDNN 8.4.0
- GCC >= 6 and <= 11.2.1
```bash
pip install -U pip wheel
pip install torch torchvision -c requirements.txt --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
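To verify that the installed build matches these versions, a quick sanity check (not part of the repository) is:

```python
# Environment sanity check: confirm that PyTorch sees the expected CUDA/cuDNN build.
import torch

print(torch.__version__)               # should be a +cu116 build
print(torch.version.cuda)              # should report 11.6
print(torch.backends.cudnn.version())  # should report 8400 (i.e. cuDNN 8.4.0)
print(torch.cuda.is_available())       # True if a GPU is visible
```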
2. Datasets
See DATASETS.md for download instructions.
3. Check (optional)
```bash
python -m point2vec.datasets.process.check  # check if datasets are complete
./scripts/test.sh                           # check if training works
```
Model Zoo
Type | Dataset | Evaluation | Config | Checkpoint |
---|---|---|---|---|
Point2vec pre-trained | ShapeNet | - | config | checkpoint |
Classification fine-tuned | ModelNet40 | 94.65 / 94.77 (OA / Voting) | A & B | checkpoint |
Classification fine-tuned | ScanObjectNN | 87.47 (OA) | A & B | checkpoint |
Part segmentation fine-tuned | ShapeNetPart | 84.59 (Cat. mIoU) | config | checkpoint |
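The checkpoints are regular PyTorch `.ckpt` files; a minimal way to peek inside one (this assumes the standard PyTorch Lightning checkpoint layout, so the exact keys may differ):

```python
# Inspect a downloaded checkpoint (sketch; assumes a PyTorch Lightning .ckpt file).
import torch

ckpt = torch.load("epoch=799-step=64800.ckpt", map_location="cpu")
print(ckpt.keys())                         # typically "state_dict", "epoch", ...
state_dict = ckpt.get("state_dict", ckpt)  # fall back to a raw state dict
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```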
Reproducing the results from the paper
The scripts in this section use Weights & Biases for logging, so it's important to log in once with `wandb login` before running them. Checkpoints will be saved to the `artifacts` directory.
A note on reproducibility:
While reproducing our results on most datasets is straightforward, achieving the same test accuracy on ModelNet40 is more complicated due to the high variance between runs (see also https://github.com/Pang-Yatian/Point-MAE/issues/5#issuecomment-1074886349, https://github.com/ma-xu/pointMLP-pytorch/issues/1#issuecomment-1062563404, https://github.com/CVMI-Lab/PAConv/issues/9#issuecomment-886612074).
To obtain comparable results on ModelNet40, you will likely need to experiment with a few different seeds.
However, if you can precisely replicate our test environment (CUDA 11.6, cuDNN 8.4.0, Python 3.10.4, and the dependencies listed in the `requirements.txt` file) and use a Volta GPU (e.g. an Nvidia V100), you should be able to replicate our experiments exactly.
Using our exact environment is necessary to ensure that you obtain the same random state during training, as a seed alone does not guarantee reproducibility across different environments.
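As an illustration (not the repository's training code), fixing a seed and forcing deterministic cuDNN kernels looks roughly like this; even then, bit-exact results depend on the exact CUDA/cuDNN build and GPU architecture:

```python
# Sketch of seeding plus deterministic cuDNN settings; a seed alone is not enough
# for bit-exact reproducibility across different environments.
import torch
import pytorch_lightning as pl

pl.seed_everything(1, workers=True)        # corresponds to the --seed_everything flag
torch.backends.cudnn.deterministic = True  # prefer deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning
```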
Point2vec pre-training on ShapeNet
```bash
./scripts/pretraining_shapenet.bash --data.in_memory true
```
<details>
<summary>Training curve</summary>
</details>
Classification fine-tuning on ScanObjectNN
Replace `XXXXXXXX` with the `WANDB_RUN_ID` from the pre-training run, or use the checkpoint from the model zoo.

```bash
./scripts/classification_scanobjectnn.bash --config configs/classification/_pretrained.yaml --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
```
<details>
<summary>Training curve</summary>
</details>
Classification fine-tuning on ModelNet40
Replace `XXXXXXXX` with the `WANDB_RUN_ID` from the pre-training run, or use the checkpoint from the model zoo.

```bash
./scripts/classification_modelnet40.bash --config configs/classification/_pretrained.yaml --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt --seed_everything 1
```
<details>
<summary>Training curve</summary>
</details>
Voting on ModelNet40
Replace `XXXXXXXX` with the `WANDB_RUN_ID` from the fine-tuning run, and `epoch=XXX-step=XXXXX-val_acc=0.XXXX.ckpt` with the best checkpoint from that run, or use the checkpoint from the model zoo.

```bash
./scripts/voting_modelnet40.bash --finetuned_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=XXX-step=XXXXX-val_acc=0.XXXX.ckpt
```
<details>
<summary>Voting Process</summary>
</details>
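For reference, voting-style evaluation typically averages the model's predictions over several augmented copies of each test sample; a rough sketch (the exact augmentations and number of votes used by `scripts/voting_modelnet40.bash` may differ):

```python
# Rough sketch of voting: average softmax predictions over augmented copies.
import torch

@torch.no_grad()
def vote_predict(model, points, num_votes=10, augment=lambda p: p):
    """model and augment are placeholders; points is a (batch, num_points, 3) tensor."""
    model.eval()
    probs_sum = 0.0
    for _ in range(num_votes):
        probs_sum = probs_sum + model(augment(points)).softmax(dim=-1)
    return (probs_sum / num_votes).argmax(dim=-1)
```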
Classification fine-tuning on ModelNet Few-Shot
Replace `XXXXXXXX` with the `WANDB_RUN_ID` from the pre-training run, or use the checkpoint from the model zoo. You may also pass e.g. `--data.way 5` or `--data.shot 20` to select the desired m-way–n-shot setting.

```bash
for i in $(seq 0 9);
do
    SLURM_ARRAY_TASK_ID=$i ./scripts/classification_modelnet_fewshot.bash --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
done
```
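Few-shot results are usually reported as the mean and standard deviation over the 10 folds; a small helper for aggregating the per-fold test accuracies (the values below are placeholders, not results):

```python
# Aggregate the 10 per-fold accuracies into mean ± std.
import statistics

fold_accuracies = [0.0] * 10  # fill in the 10 test accuracies from the runs above
mean = statistics.mean(fold_accuracies)
std = statistics.pstdev(fold_accuracies)  # population std; some works use the sample std
print(f"{mean * 100:.2f} ± {std * 100:.2f}")
```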
Part segmentation fine-tuning on ShapeNetPart
Replace `XXXXXXXX` with the `WANDB_RUN_ID` from the pre-training run, or use the checkpoint from the model zoo.

```bash
./scripts/part_segmentation_shapenetpart.bash --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
```
<details>
<summary>Training curve</summary>
</details>
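The reported metric is category mIoU (Cat. mIoU), which, as commonly defined for ShapeNetPart, first averages the per-shape part IoU within each object category and then averages across categories; a compact sketch of that aggregation:

```python
# Sketch of category-mean IoU (Cat. mIoU) aggregation for part segmentation.
from collections import defaultdict
import statistics

def category_miou(per_shape_miou, per_shape_category):
    """per_shape_miou: per-shape mean part IoUs; per_shape_category: matching category labels."""
    by_category = defaultdict(list)
    for miou, category in zip(per_shape_miou, per_shape_category):
        by_category[category].append(miou)
    return statistics.mean(statistics.mean(values) for values in by_category.values())
```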
Baselines
<details>
<summary>Expand</summary>

Data2vec–pc
Replace the pre-training step with:
```bash
./scripts/pretraining_shapenet.bash --data.in_memory true --model.learning_rate 2e-3 --model.decoder false --trainer.devices 2 --data.batch_size 1024 --model.fix_estimated_stepping_batches 16000
```
If you only have a single GPU (and enough VRAM), you may replace `--trainer.devices 2 --data.batch_size 1024 --model.fix_estimated_stepping_batches 16000` with `--data.batch_size 2048`, which keeps the effective batch size at 2048 (2 × 1024) on a single device.
From scratch

Skip the pre-training step, and omit all occurrences of `--config configs/classification/_pretrained.yaml` and `--model.pretrained_ckpt_path ...`.

</details>
Visualization
We use PCA to project the learned representations into RGB space. Both a random initialization and data2vec–pc pre-training show a fairly strong positional bias, whereas point2vec exhibits a stronger semantic grouping without being trained on downstream dense prediction tasks.
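A minimal sketch of this projection, assuming per-point features of shape `(num_points, feature_dim)` (the repository's actual visualization code may differ):

```python
# Project per-point features to RGB with a 3-component PCA, then rescale to [0, 1].
import numpy as np
from sklearn.decomposition import PCA

def features_to_rgb(features: np.ndarray) -> np.ndarray:
    """features: (num_points, feature_dim) -> (num_points, 3) RGB values in [0, 1]."""
    components = PCA(n_components=3).fit_transform(features)
    low, high = components.min(axis=0), components.max(axis=0)
    return (components - low) / (high - low + 1e-8)
```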
Citing point2vec
If you use point2vec in your research, please use the following BibTeX entry.
```bibtex
@inproceedings{abouzeid2023point2vec,
  title={Point2Vec for Self-Supervised Representation Learning on Point Clouds},
  author={Abou Zeid, Karim and Schult, Jonas and Hermans, Alexander and Leibe, Bastian},
  booktitle={German Conference on Pattern Recognition (GCPR)},
  year={2023},
}
```