🪖 ReCon: Contrast with Reconstruct

Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining ICML 2023 <br> Zekun Qi*, Runpei Dong*, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma and Li Yi <br>

OpenReview | arXiv | Models

This repository contains the code release of our ICML 2023 paper Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining. ReCon is also short for reconnaissance 🪖.

Contrast with Reconstruct (ICML 2023)

<div align="center"> <img src="./figure/framework.png" width = "1100" align=center /> </div>

1. Requirements

PyTorch >= 1.7.0; Python >= 3.7; CUDA >= 9.0; GCC >= 4.9; torchvision

# Quick Start
conda create -n recon python=3.10 -y
conda activate recon

conda install pytorch==2.0.1 torchvision==0.15.2 pytorch-cuda=11.8 -c pytorch -c nvidia
# pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 -f https://download.pytorch.org/whl/torch_stable.html
# Install basic required packages
pip install -r requirements.txt
# Chamfer Distance
cd ./extensions/chamfer_dist && python setup.py install --user
# PointNet++
pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"

2. Datasets

We use ShapeNet, ScanObjectNN, ModelNet40 and ShapeNetPart in this work. See DATASET.md for details.

3. ReCon Models

| Task | Dataset | Config | Acc. | Checkpoints Download |
| :--- | :--- | :--- | :--- | :--- |
| Pre-training | ShapeNet | pretrain_base.yaml | N.A. | ReCon |
| Classification | ScanObjectNN | finetune_scan_hardest.yaml | 91.26% | PB_T50_RS |
| Classification | ScanObjectNN | finetune_scan_objbg.yaml | 95.35% | OBJ_BG |
| Classification | ScanObjectNN | finetune_scan_objonly.yaml | 93.80% | OBJ_ONLY |
| Classification | ModelNet40 (1k) | finetune_modelnet.yaml | 94.5% | ModelNet_1k |
| Classification | ModelNet40 (8k) | finetune_modelnet_8k.yaml | 94.7% | ModelNet_8k |
| Zero-Shot | ModelNet10 | zeroshot_modelnet10.yaml | 75.6% | ReCon zero-shot |
| Zero-Shot | ModelNet10* | zeroshot_modelnet10.yaml | 81.6% | ReCon zero-shot |
| Zero-Shot | ModelNet40 | zeroshot_modelnet40.yaml | 61.7% | ReCon zero-shot |
| Zero-Shot | ModelNet40* | zeroshot_modelnet40.yaml | 66.8% | ReCon zero-shot |
| Zero-Shot | ScanObjectNN | zeroshot_scan_objonly.yaml | 43.7% | ReCon zero-shot |
| Linear SVM | ModelNet40 | svm.yaml | 93.4% | ReCon svm |
| Part Segmentation | ShapeNetPart | segmentation | 86.4% mIoU | part seg |

| Task | Dataset | Config | 5w10s (%) | 5w20s (%) | 10w10s (%) | 10w20s (%) | Download |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Few-shot learning | ModelNet40 | fewshot.yaml | 97.3 ± 1.9 | 98.9 ± 1.2 | 93.3 ± 3.9 | 95.8 ± 3.0 | ReCon |

The checkpoints and logs have been released on Google Drive. You can use the voting strategy in classification testing to reproduce the performance reported in the paper. For classification downstream tasks, we randomly select 8 seeds to obtain the best checkpoint. For zero-shot learning, * means that we use all the train/test data for zero-shot transfer.

4. ReCon Pre-training

To pre-train ReCon with the default configuration, run the script:

sh scripts/pretrain.sh <GPU> <exp_name>

If you want to try different models, masking ratios, etc., first create a new config file and pass its path to --config:

CUDA_VISIBLE_DEVICES=<GPU> python main.py --config <config_path> --exp_name <exp_name>
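
For example, the commands below show a default run on GPU 0 and a run with a custom config; the experiment names and the custom config path are illustrative and not files shipped with the repository:

# Example: pre-train on GPU 0 with the default configuration
sh scripts/pretrain.sh 0 recon_pretrain_base
# Example: pre-train with your own config (hypothetical path) on GPU 0
CUDA_VISIBLE_DEVICES=0 python main.py --config cfgs/my_pretrain_variant.yaml --exp_name recon_mask_ratio_exp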

5. ReCon Classification Fine-tuning

To fine-tune with the default configuration, run the script:

bash scripts/cls.sh <GPU> <exp_name> <path/to/pre-trained/model>

Alternatively, you can run the commands directly.

To fine-tune on ScanObjectNN, run:

CUDA_VISIBLE_DEVICES=<GPUs> python main.py --config cfgs/full/finetune_scan_hardest.yaml \
--finetune_model --exp_name <exp_name> --ckpts <path/to/pre-trained/model>

To fine-tune on ModelNet40, run:

CUDA_VISIBLE_DEVICES=<GPUs> python main.py --config cfgs/full/finetune_modelnet.yaml \
--finetune_model --exp_name <exp_name> --ckpts <path/to/pre-trained/model>
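
As a concrete example, the following illustrative invocation fine-tunes on the hardest ScanObjectNN split (PB_T50_RS) on GPU 0; the checkpoint path is hypothetical and should point to the downloaded pre-trained ReCon weights:

# Illustrative example; replace the --ckpts path with your downloaded pre-trained model
CUDA_VISIBLE_DEVICES=0 python main.py --config cfgs/full/finetune_scan_hardest.yaml \
--finetune_model --exp_name hardest_run1 --ckpts ./checkpoints/recon_pretrain.pth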

6. ReCon Test & Voting

To run testing with voting using the default configuration, run the script:

bash scripts/test.sh <GPU> <exp_name> <path/to/best/fine-tuned/model>

or:

CUDA_VISIBLE_DEVICES=<GPUs> python main.py --test --config cfgs/finetune_modelnet.yaml \
--exp_name <output_file_name> --ckpts <path/to/best/fine-tuned/model>
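
Point --config at the configuration that matches the dataset the checkpoint was fine-tuned on. For instance, an illustrative test of a ScanObjectNN hardest-split model could look like this; the checkpoint path is hypothetical:

# Illustrative example; --config should match the fine-tuning config of the checkpoint
CUDA_VISIBLE_DEVICES=0 python main.py --test --config cfgs/full/finetune_scan_hardest.yaml \
--exp_name test_hardest --ckpts ./experiments/hardest_run1/ckpt-best.pth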

7. ReCon Few-Shot

To run few-shot learning with the default configuration, run the script:

sh scripts/fewshot.sh <GPU> <exp_name> <path/to/pre-trained/model> <way> <shot> <fold>

or

CUDA_VISIBLE_DEVICES=<GPUs> python main.py --config cfgs/full/fewshot.yaml --finetune_model \
--ckpts <path/to/pre-trained/model> --exp_name <exp_name> --way <5 or 10> --shot <10 or 20> --fold <0-9>
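
For instance, an illustrative 10-way, 20-shot run on fold 0 looks like this (the checkpoint path is hypothetical):

# Illustrative example: 10-way, 20-shot, fold 0
CUDA_VISIBLE_DEVICES=0 python main.py --config cfgs/full/fewshot.yaml --finetune_model \
--ckpts ./checkpoints/recon_pretrain.pth --exp_name fewshot_10w20s_f0 --way 10 --shot 20 --fold 0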

8. ReCon Zero-Shot

To run zero-shot evaluation with the default configuration, run the script:

bash scripts/zeroshot.sh <GPU> <exp_name> <path/to/pre-trained/model>

9. ReCon Part Segmentation

To fine-tune for part segmentation on ShapeNetPart, run:

cd segmentation
bash seg.sh <GPU> <exp_name> <path/to/pre-trained/model>

or

cd segmentation
python main.py --ckpts <path/to/pre-trained/model> --log_dir <path/to/log/dir> --learning_rate 0.0001 --epoch 300

To test part segmentation on ShapeNetPart, run:

cd segmentation
bash test.sh <GPU> <exp_name> <path/to/best/fine-tuned/model>

10. ReCon Linear SVM

To evaluate with a linear SVM on ModelNet40, run:

sh scripts/svm.sh <GPU> <exp_name> <path/to/pre-trained/model> 

11. Visualization

We use the PointVisualizaiton repo to render point cloud images, including specified color rendering and attention distribution rendering.

Contact

If you have any questions related to the code or the paper, feel free to email Zekun (qizekun@gmail.com) or Runpei (runpei.dong@gmail.com).

License

ReCon is released under the MIT License. See the LICENSE file for more details. In addition, the licensing information for the pointnet2 modules is available here.

Acknowledgements

This codebase is built upon Point-MAE, Point-BERT, CLIP, Pointnet2_PyTorch, and ACT.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{qi2023recon,
  title={Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining},
  author={Qi, Zekun and Dong, Runpei and Fan, Guofan and Ge, Zheng and Zhang, Xiangyu and Ma, Kaisheng and Yi, Li},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2023}
}

and the closely related works ACT and ShapeLLM:

@inproceedings{dong2023act,
  title={Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?},
  author={Runpei Dong and Zekun Qi and Linfeng Zhang and Junbo Zhang and Jianjian Sun and Zheng Ge and Li Yi and Kaisheng Ma},
  booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
  year={2023},
  url={https://openreview.net/forum?id=8Oun8ZUVe8N}
}
@inproceedings{qi2024shapellm,
  author = {Qi, Zekun and Dong, Runpei and Zhang, Shaochen and Geng, Haoran and Han, Chunrui and Ge, Zheng and Yi, Li and Ma, Kaisheng},
  title = {ShapeLLM: Universal 3D Object Understanding for Embodied Interaction},
  booktitle={European Conference on Computer Vision (ECCV)},
  year = {2024}
}