
<span style="color:lightblue">[CVPR2024]</span> GPT4Point<a> <img src="./readme_figs/icon.png" width="30" /> </a>: A Unified Framework for Point-Language Understanding and Generation

<p align="center"> <a href="http://arxiv.org/abs/2312.02980" target='_blank'> <img src="https://img.shields.io/badge/arXiv paper-2312.02980šŸ“–-blue?"> </a> <a href="https://gpt4point.github.io/" target='_blank'> <img src="https://img.shields.io/badge/Project-&#x1F680-blue"> </a> <a href="https://gpt4point.github.io/" target='_blank'> <img src="https://img.shields.io/badge/version-v1.0-green"> </a> </p>

šŸ”„ News

šŸ”„ 2024/04/27: We have updated the point encoder; evaluation now works as intended, while the training code still needs revision.

šŸ”„ 2024/04/13: We release GPT4Point <span style="color:red">v1.0</span>, including the training and 3D captioning evaluation code.

šŸ”„ 2024/04/05: Our paper GPT4Point was selected as a CVPR'24 Highlight (top 2.84%, 324/11532)!

šŸ”„ 2024/02/27: Our paper GPT4Point is accepted by CVPR'24!

šŸ”„ 2024/01/19: We release the download and extraction pipeline for Objaverse-XL (point cloud format).

šŸ”„ 2023/12/05: The GPT4Point paper (arxiv) has been released; it unifies point-language understanding and generation.

šŸ”„ 2023/08/13: Two-stage Pre-training code of PointBLIP has been released.

šŸ”„ 2023/08/13: Part of the datasets used and the result files have been uploaded.

šŸ  Overview

<p align="center"> <a> <img src="./readme_figs/fig1_teaser.png" width="1000" /> </a> </p>

This project presents GPT4Point<a> <img src="./readme_figs/icon.png" width="20" /> </a>, a 3D multi-modality model that aligns 3D point clouds with language. More details are available on the project page.

šŸ§­ Version

šŸ”§ Installation

1. (Optional) Create and activate a conda environment:

```shell
conda create -n gpt4point python=3.8
conda activate gpt4point
```

2. Install from PyPI:

```shell
pip install salesforce-lavis
```

3. Or, for development, build from source:

```shell
git clone https://github.com/salesforce/LAVIS.git
cd LAVIS
pip install -e .
```

šŸ“¦ Data Preparation

1. Annotations: All annotations will be downloaded automatically through Hugging Face.

2. Point Cloud: You can download the Cap3D point cloud dataset via the Google Drive link. Unzip the 10 `tar.gz` files and merge their contents; the resulting folder structure is:

```
GPT4Point
ā”œā”€ā”€ data
ā”‚   ā”œā”€ā”€ cap3d
ā”‚   ā”‚   ā”œā”€ā”€ points
ā”‚   ā”‚   ā”‚    ā”œā”€ā”€ Cap3D_pcs_8192_xyz_w_color
ā”‚   ā”‚   ā”‚    ā”‚    ā”œā”€ā”€ <point cloud id>.pkl
ā”‚   ā”‚   ā”‚    ā”‚    ā”œā”€ā”€ ...
ā”‚   ā”‚   ā”‚    ā”‚    ā”œā”€ā”€ <point cloud id>.pkl
ā”‚   ā”‚   ā”œā”€ā”€ annotations
ā”‚   ā”‚   ā”‚    ā”œā”€ā”€ cap3d_caption_train.json
ā”‚   ā”‚   ā”‚    ā”œā”€ā”€ cap3d_caption_val.json
ā”‚   ā”‚   ā”‚    ā”œā”€ā”€ cap3d_real_and_chatgpt_caption_test.json
ā”‚   ā”‚   ā”‚    ā”œā”€ā”€ cap3d_real_and_chatgpt_caption_test_gt.json (for evaluation)
```
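A quick way to sanity-check the downloaded data is to open one `<point cloud id>.pkl` file in Python. The sketch below assumes each file stores a NumPy array of shape `(8192, 6)` (xyz coordinates plus RGB color, matching the folder name `Cap3D_pcs_8192_xyz_w_color`); the actual serialized format may differ, so treat this as a starting point rather than a guaranteed loader. The demo writes a synthetic file standing in for a real one:

```python
import pickle
import numpy as np

def load_point_cloud(path):
    """Load one Cap3D point-cloud .pkl file.

    Assumes (this is an assumption, not documented here) that the file
    stores an (N, 6) array: xyz in the first three columns, RGB color
    in the last three.
    """
    with open(path, "rb") as f:
        pts = pickle.load(f)
    pts = np.asarray(pts, dtype=np.float32)
    assert pts.ndim == 2 and pts.shape[1] == 6, "expected an (N, 6) xyz+rgb array"
    return pts[:, :3], pts[:, 3:]  # coordinates, colors

# Demo with a synthetic file in place of a real <point cloud id>.pkl
demo = np.random.rand(8192, 6).astype(np.float32)
with open("demo_cloud.pkl", "wb") as f:
    pickle.dump(demo, f)

xyz, rgb = load_point_cloud("demo_cloud.pkl")
print(xyz.shape, rgb.shape)  # (8192, 3) (8192, 3)
```

If a real file fails the shape assertion, print the loaded object's type and shape to see how the format actually differs.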

šŸš† Training

1. For stage 1 training:

```shell
python -m torch.distributed.run --master_port=32339 --nproc_per_node=4 train.py --cfg-path lavis/projects/gpt4point/train/pretrain_stage1_cap3d.yaml
```

2. For stage 2 training:

```shell
python -m torch.distributed.run --master_port=32339 --nproc_per_node=4 train.py --cfg-path lavis/projects/gpt4point/train/pretrain_stage2_cap3d_opt2.7b.yaml
```

šŸ Evaluation

```shell
python -m torch.distributed.run --master_port=32239 --nproc_per_node=1 evaluate.py --cfg-path lavis/projects/gpt4point/eval/captioning3d_cap3d_opt2.7b_eval.yaml
```
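Captioning evaluation compares generated captions against the ground truth in `cap3d_real_and_chatgpt_caption_test_gt.json` using standard n-gram metrics. As a rough illustration of what such metrics measure (not the evaluation code this repo uses), here is a minimal clipped unigram precision, the core of BLEU-1 without the brevity penalty:

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: fraction of candidate words that
    appear in the reference, with each word counted at most as many
    times as it occurs in the reference."""
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / max(len(cand), 1)

# Hypothetical prediction / ground-truth pair for illustration
pred = "a small wooden chair with four legs"
gt = "a wooden chair with four legs and a backrest"
score = unigram_precision(pred, gt)
print(round(score, 3))  # 0.857 — 6 of the 7 candidate words match
```

Real captioning benchmarks aggregate several such metrics (BLEU with higher-order n-grams, METEOR, ROUGE-L, CIDEr) over the whole test set; this sketch only conveys the basic word-overlap idea.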

šŸ“¦ Point Dataset and Data Annotation Engine (Optional)

Objaverse-XL Point Dataset Download

Note that you should first `cd` into the `Objaverse-xl_Download` directory:

```shell
cd ./Objaverse-xl_Download
```

Then please see the folder `Objaverse-xl_Download` for details.

Objaverse-XL Point Cloud Data Generation

Please see the `Extract_Pointcloud` folder for details.

šŸ“ TODO List

- [ ] Dataset and Data Engine

šŸ”— Citation

If you find our work helpful, please cite:

```bibtex
@inproceedings{GPT4Point,
  title={GPT4Point: A Unified Framework for Point-Language Understanding and Generation},
  author={Zhangyang Qi and Ye Fang and Zeyi Sun and Xiaoyang Wu and Tong Wu and Jiaqi Wang and Dahua Lin and Hengshuang Zhao},
  booktitle={CVPR},
  year={2024},
}
```

šŸ“„ License

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/80x15.png" /></a> <br /> This work is under the <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

šŸ“š Related Work

Together, let's make LLMs for 3D great!