PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
Jiaxu Wang<sup>†</sup>, Ziyi Zhang<sup>†</sup>, Junhao He, Renjing Xu*
ECCV 2024
If you find this project useful, please cite our paper; that is the greatest support for us.
Important Update (December 2024)
UPDATE:
1. We fixed a bug in the rasterizer that could cause compilation errors on some machines.
2. Please install diff-gaussian-rasterization directly from this repo; it differs slightly from the original Feature-3DGS version.
3. If you would like the complete dataset preprocessed into our data structure, please email us and we will send it to you.
Requirements (Tested on 1 × RTX 3090)
- Linux
- Python == 3.8
- PyTorch == 1.13.0
- CUDA == 11.7
Installation
Install from environment.yml
You can directly install the requirements through:
$ conda env create -f environment.yml
Or install packages separately
- Create Environment

  $ conda create --name PFGS python=3.8
  $ conda activate PFGS
- PyTorch (please check your CUDA version first)

  $ conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
- Other Python packages: open3d, opencv-python, etc.
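To verify the environment, a quick sanity check (a minimal sketch using only PyTorch itself):

```python
import torch

# Sanity-check the install: versions and GPU visibility.
print(torch.__version__)          # expect 1.13.0
print(torch.version.cuda)         # expect 11.7
print(torch.cuda.is_available())  # expect True on a working setup
```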
Gaussian Rasterization with High-dimensional Features
pip install ./submodules/diff-gaussian-rasterization
You can customize NUM_SEMANTIC_CHANNELS in submodules/diff-gaussian-rasterization/cuda_rasterizer/config.h for any number of feature dimensions you want.
[Attention] This rasterization is borrowed from Feature-3DGS with some minor differences. Please install the rasterization directly from this repo.
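For example, assuming the header keeps the original Feature-3DGS layout, rasterizing 32-dimensional features (32 is just an illustrative value) would mean editing the line to `#define NUM_SEMANTIC_CHANNELS 32` and then rerunning `pip install ./submodules/diff-gaussian-rasterization` so the CUDA kernels are recompiled.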
Build third_party (optional)
python build_pkg.py
Dataset
ScanNet:
- Download and extract data using the original ScanNet-V2 preprocessing.
- Dataset structure:

  scannet
  ├── scene0000_00
  │   ├── pose
  │   │   └── 1.txt
  │   ├── intrinsic
  │   │   └── *.txt
  │   ├── color
  │   │   └── 1.jpg
  │   ├── scene0000_00_vh_clean_2.ply
  │   └── images.txt
  └── scene0000_01
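As a rough illustration of how one frame in this layout can be read (a sketch only; the intrinsic file name follows the standard ScanNet-V2 export and is an assumption, as is the use of OpenCV/Open3D here rather than our dataloader):

```python
import numpy as np
import cv2
import open3d as o3d

scene = "scannet/scene0000_00"
pose = np.loadtxt(f"{scene}/pose/1.txt")                               # 4x4 camera-to-world matrix
K = np.loadtxt(f"{scene}/intrinsic/intrinsic_color.txt")               # intrinsics (assumed file name)
img = cv2.imread(f"{scene}/color/1.jpg")                               # color frame (BGR order)
pcd = o3d.io.read_point_cloud(f"{scene}/scene0000_00_vh_clean_2.ply")  # input point cloud
```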
DTU:
- We reorganized the original datasets into our own format. Here we provide a demonstration of the DTU test set, which can be downloaded here
- Pretrain
THuman2:
- Download the 3D models and extract data from the original THuman2 dataset.
- Render 36 views of each 3D model with Blender and sparsely sample points (80k) on the model surface; see the sampling sketch after this list.
- Demo and Pretrain
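The Blender rendering is scene-specific, but the surface-sampling step can be illustrated with Open3D (a listed dependency); the mesh and output paths below are hypothetical:

```python
import open3d as o3d

# Sample ~80k points uniformly from a THuman2 mesh surface (sketch).
mesh = o3d.io.read_triangle_mesh("thuman2/0001/0001.obj")   # hypothetical path
pcd = mesh.sample_points_uniformly(number_of_points=80000)  # "8w" = 80,000 points
o3d.io.write_point_cloud("thuman2/0001/points.ply", pcd)    # hypothetical output path
```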
Train Stage 1
ScanNet:
python train_stage1.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage1 --img_wh 640 512
DTU:
python train_stage1.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage1 --img_wh 640 512
THuman2:
python train_stage1.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage1 --img_wh 512 512 --scale_max 0.0001
Train Stage 2
ScanNet:
python train_stage2.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage2 --img_wh 640 512 --ckpt_stage1 $ckpt_stage1_path
DTU:
python train_stage2.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage2 --img_wh 640 512 --ckpt_stage1 $ckpt_stage1_path
THuman2:
python train_stage2.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage2 --img_wh 512 512 --scale_max 0.0001 --ckpt_stage1 $ckpt_stage1_path
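For reference, a complete two-stage run on ScanNet chains the stages through the stage-1 checkpoint; the checkpoint filename below is a hypothetical placeholder for whatever is actually written under ./log/scannet_stage1:

python train_stage1.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage1 --img_wh 640 512
python train_stage2.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage2 --img_wh 640 512 --ckpt_stage1 ./log/scannet_stage1/last.ckpt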
Eval
ScanNet:
python train_stage2.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage2_eval --img_wh 640 512 --resume_path $ckpt_stage2_path --val_mode test
DTU:
python train_stage2.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage2_eval --img_wh 640 512 --resume_path $ckpt_stage2_path --val_mode test
THuman2:
python train_stage2.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage2_eval --img_wh 512 512 --scale_max 0.0001 --resume_path $ckpt_stage2_path --val_mode test
The results will be saved in ./log/$exp_name
Acknowledgements
In this repository, we use code and datasets from the following repositories. We thank all the authors for sharing their great code and datasets.
Citation
@misc{wang2024pfgshighfidelitypoint,
title={PFGS: High Fidelity Point Cloud Rendering via Feature Splatting},
author={Jiaxu Wang and Ziyi Zhang and Junhao He and Renjing Xu},
year={2024},
eprint={2407.03857},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03857},
}