# VastGaussian

A Chinese version of this README is also available.

<div align="center"> <img src=image/img.png> </div>

This is an unofficial implementation of VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction. Since this is my first time recreating a complete method from scratch, the code may contain errors, and the engineering may look a bit naive to experienced developers. But it is a start: I could not find any other implementation of VastGaussian on the web, so I gave it a try.

If you have any feedback or suggestions for code changes, feel free to contact me or simply open an Issue :grinning:

Email: 374774222@qq.com

WeChat: k374774222

<a style="color: red">I am refactoring this repository; the current code is still a bit messy and disorganized.</a>

## ToDo List

## Some notes

  1. I made some changes to the original 3DGS. First, I moved the 3DGS hyperparameters from arguments/__init__.py into arguments/parameters.py to make them easier to read and understand.
  2. To keep the original 3DGS directory structure unchanged, I added a new VastGaussian_scene module to hold the VastGaussian code; part of it calls existing functions in the scene folder. To fix an import error, I also moved the Scene class into datasets.py.

<div align="center"> <img src=image/img2.png align="center"> <img src=image/img_1.png align="center"> </div>

  3. The file naming is consistent with the method names in the paper for easy reading.

<div align="center"> <img src=image/img_3.png width=800> </div>

  4. I added a new file, train_vast.py, for the VastGaussian training process; if you want to train the original 3DGS, please use train.py.
  5. The paper mentions Manhattan world alignment, which makes the Y-axis of the world coordinate system perpendicular to the ground plane. I learned that this adjustment can be done manually with the threejs editor (https://threejs.org/editor/) or the software CloudCompare; after manually adjusting the scene, you obtain the position and rotation values and pass them to training via the --pos and --rot command-line arguments (see the sketch after these notes for how they are applied).

1. Using threejs for Manhattan alignment

<div align="center"> <img src=image/img_7.png align="center" width=600> </div>

- Now you can adjust your point cloud so that the ground is perpendicular to the y-axis and the boundaries are as parallel as possible to the x- and z-axes, using the options on the left; of course, you can also enter the corresponding values directly in the editing area on the right.

<div align="center"> <img src=image/img_8.png height=400> <img src=image/img_9.png height=700> </div>

- Then you can read off the resulting parameters in the editing area on the right.

2. Using cloudcompare for Manhattan alignment

<div align="center"> <img src="image/img_6.png" width="800"> </div>
<div align="center"> <img src="image/img_10.png" width="800"> <img src="image/img_11.png" width="800"> </div>
<div align="center"> <img src="image/img_12.png" width="800"> <img src="image/img_13.png" width="800"> </div>
<div align="center"> <img src="image/img_14.png" width="800"> <img src="image/img_15.png" width="800"> </div>
<div align="center"> <img src="image/img_16.png" width="800"> <img src="image/img_17.png" width="800"> <img src="image/img_18.png" width="800"> <img src="image/img_19.png" width="800"> </div>
<div align="center"> <img src="image/img_20.png" width="800"> </div>
  6. During implementation I tested with the small-scale data provided by 3DGS; larger scenes cannot run on my local machine, and according to the paper, large-scale data requires at least 32 GB of GPU memory.
  7. The paper is not very clear about the details of some operations, so parts of my implementation are based on my own guesses and understanding. As a result there may be bugs, and some of the code may look clumsy to experts. If you run into problems while using it, please contact me so we can make progress together.
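To make the Manhattan alignment step concrete, here is a minimal sketch of how the --pos and --rot values could be applied to a point cloud. The helper name and the row-major interpretation of the nine --rot values are my own assumptions, not the official code:

```python
import numpy as np

def manhattan_align(points, pos, rot):
    """Apply a Manhattan-alignment transform to an (N, 3) point cloud.

    pos: 3 translation values (as passed to --pos).
    rot: 9 rotation-matrix elements (as passed to --rot for cloudcompare),
         assumed here to be in row-major order.
    """
    R = np.asarray(rot, dtype=np.float64).reshape(3, 3)  # assumed row-major
    t = np.asarray(pos, dtype=np.float64)
    return points @ R.T + t  # rotate each point, then translate

# Hypothetical usage with a random point cloud:
pts = np.random.rand(100, 3)
aligned = manhattan_align(pts, pos=[25.6, 0.0, -12.0],
                          rot=[0.923, 0.0, 0.385,
                               0.0,   1.0, 0.0,
                               -0.385, 0.0, 0.923])
```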

## Usage

  1. The data format is the same as for 3DGS, and the training command is basically the same as well; I did not make many custom changes. You can refer to the following commands (see arguments/parameters.py for more parameters). If you want to perform Manhattan alignment:

### Train your own dataset

Using threejs for Manhattan alignment

```shell
python train_vast.py -s datasets/xxx --exp_name xxx --manhattan --plantform threejs --pos xx xx xx --rot xx xx xx --num_gpus 1
```

Using cloudcompare for Manhattan alignment

```shell
# Fill --rot with the 9 elements of the rotation matrix
python train_vast.py -s datasets/xxx --exp_name xxx --manhattan --plantform cloudcompare --pos xx xx xx --rot xx xx xx xx xx xx xx xx xx --num_gpus 1
```
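If you copy the 4x4 transformation matrix that CloudCompare displays after applying the transformation, you can split it into the --pos and --rot arguments as sketched below. The row-major element order is my assumption, and the matrix values are example numbers:

```python
import numpy as np

# 4x4 transformation matrix as displayed by CloudCompare (example values)
T = np.array([
    [ 0.932, 0.0, 0.361, -62.528],
    [ 0.0,   1.0, 0.0,     0.0  ],
    [-0.361, 0.0, 0.932, -15.787],
    [ 0.0,   0.0, 0.0,     1.0  ],
])

rot = T[:3, :3].flatten()   # 9 rotation elements (row-major) for --rot
pos = T[:3, 3]              # translation column for --pos

print("--pos", " ".join(f"{v:.12f}" for v in pos))
print("--rot", " ".join(f"{v:.12f}" for v in rot))
```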

Train without Manhattan alignment:

```shell
python train_vast.py -s datasets/xxx --exp_name test
```

### Train Mill-19 and UrbanScene3D

I obtained the pre-processed data from https://vastgaussian.github.io/ and performed Manhattan alignment on it, so you can reuse my --pos and --rot parameters below.

```shell
# train rubble
python train_vast.py -s ../datasets/Mill19/rubble --exp_name rubble --manhattan --pos 25.607364654541 0.000000000000 -12.012700080872 --rot 0.923032462597 0.000000000000 0.384722054005 0.000000000000 1.000000000000 0.000000000000 -0.384722054005 0.000000000000 0.923032462597 --num_gpus 2

# train building
python train_vast.py -s ../datasets/Mill19/building --exp_name building --manhattan --pos -62.527942657471 0.000000000000 -15.786898612976 --rot 0.932374119759 0.000000000000 0.361494839191 0.000000000000 1.000000000000 0.000000000000 -0.361494839191 0.000000000000 0.932374119759 --num_gpus 2
```

## Additional Parameters

I added the following new parameters in arguments/parameters.py:

<details> <summary><span style="font-weight: bold;">New Parameters for train_vast.py</span></summary>

| Parameter | Description |
| --- | --- |
| `--exp_name` | Experiment name |
| `--manhattan` | `store_true`; whether to perform Manhattan alignment |
| `--plantform` | Platform used for Manhattan alignment; choose from `cloudcompare` or `threejs` |
| `--pos` | Translation vector |
| `--rot` | Rotation matrix (3 values for `threejs`, 9 elements for `cloudcompare`) |
| `--man_trans` | default `None`; transformation matrix |
| `--m_region` | Number of regions in the x direction |
| `--n_region` | Number of regions in the z direction |
| `--extend_rate` | Rate of boundary expansion |
| `--visible_rate` | Airspace-aware visibility rate |
| `--num_gpus` | default `1`; number of GPUs to train on |

</details>
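To illustrate what `--m_region`, `--n_region`, and `--extend_rate` control, here is a minimal sketch of the camera-position-based region partition described in the paper. This is a simplified illustration under my own assumptions (a uniform grid; the paper balances cameras per cell), not the code actually used in this repository:

```python
import numpy as np

def partition_regions(cam_xz, m_region, n_region, extend_rate=0.2):
    """Split the scene into an m x n grid of regions in the x-z plane.

    cam_xz: (N, 2) array of camera positions projected onto the x-z plane.
    Returns a list of (x_min, x_max, z_min, z_max) boundaries, with each
    cell expanded by extend_rate on every side.
    """
    x_min, z_min = cam_xz.min(axis=0)
    x_max, z_max = cam_xz.max(axis=0)
    xs = np.linspace(x_min, x_max, m_region + 1)  # cell edges along x
    zs = np.linspace(z_min, z_max, n_region + 1)  # cell edges along z
    regions = []
    for i in range(m_region):
        for j in range(n_region):
            w, h = xs[i + 1] - xs[i], zs[j + 1] - zs[j]
            regions.append((xs[i] - extend_rate * w, xs[i + 1] + extend_rate * w,
                            zs[j] - extend_rate * h, zs[j + 1] + extend_rate * h))
    return regions

# Hypothetical usage: 200 random camera positions, 2 x 4 grid
print(partition_regions(np.random.rand(200, 2) * 100, m_region=2, n_region=4))
```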

## Datasets

  1. UrbanScene3D: https://github.com/Linxius/UrbanScene3D

  2. Mill-19: https://opendatalab.com/OpenDataLab/Mill_19/tree/main/raw

  3. Test data for this implementation: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip

The pre-processed data for UrbanScene3D and Mill-19 is available at https://vastgaussian.github.io/.

## Contributors

Happily, we now have several contributors working on the project, and we welcome more to join us in improving it. Thank you all for your work.

<a href="https://github.com/VerseWei"> <img src="https://avatars.githubusercontent.com/u/102359772?v=4" height="75" width="75"/> </a> <a href="https://github.com/Livioni"> <img src="https://avatars.githubusercontent.com/u/52649461?v=4" height="75" width="75"/> </a>