[ECCV'24] HAC

Official PyTorch implementation of HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression.

Compress 3D Gaussian Splatting by 75× without fidelity drop!

Yihang Chen, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, Jianfei Cai

[Paper] [Arxiv] [Project] [Github]

Links

You are welcome to check out the series of works from our group on 3D radiance field representation compression, listed below:

Updates

🔥 8-Aug-2024: HAC now uses a CUDA-based codec instead of the original torchac, which reduces the codec runtime by more than 10× compared to the runtime reported in the paper!

Overview

<p align="left"> <img src="assets/teaser.png" width=80% height=80% class="center"> </p>

Our approach introduces a binary hash grid to establish continuous spatial consistency, allowing us to unveil the inherent spatial relations of anchors through a carefully designed context model. To facilitate entropy coding, we use Gaussian distributions to accurately estimate the probability of each quantized attribute, and propose an adaptive quantization module that enables high-precision quantization of these attributes for improved fidelity. Additionally, we incorporate an adaptive masking strategy to eliminate invalid Gaussians and anchors. Importantly, our work is the first to explore context-based compression for the 3DGS representation, achieving a remarkable size reduction.
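
To make the entropy-coding idea concrete, here is a minimal sketch of how the bit cost of a quantized attribute can be estimated from a predicted Gaussian distribution: the probability mass of each quantization bin is obtained from the Gaussian CDF and converted to bits. This only illustrates the general principle; the names (`attr`, `mu`, `sigma`, `q_step`) are hypothetical and the actual HAC code differs in detail.

```python
import torch

def estimated_bits(attr, mu, sigma, q_step):
    """Illustrative rate estimate: bits needed to entropy-code `attr`
    when it is modeled as Gaussian N(mu, sigma^2) and quantized with
    step size q_step (all names here are hypothetical)."""
    dist = torch.distributions.Normal(mu, sigma)
    # Probability mass of the quantization bin containing each attribute
    prob = dist.cdf(attr + 0.5 * q_step) - dist.cdf(attr - 0.5 * q_step)
    prob = torch.clamp(prob, min=1e-9)      # guard against log(0)
    return (-torch.log2(prob)).sum()        # total estimated bits
```

During training, such a bit estimate can serve as the rate term of a rate-distortion loss; at coding time the same Gaussian CDFs drive the arithmetic coder.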

Performance

<p align="left"> <img src="assets/main_performance.png" width=80% height=80% class="center"> </p>

Installation

We tested our code on a server with Ubuntu 20.04.1, CUDA 11.8, and gcc 9.4.0.

  1. Unzip the submodules:

```
cd submodules
unzip diff-gaussian-rasterization.zip
unzip gridencoder.zip
unzip simple-knn.zip
unzip arithmetic.zip
cd ..
```

  2. Install the environment:

```
conda env create --file environment.yml
conda activate HAC_env
```

Data

First, create a data/ folder inside the project path:

```
mkdir data
```

The data structure will be organised as follows:

```
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
...
```
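
As a quick sanity check, the hypothetical snippet below verifies that each scene folder matches the layout above (an `images/` folder with frames and a `sparse/0/` COLMAP model); this script is not part of the repository.

```python
import sys
from pathlib import Path

def scene_ok(scene: Path) -> bool:
    """Return True if the scene folder follows the expected layout above."""
    images = scene / "images"
    return images.is_dir() and any(images.iterdir()) and (scene / "sparse" / "0").is_dir()

if __name__ == "__main__":
    dataset = Path(sys.argv[1])                     # e.g. data/dataset_name
    for scene in sorted(p for p in dataset.iterdir() if p.is_dir()):
        print(f"{scene.name}: {'ok' if scene_ok(scene) else 'missing images/ or sparse/0/'}")
```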

Public Data (we follow the data preparation suggestions from Scaffold-GS)

Custom Data

For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the data/ folder following the structure above.
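
If you prefer to drive COLMAP from Python, the sketch below shows one possible pipeline using the standard `colmap` command-line interface so that the outputs land in the layout above; the helper itself is hypothetical and not part of HAC.

```python
import subprocess
from pathlib import Path

def run_colmap(scene: Path):
    """Hypothetical COLMAP SfM pipeline producing images/ + sparse/0/ as above.
    Assumes the standard `colmap` CLI is installed and frames are in images/."""
    db, images, sparse = scene / "database.db", scene / "images", scene / "sparse"
    sparse.mkdir(exist_ok=True)
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db), "--image_path", str(images)], check=True)
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db)], check=True)
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db), "--image_path", str(images),
                    "--output_path", str(sparse)], check=True)   # writes sparse/0/

run_colmap(Path("data/dataset_name/scene1"))
```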

Training

To train scenes, we provide the following training scripts:

Run them with:

```
python run_shell_xxx.py
```

The code will automatically run the entire pipeline: training, encoding, decoding, and testing.
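
For orientation, such a launcher usually just loops over scenes and invokes the training entry point; the sketch below is a hypothetical example (the scene names, script name, and flags are assumptions; consult the shipped run_shell_*.py files for the real interface).

```python
import os

# Hypothetical launcher in the spirit of run_shell_xxx.py: loop over scenes
# and call the training entry point once per scene. Scene names, the script
# name, and the flags below are assumptions, not the repository's exact CLI.
scenes = ["scene1", "scene2"]
for scene in scenes:
    cmd = (f"python train.py -s data/dataset_name/{scene} "
           f"-m outputs/{scene}")
    print(cmd)
    os.system(cmd)
```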

Contact

Citation

If you find our work helpful, please consider citing:

@inproceedings{hac2024,
  title={HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression},
  author={Chen, Yihang and Wu, Qianyi and Lin, Weiyao and Harandi, Mehrtash and Cai, Jianfei},
  booktitle={European Conference on Computer Vision},
  year={2024}
}

LICENSE

Please follow the LICENSE of 3D-GS.

Acknowledgement