# Compressing Volumetric Radiance Fields to 1 MB
Paper
Update: We have uploaded our compressed models to ModelScope, so you can test the models and render videos easily.
Note: this repository contains only VQ-TensoRF. For VQ-DVGO, please refer to VQRF.
## Setup

- Download datasets: NeRF, NSVF, T&T (masked), LLFF.
- Install the required libraries; please refer to TensoRF. Make sure to install versions of PyTorch and torch_scatter that match your machine.
Directory structure for the datasets:

```
data
├── nerf_synthetic   # Link: https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
│   └── [chair|drums|ficus|hotdog|lego|materials|mic|ship]
│       ├── [train|val|test]
│       │   └── r_*.png
│       └── transforms_[train|val|test].json
│
├── Synthetic_NSVF   # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip
│   └── [Bike|Lifestyle|Palace|Robot|Spaceship|Steamtrain|Toad|Wineholder]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0_train|1_val|2_test]_*.png
│       └── pose
│           └── [0_train|1_val|2_test]_*.txt
│
├── nerf_llff_data   # Link: https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
│   └── [fern|flower|fortress|horns|leaves|orchids|room|trex]
│       ├── poses_bounds.npy
│       └── images
│
└── TanksAndTemple   # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip
    └── [Barn|Caterpillar|Family|Ignatius|Truck]
        ├── intrinsics.txt
        ├── rgb
        │   └── [0|1|2]_*.png
        └── pose
            └── [0|1|2]_*.txt
```
## Training & VectQuantize & Testing

The training script is `vectquant.py`. For example, to train a VectQuantized model on the synthetic dataset:
```shell
python vectquant.py --config configs/vq/syn.txt --datadir {syn_dataset_dir}/hotdog --expname hotdog --basedir ./log_reimp/syn --render_path 0 --render_only 0 --ckpt ./log_reimp/syn/hotdog/hotdog.th
```
The training script runs in three steps:

- Step 1: Train a baseline model and save its checkpoint (following the vanilla TensoRF training pipeline).
- Step 2: Train a VectQuantized model from the baseline checkpoint of Step 1 and save the VectQuantized checkpoint.
- Step 3: Test the VectQuantized checkpoint from Step 2.
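Step 2 is where the compression happens: grid features are replaced by indices into a small codebook, so only the indices and the codebook need to be stored. As a rough, hypothetical illustration of the idea, here is a plain k-means vector quantizer over toy feature vectors. The repository learns its codebook during training (with importance-aware sampling), so this is only a sketch of the principle, not the actual implementation:

```python
import numpy as np

def quantize(features, k=64, iters=10, seed=0):
    """Toy k-means vector quantization of feature vectors.

    Returns a (k, dim) codebook and a per-vector index array; storing one
    small index per vector plus the codebook is far cheaper than storing
    every raw feature vector.
    """
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size=k, replace=False)].copy()
    assign = np.zeros(len(features), dtype=np.int64)
    for _ in range(iters):
        # Assign each feature to its nearest codeword (squared L2 distance).
        dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned features.
        for c in range(k):
            members = features[assign == c]
            if len(members) > 0:
                codebook[c] = members.mean(axis=0)
    return codebook, assign

feats = np.random.default_rng(1).standard_normal((4096, 27)).astype(np.float32)
codebook, idx = quantize(feats)
dequantized = codebook[idx]  # reconstruction used at render time
```

With `k=64`, each 27-dimensional feature collapses to a single 6-bit index, which is the kind of rate saving that makes 1 MB models possible.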
For more options, refer to `opt.py`.
### Autotask for a dataset

```shell
python autotask_vq.py -g "0 1 2 3" --dataset {dataset_name} --suffix v0
```

Modify your data directory in `DatasetSetting`. Set `dataset_name` from the choices `['syn', 'nsvf', 'tt', 'llff']`, and set the `-g` option according to the available GPUs on your machine.
Note: when running the autotask script on multiple GPUs, disk I/O for data loading may become the bottleneck.
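For intuition, launchers of this kind typically shard scenes across GPUs round-robin, pinning each job to a device via `CUDA_VISIBLE_DEVICES`. The sketch below is hypothetical (the scene list and flags are illustrative, and `autotask_vq.py`'s real options differ); it only shows the scheduling idea:

```python
import itertools

def schedule(scenes, gpus):
    """Round-robin scene-to-GPU assignment; returns shell commands that a
    launcher could run in parallel. Flags here are illustrative only."""
    return [
        f"CUDA_VISIBLE_DEVICES={gpu} python vectquant.py "
        f"--config configs/vq/syn.txt --expname {scene}"
        for scene, gpu in zip(scenes, itertools.cycle(gpus))
    ]

cmds = schedule(["chair", "drums", "ficus"], gpus=[0, 1])
for c in cmds:
    print(c)
```

With more scenes than GPUs, each device ends up with a queue of jobs, which is why all workers can stall on the same disk during data loading.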
### Testing the VectQuantized model

```shell
python eval_vq_only.py --autotask --config configs/vq/syn.txt --datadir {syn_dataset_dir} --ckpt {VQ_model_checkpoint}
```
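Evaluation reports image-quality metrics for the quantized model, PSNR being the standard one. For reference, PSNR between a rendering and ground truth (images scaled to [0, 1]) is computed as follows; this is a generic sketch, not the repository's exact evaluation code:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

gt = np.zeros((8, 8, 3))
pred = np.full((8, 8, 3), 0.1)   # uniform 0.1 error -> MSE = 0.01
print(round(psnr(pred, gt), 2))  # -> 20.0
```

Comparing the quantized model's PSNR against the Step 1 baseline shows how much quality the compression costs.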
## Acknowledgements

This repository uses code from the following repositories.
## Citation

If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{li2023compressing,
  title={Compressing volumetric radiance fields to 1 mb},
  author={Li, Lingzhi and Shen, Zhen and Wang, Zhongshu and Shen, Li and Bo, Liefeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4222--4231},
  year={2023}
}
```