# Streamable Neural Fields
Paper link
Junwoo Cho*, Seungtae Nam*, Daniel Rho, Jong Hwan Ko, Eunbyung Park†<br> * Equal contribution, alphabetically ordered.<br> † Corresponding author.
European Conference on Computer Vision (ECCV), 2022
## Overview
<img src="https://user-images.githubusercontent.com/94037424/188373585-3ad09a56-9bc5-497c-8b11-65a0aa82b5fa.png" width="80%" height="80%">

"Berliner Philharmoniker" © Stephan Rabold
## 0. Requirements
Set up a conda environment using the commands below:
```bash
conda env create -f environment.yml
conda activate snf
```
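If the environment was created successfully, a quick check along the lines below confirms the deep learning backend is importable. Note that PyTorch is an assumption here; this README does not name the framework, so adjust the import if the project uses something else.

```python
# Optional sanity check (assumption: the environment is PyTorch-based,
# which is not stated in this README).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```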
## 1. Dataset
Download the Kodak dataset from here.

Download the UVG dataset from here. When downloading the UVG videos, use the following version:
- Resolution: 1080p
- Bit depth: 8
- Format: AVC
- Container: MP4

Download the 3D point cloud dataset from here.
The `data/` directory must be placed in your working directory, with the following structure:
```
data/
    kodak/
        kodim01.png
        ...
        kodim24.png
    shape/
        armadillo.xyz
        dragon.xyz
        happy_buddha.xyz
    uvg/
        Beauty.mp4
        ...
        YachtRide.mp4
```
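A quick way to confirm the files landed where the scripts expect them is a small check like the sketch below. It is not part of the repository, and the `.xyz` parsing assumes plain whitespace-separated coordinates per line, which this README does not specify.

```python
# Optional sketch (not part of the repository): verify the expected layout
# and load one point cloud. Assumes .xyz files store one whitespace-separated
# point per line, which is the common convention but not confirmed here.
from pathlib import Path

import numpy as np

for rel in ("kodak/kodim01.png", "shape/armadillo.xyz", "uvg/Beauty.mp4"):
    path = Path("data") / rel
    print(path, "found" if path.exists() else "MISSING")

points = np.loadtxt("data/shape/armadillo.xyz")
print("armadillo.xyz:", points.shape)  # e.g. (N, 3) for plain "x y z" rows
```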
## 2. Reproducing experiments
Run the commands below.
### Image spectral growing
```bash
bash scripts/train_image_spectral.sh
```

### Image spatial growing
```bash
bash scripts/train_image_spatial.sh
```

### Video temporal growing
```bash
bash scripts/train_video_temporal.sh
```

### SDF spectral growing
```bash
bash scripts/train_sdf_spectral.sh
```
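All four experiments build on the same basic operation: fitting a signal with a coordinate network. As a rough illustration only (not the repository's training code; layer sizes, loss, and step count are arbitrary choices for this sketch, and the spectral/spatial/temporal growing schedules live inside the scripts above), fitting RGB values from pixel coordinates looks like this:

```python
# Toy illustration only -- NOT the repository's training code. It shows the
# basic operation the scripts build on: regressing RGB values from pixel
# coordinates with a small MLP. Network size, learning rate, and step count
# are arbitrary choices for this sketch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: normalized (x, y) coordinates and their RGB targets.
coords = torch.rand(4096, 2)
colors = torch.rand(4096, 3)

for step in range(500):
    optimizer.zero_grad()
    loss = ((model(coords) - colors) ** 2).mean()
    loss.backward()
    optimizer.step()

print("final MSE:", loss.item())
```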
## 3. Results
You can find both qualitative and quantitative results in the `results/` directory.
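For the image experiments, the usual quantitative metric is PSNR. A helper along these lines can reproduce that number from a saved reconstruction; the output file name is hypothetical, and this is not the repository's evaluation code.

```python
# Hypothetical example (paths and helper are illustrative, not part of the
# repository): compute PSNR between a Kodak image and a saved reconstruction.
import numpy as np
from PIL import Image

def psnr(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """PSNR in dB between two uint8 images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

ref = np.array(Image.open("data/kodak/kodim01.png"))
rec = np.array(Image.open("results/kodim01_recon.png"))  # hypothetical output path
print(f"PSNR: {psnr(ref, rec):.2f} dB")
```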
## Citation
```bibtex
@inproceedings{cho2022streamable,
  title={Streamable neural fields},
  author={Cho, Junwoo and Nam, Seungtae and Rho, Daniel and Ko, Jong Hwan and Park, Eunbyung},
  booktitle={European Conference on Computer Vision},
  pages={595--612},
  year={2022},
  organization={Springer}
}
```