💥 Instant Uncertainty Calibration of NeRFs Using a Meta-Calibrator (ECCV 2024)

Niki Amini-Naieni, Tomas Jakab, Andrea Vedaldi, Ronald Clark

Official PyTorch implementation of Instant Uncertainty Calibration of NeRFs Using a Meta-Calibrator. Full details can be found in the paper: [Paper] [Project page].

<img src=img/teaser.png width="50%"/>

Contents

* Preparation
* Construct Calibration Curves [Optional]
* Train Meta-Calibrator
* Calibrate Uncertainty and Calculate Final Metrics
* Citation
* Acknowledgements

Preparation

1. Clone Repository

git clone https://github.com/niki-amini-naieni/instantcalibration.git

2. Download Dataset

Please use this download link to download the LLFF dataset from the NeRF repository. Unzip the dataset folder (named llff-data.zip) into the instantcalibration folder so that your directory matches the structure below.

instantcalibration
└── data
    └── nerf_llff_data
        ├── fern
        ├── flower
        ├── fortress
        ...

3. Set Up Anaconda Environment

The following commands create a suitable Anaconda environment for running the code. To produce the results reported here, we used Anaconda version 2022.10.

conda create -n instantcalibration python=3.7
conda activate instantcalibration
cd instantcalibration
pip install jax==0.2.16
pip install jaxlib==0.1.68+cuda110 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
pip install -r requirements.txt
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python

4. Download Pre-Trained Weights

Construct Calibration Curves [Optional]

To train the meta-calibrator, we used calibration data from 30 scenes. We provide this data in the scenes folder. Below, we provide example code and instructions for generating this data for the LLFF scenes. The calibration curves constructed with this code may not exactly match those in the scenes folder, because the calibration data for some scenes was sub-sampled for efficiency.
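As a rough illustration of what a calibration curve contains (a sketch, not this repository's exact implementation; the function and variable names here are hypothetical), the curve maps each confidence level p to the empirical fraction of pixels whose error falls inside the central p-credible interval of the predicted per-pixel Gaussian:

```python
import numpy as np
from statistics import NormalDist  # stdlib inverse Gaussian CDF

def calibration_curve(errors, sigmas, levels=np.linspace(0.05, 0.95, 19)):
    """Empirical coverage of central Gaussian credible intervals.

    errors: |predicted RGB - ground-truth RGB| per pixel (flattened)
    sigmas: predicted standard deviations per pixel (flattened)
    Returns (levels, coverage): for each confidence level p, the fraction
    of pixels whose error lies inside the central p-interval of N(0, sigma^2).
    """
    z = np.array([NormalDist().inv_cdf(0.5 + p / 2.0) for p in levels])
    coverage = np.array([(errors <= zi * sigmas).mean() for zi in z])
    return levels, coverage
```

For a perfectly calibrated model, coverage equals the confidence level, so the curve lies on the diagonal; deviations from the diagonal are what the meta-calibrator corrects.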

Train Meta-Calibrator
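The meta-calibrator predicts a scene's calibration curve without needing held-out calibration views. As a loose sketch of the general idea only (this is not the paper's architecture; all shapes, features, and names below are made up for illustration), one could compress the training scenes' calibration curves with PCA and regress the PCA coefficients from per-scene features:

```python
import numpy as np

# Hypothetical setup: 30 training scenes, each with a calibration curve
# sampled at 19 confidence levels, plus a small per-scene feature vector.
rng = np.random.default_rng(0)
curves = rng.random((30, 19)).cumsum(axis=1)  # toy monotone "curves"
curves /= curves[:, -1:]                      # normalize to end at 1
feats = rng.standard_normal((30, 4))          # toy scene features

# 1) Low-dimensional basis for calibration curves via PCA (SVD).
mean = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean, full_matrices=False)
k = 3
coeffs = (curves - mean) @ Vt[:k].T           # per-scene PCA coefficients

# 2) Linear map from scene features to PCA coefficients.
X = np.hstack([feats, np.ones((30, 1))])      # add bias column
W, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

def predict_curve(f):
    """Predict a calibration curve for a new scene from its features."""
    c = np.append(f, 1.0) @ W
    return mean + c @ Vt[:k]
```

The low-dimensional representation is what makes prediction cheap: instead of regressing 19 curve values directly, only k coefficients are predicted and the curve is reconstructed from the basis.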

Calibrate Uncertainty and Calculate Final Metrics

To reproduce the final metrics in the paper, run the following command for each scene you would like to test, replacing [scene_name] with the name of the scene.

python get_final_metrics.py --scene [scene_name] --gin_config configs/llff_[scene_name].gin --test_model_dir checkpoints/llff3/[scene_name]

Below, we provide a table of results for each scene that you can compare against. We reproduced the results from the main paper after refactoring our code; they differ slightly from the paper because of random sampling in the meta-calibrator training code. If you have followed all previous steps correctly, you should obtain results very close to both the table below and the main paper.

| Scene | PSNR | LPIPS | Cal Err (Uncal) RGB Avg. | Cal Err (Meta-Cal) RGB Avg. | NLL (Uncal) | NLL (Meta-Cal) |
|---|---|---|---|---|---|---|
| Flower | 20.25 | 0.216 | 0.0112 | 0.0012 | -0.10 | -0.09 |
| Room | 20.19 | 0.222 | 0.0245 | 0.0140 | 0.52 | 0.12 |
| Orchids | 16.10 | 0.225 | 0.0070 | 0.0004 | -0.23 | -0.44 |
| Trex | 20.39 | 0.185 | 0.0097 | 0.0005 | -0.33 | -0.48 |
| Leaves | 16.19 | 0.217 | 0.0024 | 0.0042 | -0.72 | -0.68 |
| Horns | 17.79 | 0.304 | 0.0085 | 0.0017 | -0.59 | -0.84 |
| Fortress | 23.19 | 0.236 | 0.0013 | 0.0039 | -1.30 | -1.33 |
| Fern | 20.59 | 0.277 | 0.0041 | 0.0004 | -1.34 | -1.39 |
| Average | 19.38 | 0.235 | 0.0086 | 0.0033 | -0.51 | -0.64 |
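For reference, the PSNR and Gaussian negative log-likelihood metrics above can be computed as follows (a simplified sketch with hypothetical function names, not the repository's exact metric code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def gaussian_nll(errors, sigmas):
    """Mean negative log-likelihood of per-pixel errors under N(0, sigma^2)."""
    return np.mean(0.5 * np.log(2.0 * np.pi * sigmas ** 2)
                   + errors ** 2 / (2.0 * sigmas ** 2))
```

Lower NLL is better; it can be negative (as in the table) when the predicted standard deviations are small and the errors are small relative to them.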

Citation

@inproceedings{AminiNaieni24,
    title={Instant Uncertainty Calibration of {NeRFs} Using a Meta-Calibrator},
    author={Niki Amini-Naieni and Tomas Jakab and Andrea Vedaldi and Ronald Clark},
    booktitle={ECCV},
    year={2024}
}

Acknowledgements

This repository uses code from the FlipNeRF repository.