X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360° Insufficient RGB-D Views

Accepted to WACV 2023. Check out our paper on arXiv.

<div align="center"> <img src="./assets/insufficient.jpg" width="80%"> </div>

Requirements

Installation

  1. Install PyTorch

  2. Install PyTorch Scatter

  3. Install MinkowskiEngine

  4. Install X-NeRF

pip install -e .
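
For steps 1-3 above, a typical set of install commands looks like the sketch below; the exact package versions, CUDA wheel index, and build options are assumptions and should be matched to your environment (see each project's install guide).

# 1. PyTorch (choose the build matching your CUDA version; see pytorch.org)
pip install torch torchvision
# 2. PyTorch Scatter (the wheel index must match your PyTorch/CUDA version; torch 1.12 + CUDA 11.3 is shown only as an example)
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
# 3. MinkowskiEngine (requires a CUDA toolchain; see the MinkowskiEngine README for system dependencies)
pip install MinkowskiEngine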

Quick Start

# training for single-scene
CUDA_VISIBLE_DEVICES=0 python scripts/train.py dataset=single_scene
# training for multi-scene
CUDA_VISIBLE_DEVICES=0 python scripts/train.py dataset=multi_scene
# eval
CUDA_VISIBLE_DEVICES=0 python scripts/eval.py dataset=multi_scene

Please refer to ./configs/ for more details.

Note that training X-NeRF may consume a large amount of GPU memory; we train on an NVIDIA A100. If you run into out-of-memory (OOM) errors, reduce the batch size. Multi-GPU training is not supported yet.
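
If you do hit OOM, the batch size can be lowered through the same key=value command-line overrides shown above; the config key used below is hypothetical, so check the files under ./configs/ for the actual option name and its default value.

# reduce the batch size to fit on a smaller GPU (the key "batch_size" is hypothetical; see ./configs/)
CUDA_VISIBLE_DEVICES=0 python scripts/train.py dataset=single_scene batch_size=2048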

Dataset

You can find our dataset in ./data/. The folder contains 10 scenes, each with 7 views. In our paper, scenes 1-6 are treated as seen scenes and scenes 7-10 as novel scenes for zero-shot cross-scene evaluation. View 6 is used as the novel view in all scenes. For more details on how the data is loaded and processed, please refer to XNeRF_SingleScene.py.

Pre-trained Weights

You can download our pre-trained weights from Google Drive or Baidu Pan. To load a checkpoint, set ckpt_path={path/to/weight} in the command.
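
For example, to evaluate the multi-scene model with a downloaded checkpoint (the path below is a placeholder):

# evaluate with a downloaded checkpoint; replace path/to/weight with the actual file location
CUDA_VISIBLE_DEVICES=0 python scripts/eval.py dataset=multi_scene ckpt_path=path/to/weight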

TODOs

Acknowledgement

The CUDA extension for rendering is adapted from DVGO, and the spherical harmonics (SH) implementation is adapted from PlenOctrees.