ngp_pl

Advertisement: Check out the integrated project nerfstudio, which bundles many recent improvements on NeRF-related methods, including instant-ngp!

<!--
### Update 2022 July 29th: GUI prototype is available now (see following video)!
### Update 2022 July 24th: Training on custom data is possible now!
### Update 2022 July 14th: Multi-GPU training is available now! With multiple GPUs, now you can achieve high quality under a minute!
-->

Instant-ngp (NeRF only) in PyTorch + CUDA, trained with PyTorch Lightning (high quality with high speed). This repo aims to provide a concise PyTorch interface to facilitate future research. I would be grateful if you share it (and a citation is highly appreciated)!

:paintbrush: Gallery

https://user-images.githubusercontent.com/11364490/181671484-d5e154c8-6cea-4d52-94b5-1e5dd92955f2.mp4

Other representative videos are in GALLERY.md

:computer: Installation

This implementation has strict requirements due to its dependencies on other libraries. If you encounter installation problems caused by a hardware/software mismatch, I'm afraid I have no intention of supporting other platforms (you are welcome to contribute).

Hardware

Software

:books: Supported Datasets

  1. NSVF data

Download preprocessed datasets (Synthetic_NeRF, Synthetic_NSVF, BlendedMVS, TanksAndTemples) from NSVF. Do not change the folder names, since there are some hard-coded fixes in my dataloader.

  2. NeRF++ data

Download data from here.

  3. Colmap data

For custom data, run colmap and get a folder sparse/0 under which there are cameras.bin, images.bin and points3D.bin (a sketch of a typical colmap run is given after this list). Other datasets in colmap format are also supported.

  4. RTMV data

Download data from here. To convert the HDR images into LDR images for training, run python misc/prepare_rtmv.py <path/to/RTMV>; it will create an images/ folder under each scene folder and use these images for training (and testing).
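For the colmap case above, here is a minimal sketch of how a sparse/0 folder is typically produced with the colmap CLI. The paths are placeholders and the default settings are only a starting point; this is standard colmap usage, not part of this repo:

```bash
SCENE=<path/to/your/scene>   # must contain an images/ folder with the photos

# 1. Extract features from every image into a new database.
colmap feature_extractor \
    --database_path $SCENE/database.db \
    --image_path $SCENE/images

# 2. Match features between image pairs (exhaustive matching is the simplest choice).
colmap exhaustive_matcher \
    --database_path $SCENE/database.db

# 3. Sparse reconstruction; this writes cameras.bin, images.bin and points3D.bin
#    under $SCENE/sparse/0.
mkdir -p $SCENE/sparse
colmap mapper \
    --database_path $SCENE/database.db \
    --image_path $SCENE/images \
    --output_path $SCENE/sparse
```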

:key: Training

Quickstart: python train.py --root_dir <path/to/lego> --exp_name Lego

It will train the Lego scene for 30k steps (8192 rays per step) and perform one round of testing at the end. Training should finish within about 5 minutes (saving the test images is slow; add --no_save_test to disable it). The test PSNR will be shown at the end.
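The same quickstart, spelled out with the optional flag mentioned above (the path is a placeholder; see opt.py for the full list of options):

```bash
# Train the Lego scene for 30k steps and skip saving test images at the end.
python train.py \
    --root_dir <path/to/lego> \
    --exp_name Lego \
    --no_save_test
```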

More options can be found in opt.py.

For other public dataset training, please refer to the scripts under benchmarking.

:mag_right: Testing

Use test.ipynb to generate images. A pretrained Lego model is available here.

GUI usage: run python show_gui.py followed by the same hyperparameters used in training (dataset_name, root_dir, etc.) and add the checkpoint path with --ckpt_path <path/to/.ckpt>.
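For example, for the Lego scene trained above (the values are placeholders; pass exactly the hyperparameters you used for train.py, plus the checkpoint it saved):

```bash
# Launch the GUI with the training hyperparameters and the trained checkpoint.
python show_gui.py \
    --dataset_name <same_as_training> \
    --root_dir <path/to/lego> \
    --exp_name Lego \
    --ckpt_path <path/to/.ckpt>
```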

Comparison with torch-ngp and the paper

I compared the quality (average test PSNR on Synthetic-NeRF) and the inference speed (on the Lego scene) against the concurrent work torch-ngp (default settings) and the paper, all trained for about 5 minutes:

| Method            | avg PSNR | FPS  | GPU     |
| :---              | :---:    | :---:| :---:   |
| torch-ngp         | 31.46    | 18.2 | 2080 Ti |
| mine              | 32.96    | 36.2 | 2080 Ti |
| instant-ngp paper | 33.18    | 60   | 3090    |

As for quality, mine is slightly better than torch-ngp, but the result might fluctuate across different runs.

As for speed, mine is faster than torch-ngp, but still only about half as fast as instant-ngp. Speed depends on the scene (if most of the scene is empty, rendering is faster).

<p align="center">
  <img src="https://user-images.githubusercontent.com/11364490/176800109-38eb35f3-e145-4a09-8304-1795e3a4e8cd.png" width="45%">
  <img src="https://user-images.githubusercontent.com/11364490/176800106-fead794f-7e70-4459-b99e-82725fe6777e.png" width="45%">
  <br>
  <img src="https://user-images.githubusercontent.com/11364490/180444355-444676cf-2af2-49ad-9fe2-16eb1e6c4ef1.png" width="45%">
  <img src="https://user-images.githubusercontent.com/11364490/180444337-3df9f245-f7eb-453f-902b-0cb9dae60144.png" width="45%">
  <br>
  <sup>Left: torch-ngp. Right: mine.</sup>
</p>

:chart: Benchmarks

To run benchmarks, use the scripts under benchmarking.
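For example (the script name here is hypothetical; check the benchmarking folder for the actual script names and the arguments they expect):

```bash
# Hypothetical invocation; adapt the script name and data path to your setup.
bash benchmarking/benchmark_synthetic_nerf.sh <path/to/Synthetic_NeRF>
```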

The following are my results trained using 1 RTX 2080 Ti (qualitative results here):

<details>
<summary>Synthetic-NeRF</summary>

|               | Mic   | Ficus | Chair | Hotdog | Materials | Drums | Ship  | Lego  | AVG   |
| :---          | :---: | :---: | :---: | :---:  | :---:     | :---: | :---: | :---: | :---: |
| PSNR          | 35.59 | 34.13 | 35.28 | 37.35  | 29.46     | 25.81 | 30.32 | 35.76 | 32.96 |
| SSIM          | 0.988 | 0.982 | 0.984 | 0.980  | 0.944     | 0.933 | 0.890 | 0.979 | 0.960 |
| LPIPS         | 0.017 | 0.024 | 0.025 | 0.038  | 0.070     | 0.076 | 0.133 | 0.022 | 0.051 |
| FPS           | 40.81 | 34.02 | 49.80 | 25.06  | 20.08     | 37.77 | 15.77 | 36.20 | 32.44 |
| Training time | 3m9s  | 3m12s | 4m17s | 5m53s  | 4m55s     | 4m7s  | 9m20s | 5m5s  | 5m00s |
</details>

<details>
<summary>Synthetic-NSVF</summary>

|               | Wineholder | Steamtrain | Toad  | Robot | Bike  | Palace | Spaceship | Lifestyle | AVG   |
| :---          | :---:      | :---:      | :---: | :---: | :---: | :---:  | :---:     | :---:     | :---: |
| PSNR          | 31.64      | 36.47      | 35.57 | 37.10 | 37.87 | 37.41  | 35.58     | 34.76     | 35.80 |
| SSIM          | 0.962      | 0.987      | 0.980 | 0.994 | 0.990 | 0.977  | 0.980     | 0.967     | 0.980 |
| LPIPS         | 0.047      | 0.023      | 0.024 | 0.010 | 0.015 | 0.021  | 0.029     | 0.044     | 0.027 |
| FPS           | 47.07      | 75.17      | 50.42 | 64.87 | 66.88 | 28.62  | 35.55     | 22.84     | 48.93 |
| Training time | 3m58s      | 3m44s      | 7m22s | 3m25s | 3m11s | 6m45s  | 3m25s     | 4m56s     | 4m36s |
</details>

<details>
<summary>Tanks and Temples</summary>

|       | Ignatius | Truck | Barn  | Caterpillar | Family | AVG   |
| :---  | :---:    | :---: | :---: | :---:       | :---:  | :---: |
| PSNR  | 28.30    | 27.67 | 28.00 | 26.16       | 34.27  | 28.78 |
| *FPS  | 10.04    | 7.99  | 16.14 | 10.91       | 6.16   | 10.25 |

*Evaluated on test-traj

</details>

<details>
<summary>BlendedMVS</summary>

|               | *Jade | *Fountain | Character | Statues | AVG   |
| :---          | :---: | :---:     | :---:     | :---:   | :---: |
| PSNR          | 25.43 | 26.82     | 30.43     | 26.79   | 27.38 |
| **FPS         | 26.02 | 21.24     | 35.99     | 19.22   | 25.61 |
| Training time | 6m31s | 7m15s     | 4m50s     | 5m57s   | 6m48s |

*I manually switched the background from black to white, so the numbers aren't directly comparable to those in the papers.

**Evaluated on test-traj

</details>

TODO