nerf_pl

Update: NVIDIA open-sourced a lightning-fast version of NeRF: NGP. I re-implemented it in pytorch here. This version is ~100x faster than this repo, with better quality as well!

Update: an improved NSFF implementation for handling dynamic scenes is now available!

Update: NeRF-W (NeRF in the Wild) implementation is added to nerfw branch!

Update: The latest code (using the latest libraries) will be updated to the dev branch. The master branch remains in place to support the colab files. If you don't use colab, switching to the dev branch is recommended. Currently, only issues on the dev and nerfw branches will be considered.

:gem: Project page (live demo!)

Unofficial implementation of NeRF (Neural Radiance Fields) using pytorch (pytorch-lightning). This repo doesn't aim at reproducibility, but at providing a simpler and faster training procedure (and simpler code with detailed comments to help understand the work). Moreover, I try to open up more opportunities by integrating this algorithm into game engines like Unity.

Official implementation: nerf. Reference pytorch implementation: nerf-pytorch

Recommended reading: awesome-NeRF, a detailed list of NeRF extensions.

:milky_way: Features

You can find the Unity project, including mesh, mixed reality, and volume rendering, here! See README_Unity for how to generate your own data for Unity rendering!

:beginner: Tutorial

What can NeRF do?

<img src="https://user-images.githubusercontent.com/11364490/82124460-1ccbbb80-97da-11ea-88ad-25e22868a5c1.png" style="max-width:100%">

Tutorial videos

<a href="https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV"> <img src="https://user-images.githubusercontent.com/11364490/80913471-d5781080-8d7f-11ea-9f72-9d68402b8271.png"> </a>

:computer: Installation

Hardware

Software

:key: Training

Please see each subsection for training on different datasets. Available training datasets:

Blender

<details> <summary>Steps</summary>

Data download

Download nerf_synthetic.zip from here

Training model

Run (example)

python train.py \
   --dataset_name blender \
   --root_dir $BLENDER_DIR \
   --N_importance 64 --img_wh 400 400 --noise_std 0 \
   --num_epochs 16 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
   --exp_name exp

These parameters are chosen to best mimic the training settings in the original repo. See opt.py for all configurations.

NOTE: the above configuration doesn't work for some scenes (e.g. drums, ship). In that case, consider increasing the batch_size or changing the optimizer to radam. I managed to train on all scenes with these modifications.
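For example, a possible adjusted command for such scenes (a sketch only: the batch size below is illustrative and should be tuned to your GPU memory; the experiment name is a placeholder):

python train.py \
   --dataset_name blender \
   --root_dir $BLENDER_DIR \
   --N_importance 64 --img_wh 400 400 --noise_std 0 \
   --num_epochs 16 --batch_size 2048 \
   --optimizer radam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
   --exp_name exp_drums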

You can monitor the training process by running tensorboard --logdir logs/ and going to localhost:6006 in your browser.

</details>

LLFF

<details> <summary>Steps</summary>

Data download

Download nerf_llff_data.zip from here

Training model

Run (example)

python train.py \
   --dataset_name llff \
   --root_dir $LLFF_DIR \
   --N_importance 64 --img_wh 504 378 \
   --num_epochs 30 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
   --exp_name exp

These parameters are chosen to best mimic the training settings in the original repo. See opt.py for all configurations.

You can monitor the training process by running tensorboard --logdir logs/ and going to localhost:6006 in your browser.

</details>

Your own data

<details> <summary>Steps</summary>
  1. Install COLMAP following the installation guide
  2. Prepare your images in a folder (around 20 to 30 for forward-facing scenes, and 40 to 50 for 360 inward-facing scenes)
  3. Clone LLFF and run python img2poses.py $your-images-folder
  4. Train the model using the same command as in LLFF. If the scene is captured in a 360 inward-facing manner, add the --spheric argument (see the command sketch after this list).
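For step 4, a command sketch for a 360 inward-facing capture (the image resolution and experiment name are placeholders; drop --spheric for forward-facing scenes):

python train.py \
   --dataset_name llff \
   --root_dir $your-images-folder \
   --N_importance 64 --img_wh 504 378 \
   --num_epochs 30 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
   --spheric \
   --exp_name my_scene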

For more details on training a good model, please see the video here.

</details>

Pretrained models and logs

Download the pretrained models and training logs in release.

Comparison with other repos

|             | training GPU memory (GB) | speed (1 step) |
| ----------- | ------------------------ | -------------- |
| Original    | 8.5                      | 0.177s         |
| Ref pytorch | 6.0                      | 0.147s         |
| This repo   | 3.2                      | 0.12s          |

The speed is measured on 1 RTX2080Ti. A detailed profile can be found in release. Training memory is largely reduced, since the original repo loads the whole data to GPU at the beginning, while we only pass batches to GPU at each step.
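As a rough illustration of that difference (a minimal PyTorch sketch, not the actual code of this repo; the ray tensor shapes and sizes are placeholders):

import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative ray data kept in CPU RAM (in the real training this comes from
# the blender/llff dataset classes; the shapes here are only placeholders).
all_rays = torch.randn(1_000_000, 8)  # ray origins, directions, near/far
all_rgbs = torch.rand(1_000_000, 3)   # target pixel colors

loader = DataLoader(TensorDataset(all_rays, all_rgbs),
                    batch_size=1024, shuffle=True,
                    num_workers=4, pin_memory=True)

for rays, rgbs in loader:
    # Only this batch is copied to the GPU, so GPU memory holds roughly
    # batch_size rays at a time instead of the entire training set.
    rays = rays.to(device, non_blocking=True)
    rgbs = rgbs.to(device, non_blocking=True)
    # ... model forward pass, loss and optimizer step would go here ...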

:mag_right: Testing

See test.ipynb for a simple view synthesis and depth prediction on 1 image.

Use eval.py to create the whole sequence of moving views. E.g.

python eval.py \
   --root_dir $BLENDER \
   --dataset_name blender --scene_name lego \
   --img_wh 400 400 --N_importance 64 --ckpt_path $CKPT_PATH

IMPORTANT: Don't forget to add --spheric_poses if the model was trained under the --spheric setting!

It will create a folder results/{dataset_name}/{scene_name}, run inference on all test data, and finally create a gif out of the predictions.
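If you want to re-assemble the gif yourself (e.g. with a different frame rate), a minimal sketch using imageio; the frame filename pattern below is an assumption, so adjust it to the files you actually find in the results folder:

import glob
import imageio

# Assumed location and naming of the frames written by eval.py; change the
# glob pattern to match the actual output files.
frames = [imageio.imread(f) for f in sorted(glob.glob("results/blender/lego/*.png"))]
imageio.mimsave("results/blender/lego/lego.gif", frames, duration=0.04)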

Example of the lego scene using the pretrained model, and the reconstructed colored mesh (PSNR=31.39, paper=32.54):

<p> <img src="https://user-images.githubusercontent.com/11364490/79932648-f8a1e680-8488-11ea-98fe-c11ec22fc8a1.gif" width="200"> <img src="https://user-images.githubusercontent.com/11364490/80813179-822d8300-8c04-11ea-84e6-142f04714c58.png" width="200"> </p>

Example of the fern scene using the pretrained model:

(fern scene rendering gif)

Example of my own scene (Silica GGO figure) and the reconstructed colored mesh. Click the image to view the youtube video.

<p> <a href="https://youtu.be/yH1ZBcdNsUY"> <img src="https://user-images.githubusercontent.com/11364490/80279695-324d4880-873a-11ea-961a-d6350e149ece.gif" height="252"> </a> <img src="https://user-images.githubusercontent.com/11364490/80813184-83f74680-8c04-11ea-8606-40580f753355.png" height="252"> </p>

Portable scenes

The concept of NeRF is that the whole scene is compressed into a NeRF model, and then we can render from any pose we want. To render from plausible poses, we can leverage the training poses; therefore, you can generate a video with only the trained model and the poses (hence the name portable scenes). I provide my silica model in release, feel free to play around with it!

If you have trained some interesting scenes, you are also welcome to share the model (and the poses_bounds.npy) by sending me an email, or posting in issues! After all, a model is only around 5MB! Please run python utils/save_weights_only.py --ckpt_path $YOUR_MODEL_PATH to extract the final model.
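For reference, a minimal sketch of inspecting the shared poses, assuming poses_bounds.npy follows the standard LLFF layout (per image, a flattened 3x5 matrix holding the 3x4 camera-to-world pose plus a [height, width, focal] column, followed by the two depth bounds):

import numpy as np

# Standard LLFF layout assumed: each row holds 17 values.
poses_bounds = np.load("poses_bounds.npy")           # shape (N_images, 17)

poses_hwf = poses_bounds[:, :15].reshape(-1, 3, 5)
poses = poses_hwf[:, :, :4]                          # (N, 3, 4) camera-to-world matrices
hwf = poses_hwf[0, :, 4]                             # image height, width, focal length
bounds = poses_bounds[:, 15:]                        # (N, 2) near/far depth bounds

print(poses.shape, hwf, bounds.min(), bounds.max())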

:ribbon: Mesh

See README_mesh for reconstructing a colored mesh. Only supported for the blender dataset and 360 inward-facing data!

:warning: Notes on differences with the original repo

:mortar_board: COLAB

I also prepared colab notebooks that allow you to run the algorithm on any machine, without requiring a GPU.

Please see this playlist for the detailed tutorials.

:jack_o_lantern: SHOWOFF

We can incorporate ray tracing techniques into the volume rendering pipeline to achieve realistic scene editing (the following shows the materials scene with one object removed, and a mesh inserted and rendered with ray tracing). The code will not be released.

(scene editing result images)

With my integration in Unity, I can produce realistic mixed reality photos (note that my character casts a shadow on the scene, with zero post-capture image editing required):

(mixed reality result images)

BTW, I would like to visit the museum one day...

:book: Citation

If you use (part of) my code or find my work helpful, please consider citing

@misc{queianchen_nerf,
  author={Quei-An, Chen},
  title={Nerf_pl: a pytorch-lightning implementation of NeRF},
  url={https://github.com/kwea123/nerf_pl/},
  year={2020},
}