<p align="center"> <!-- community badges --> <a href="https://discord.gg/uMbNqcraFc"><img src="https://img.shields.io/badge/Join-Discord-blue.svg"/></a> <!-- doc badges --> <a href='https://plenoptix-nerfstudio.readthedocs-hosted.com/en/latest/?badge=latest'> <img src='https://readthedocs.com/projects/plenoptix-nerfstudio/badge/?version=latest' alt='Documentation Status' /> </a> <!-- pi package badge --> <a href="https://badge.fury.io/py/nerfstudio"><img src="https://badge.fury.io/py/nerfstudio.svg" alt="PyPI version"></a> <!-- code check badges --> <a href='https://github.com/nerfstudio-project/nerfstudio/actions/workflows/core_code_checks.yml'> <img src='https://github.com/nerfstudio-project/nerfstudio/actions/workflows/core_code_checks.yml/badge.svg' alt='Test Status' /> </a> <a href='https://github.com/nerfstudio-project/nerfstudio/actions/workflows/viewer_build_deploy.yml'> <img src='https://github.com/nerfstudio-project/nerfstudio/actions/workflows/viewer_build_deploy.yml/badge.svg' alt='Viewer build Status' /> </a> <!-- license badge --> <a href="https://github.com/nerfstudio-project/nerfstudio/blob/master/LICENSE"> <img alt="License" src="https://img.shields.io/badge/License-Apache_2.0-blue.svg"> </a> </p> <p align="center"> <!-- pypi-strip --> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://docs.nerf.studio/en/latest/_images/logo-dark.png"> <source media="(prefers-color-scheme: light)" srcset="https://docs.nerf.studio/en/latest/_images/logo.png"> <!-- /pypi-strip --> <img alt="nerfstudio" src="https://docs.nerf.studio/en/latest/_images/logo.png" width="400"> <!-- pypi-strip --> </picture> <!-- /pypi-strip --> </p> <!-- Use this for pypi package (and disable above). 
Hacky workaround --> <!-- <p align="center"> <img alt="nerfstudio" src="https://docs.nerf.studio/en/latest/_images/logo.png" width="400"> </p> --> <p align="center"> A collaboration friendly studio for NeRFs </p> <p align="center"> <a href="https://docs.nerf.studio"> <img alt="documentation" src="https://user-images.githubusercontent.com/3310961/194022638-b591ce16-76e3-4ba6-9d70-3be252b36084.png" width="150"> </a> <a href="https://viewer.nerf.studio/"> <img alt="viewer" src="https://user-images.githubusercontent.com/3310961/194022636-a9efb85a-14fd-4002-8ed4-4ca434898b5a.png" width="150"> </a> <a href="https://colab.research.google.com/github/nerfstudio-project/nerfstudio/blob/main/colab/demo.ipynb"> <img alt="colab" src="https://raw.githubusercontent.com/nerfstudio-project/nerfstudio/main/docs/_static/imgs/readme_colab.png" width="150"> </a> </p>

<img src="https://user-images.githubusercontent.com/3310961/194017985-ade69503-9d68-46a2-b518-2db1a012f090.gif" width="52%"/> <img src="https://user-images.githubusercontent.com/3310961/194020648-7e5f380c-15ca-461d-8c1c-20beb586defe.gif" width="46%"/>

About

Nerfstudio provides a simple API for the end-to-end process of creating, training, and testing NeRFs. The library supports a more interpretable implementation of NeRFs by modularizing each component. With more modular NeRFs, we hope to create a more user-friendly experience in exploring the technology. Nerfstudio is a contributor-friendly repo with the goal of building a community where users can more easily build on each other's contributions.

It’s as simple as plug and play with nerfstudio!

We are committed to providing learning resources to help you understand the basics of (if you're just getting started), and keep up-to-date with (if you're a seasoned veteran) all things NeRF. As researchers, we know just how hard it is to get onboarded with this next-gen technology. So we're here to help with tutorials, documentation, and more!

Have feature requests? Want to add your brand-spankin'-new NeRF model? Have a new dataset? We welcome any and all contributions! Please do not hesitate to reach out to the nerfstudio team with any questions via Discord.

We hope nerfstudio enables you to build faster :hammer: learn together :books: and contribute to our NeRF community :sparkling_heart:.

Quickstart

The quickstart will help you get started with the default vanilla NeRF trained on the classic Blender Lego scene. For more complex changes (e.g., running with your own data or setting up a new NeRF graph), please refer to our references.

1. Installation: Set up the environment

Prerequisites

CUDA must be installed on the system. This library has been tested with version 11.3. You can find more information about installing CUDA here.

Create environment

Nerfstudio requires python >= 3.7. We recommend using conda to manage dependencies. Make sure to install conda before proceeding.

conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip

Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.3) and tiny-cuda-nn:

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
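Before continuing, it can be worth a quick sanity check that PyTorch was installed with CUDA support. A minimal check, assuming the conda environment above is active and a CUDA-capable GPU is present:

```shell
# Print the installed torch version and whether CUDA is usable.
# If this prints "False", revisit the CUDA/PyTorch install steps above.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```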

Installing nerfstudio

Easy option:

pip install nerfstudio

OR if you want the latest and greatest:

git clone https://github.com/nerfstudio-project/nerfstudio.git
cd nerfstudio
pip install --upgrade pip setuptools
pip install -e .

OR if you want to skip all installation steps and directly start using nerfstudio, use the docker image:

See Installation - Use docker image.

2. Training your first model!

The following will train a nerfacto model, our recommended model for real-world scenes.

# Download some test data:
ns-download-data nerfstudio --capture-name=poster
# Train model
ns-train nerfacto --data data/nerfstudio/poster
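Beyond the defaults, a few options are commonly adjusted on the first run. A sketch, noting that exact flag names can differ across nerfstudio versions (confirm with `ns-train nerfacto --help`):

```shell
# Train with an explicit iteration budget and viewer port
# (--max-num-iterations and --viewer.websocket-port are examples of
# commonly tweaked options; verify they exist in your installed version).
ns-train nerfacto --data data/nerfstudio/poster \
    --max-num-iterations 30000 \
    --viewer.websocket-port 7007
```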

If everything works, you should see training progress like the following:

<p align="center"> <img width="800" alt="image" src="https://user-images.githubusercontent.com/3310961/202766069-cadfd34f-8833-4156-88b7-ad406d688fc0.png"> </p>

Navigating to the link printed at the end of the terminal output will load the web viewer. If you are running on a remote machine, you will need to port forward the websocket port (which defaults to 7007).
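For remote training, a standard SSH local port forward is one way to reach the viewer; `<user>` and `<remote-host>` below are placeholders for your own machine:

```shell
# Forward the viewer's websocket port (7007 by default) from the remote
# training machine to your local machine, then open the viewer link locally.
ssh -L 7007:localhost:7007 <user>@<remote-host>
```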

<p align="center"> <img width="800" alt="image" src="https://user-images.githubusercontent.com/3310961/202766653-586a0daa-466b-4140-a136-6b02f2ce2c54.png"> </p>

Resume from checkpoint / visualize existing run

It is possible to load a pretrained model by running

ns-train nerfacto --data data/nerfstudio/poster --trainer.load-dir {outputs/.../nerfstudio_models}

This will automatically start training. If you do not want it to train, add --viewer.start-train False to your training command.
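The `nerfstudio_models` directory lives under a timestamped run directory in `outputs/`. One way to locate the most recent checkpoint directory (the exact nesting depends on your experiment name and method, so adjust the glob if needed):

```shell
# List checkpoint directories newest-first and keep the first one.
# Assumes the common outputs/<experiment>/<method>/<timestamp>/ layout.
ls -td outputs/*/*/*/nerfstudio_models 2>/dev/null | head -n 1
```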

3. Exporting Results

Once you have a NeRF model you can either render out a video or export a point cloud.

Render Video

First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera keyframe. Continue moving to new viewpoints, adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER", which will display a modal that contains the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.

Other video export options are available; learn more by running:

ns-render --help
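For reference, a hypothetical render invocation looks roughly like the following; the paths are placeholders and the viewer's "RENDER" modal prints the exact command for your run, so prefer that over this sketch:

```shell
# Render a video along a saved camera path (flag names may differ by
# version; the config path and camera_path.json come from your own run).
ns-render --load-config outputs/.../config.yml \
    --traj filename \
    --camera-path-filename camera_path.json \
    --output-path renders/poster.mp4
```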

Generate Point Cloud

While NeRF models are not designed to generate point clouds, it is still possible. Navigate to the "EXPORT" tab in the 3D viewer and select "POINT CLOUD". If the crop option is selected, everything in the yellow square will be exported into a point cloud. Modify the settings as desired, then run the command at the bottom of the panel in your command line.

Alternatively, you can use the CLI without the viewer. Learn about the export options by running:

ns-export pointcloud --help
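As with rendering, the viewer's "EXPORT" panel prints the exact command for your run; the following is only a hypothetical sketch with placeholder paths:

```shell
# Export a point cloud from a trained model (flags such as --num-points
# may vary by version; check `ns-export pointcloud --help`).
ns-export pointcloud --load-config outputs/.../config.yml \
    --output-dir exports/pcd/ \
    --num-points 1000000
```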

4. Using Custom Data

Using an existing dataset is great, but you likely want to use your own data! We support various methods for using your own data. Before it can be used in nerfstudio, the camera locations and orientations must be determined and then converted into our format using ns-process-data. We rely on external tools for this; instructions and information can be found in the documentation.

| Data | Requirements | Preprocessing Speed |
| --- | --- | --- |
| πŸ“· Images | COLMAP | 🐒 |
| πŸ“Ή Video | COLMAP | 🐒 |
| πŸ“± Polycam | LiDAR iOS Device | πŸ‡ |
| πŸ“± Record3D | LiDAR iOS Device | πŸ‡ |
| πŸ–₯ Metashape | | 🐒 |
| πŸ›  Custom | Poses | πŸ‡ |
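A typical image-based workflow can be sketched as follows; `path/to/images` and `data/my-scene` are placeholder paths, and the available subcommands (images, video, polycam, record3d, metashape, ...) can be confirmed with `ns-process-data --help`:

```shell
# Run COLMAP via ns-process-data to estimate camera poses and convert
# your capture into nerfstudio format, then train on the result.
ns-process-data images --data path/to/images --output-dir data/my-scene
ns-train nerfacto --data data/my-scene
```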

5. Advanced Options

Training models other than nerfacto

We provide models other than nerfacto; for example, to train the original NeRF model, use the following command:

ns-train vanilla-nerf --data DATA_PATH

For a full list of included models run ns-train --help.

Modify Configuration

Each model contains many parameters that can be changed, too many to list here. Use the --help flag to see the full list of configuration options:

ns-train nerfacto --help

Tensorboard / WandB

We support three different methods to track training progress: the viewer, tensorboard, and Weights & Biases. You can specify which visualizer to use by appending --vis {viewer, tensorboard, wandb} to the training command. Note that only one may be used at a time. Additionally, the viewer only works for fast methods (i.e., nerfacto, instant-ngp); for slower methods like vanilla NeRF, use the other loggers.
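For example, logging to tensorboard instead of the viewer might look like this (the `outputs/` log directory follows the default layout; adjust if you changed it):

```shell
# Train with tensorboard logging, then launch tensorboard in a second
# terminal to watch the run (default dashboard at http://localhost:6006).
ns-train nerfacto --data data/nerfstudio/poster --vis tensorboard
tensorboard --logdir outputs/
```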

Learn More

And that's it for getting started with the basics of nerfstudio.

If you're interested in learning more on how to create your own pipelines, develop with the viewer, run benchmarks, and more, please check out some of the quicklinks below or visit our documentation directly.

| Section | Description |
| --- | --- |
| Documentation | Full API documentation and tutorials |
| Viewer | Home page for our web viewer |
| πŸŽ’ Educational | |
| Model Descriptions | Description of all the models supported by nerfstudio and explanations of component parts. |
| Component Descriptions | Interactive notebooks that explain notable/commonly used modules in various models. |
| πŸƒ Tutorials | |
| Getting Started | A more in-depth guide on how to get started with nerfstudio from installation to contributing. |
| Using the Viewer | A quick demo video on how to navigate the viewer. |
| Using Record3D | Demo video on how to run nerfstudio without using COLMAP. |
| πŸ’» For Developers | |
| Creating pipelines | Learn how to easily build new neural rendering pipelines by using and/or implementing new modules. |
| Creating datasets | Have a new dataset? Learn how to run it with nerfstudio. |
| Contributing | Walk-through for how you can start contributing now. |
| πŸ’– Community | |
| Discord | Join our community to discuss more. We would love to hear from you! |
| Twitter | Follow us on Twitter @nerfstudioteam to see cool updates and announcements. |

Supported Features

We provide the following support structures to make life easier for getting started with NeRFs. For a full description, please refer to our features page.

If you are looking for a feature that is not currently supported, please do not hesitate to contact the Nerfstudio Team on Discord!

Built On

<a href="https://github.com/brentyi/tyro"> <!-- pypi-strip --> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://brentyi.github.io/tyro/_static/logo-dark.svg" /> <!-- /pypi-strip --> <img alt="tyro logo" src="https://brentyi.github.io/tyro/_static/logo-light.svg" width="150px" /> <!-- pypi-strip --> </picture> <!-- /pypi-strip --> </a> <a href="https://github.com/KAIR-BAIR/nerfacc"> <!-- pypi-strip --> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/3310961/199083722-881a2372-62c1-4255-8521-31a95a721851.png" /> <!-- /pypi-strip --> <img alt="nerfacc logo" src="https://user-images.githubusercontent.com/3310961/199084143-0d63eb40-3f35-48d2-a9d5-78d1d60b7d66.png" width="250px" /> <!-- pypi-strip --> </picture> <!-- /pypi-strip --> </a>

Citation

If you use this library or find the documentation useful for your research, please consider citing:

@misc{nerfstudio,
      title={Nerfstudio: A Framework for Neural Radiance Field Development},
      author={Matthew Tancik* and Ethan Weber* and Evonne Ng* and Ruilong Li and Brent Yi and
              Terrance Wang and Alexander Kristoffersen and Jake Austin and Kamyar Salahi and
              Abhik Ahuja and David McAllister and Angjoo Kanazawa},
      year={2022},
      url={https://github.com/nerfstudio-project/nerfstudio},
}

Contributors

<a href="https://github.com/nerfstudio-project/nerfstudio/graphs/contributors"> <img src="https://contrib.rocks/image?repo=nerfstudio-project/nerfstudio" /> </a>