# Instance-NeRF
<img src="https://img.shields.io/badge/Cite-BibTex-orange">
Instance Neural Radiance Field [Instance-NeRF, ICCV 2023].
This is the official PyTorch implementation of Instance-NeRF.
Instance Neural Radiance Field
Yichen Liu*, Benran Hu*, Junkai Huang*, Yu-Wing Tai, Chi-Keung Tang
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
* indicates equal contribution
Instance-NeRF Model Architecture
<img src="imgs/main.png" width="830"/>

## Installation
First, clone this repo and the submodules:

```bash
git clone https://github.com/lyclyc52/Instance_NeRF.git --recursive
```
There are two submodules used in this repo:
- RoIAlign.pytorch: used in NeRF-RCNN training; we adapt the 2D RoIAlign to 3D input.
- torch-ngp: modified to add instance field training.
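As a rough illustration of the 3D extension: RoI align samples a fixed-size grid of points inside each box and interpolates feature values at those points. Below is a minimal NumPy sketch using nearest-neighbor sampling for brevity (the actual submodule uses interpolated sampling in CUDA; `roi_align_3d` is a hypothetical name, not the repo's API):

```python
import numpy as np

def roi_align_3d(feat, box, out_size=(4, 4, 4)):
    """Nearest-neighbor sketch of 3D RoI align.

    feat: (D, H, W) feature volume.
    box:  (z0, y0, x0, z1, y1, x1) in feature-volume coordinates.
    Returns a grid of shape out_size sampled inside the box.
    """
    z0, y0, x0, z1, y1, x1 = box
    dz, dy, dx = out_size
    # Place sample points at the centers of a (dz, dy, dx) grid of bins
    # spanning the box, then snap each point to the nearest voxel
    # (a real RoIAlign would interpolate instead of snapping).
    zs = np.linspace(z0, z1, 2 * dz + 1)[1::2]
    ys = np.linspace(y0, y1, 2 * dy + 1)[1::2]
    xs = np.linspace(x0, x1, 2 * dx + 1)[1::2]
    zi = np.clip(np.round(zs).astype(int), 0, feat.shape[0] - 1)
    yi = np.clip(np.round(ys).astype(int), 0, feat.shape[1] - 1)
    xi = np.clip(np.round(xs).astype(int), 0, feat.shape[2] - 1)
    return feat[np.ix_(zi, yi, xi)]
```

The same idea extends to multi-channel features by indexing an extra leading channel axis.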
To install Instance-NeRF:
- Create a conda environment:

  ```bash
  conda env create -f environment.yml
  conda activate instance_nerf
  ```
- Follow the instructions in RoIAlign.pytorch and torch-ngp to compile the extensions and install the related packages.
## Train Instance-NeRF
An overview of the entire training process:
1. Train NeRF models of the scenes and extract the RGB and density.
2. Train a NeRF-RCNN model using the extracted features and 3D annotations.
3. Perform inference on unseen NeRF scenes to get discrete 3D masks.
4. Run Mask2Former to get the initial 2D segmentation masks of the scenes, and use the 3D masks to match the 2D masks.
5. Train an instance field with the aligned masks; optionally, refine the NeRF-produced masks with CascadePSP and repeat the NeRF training.
For steps 1-3, please refer to the documentation in nerf_rcnn.
For steps 4-5, please check the docs in instance_nerf.
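The mask matching in steps 3-4 above can be pictured as assigning each projected 3D instance mask to the 2D mask it overlaps most. The greedy IoU sketch below is our simplification for illustration only (the function name and strategy are not the repo's actual matching code):

```python
import numpy as np

def match_masks(masks_2d, masks_3d_proj, iou_thresh=0.5):
    """Greedily assign each projected 3D mask to the unused 2D mask
    with the highest IoU. Masks are boolean (H, W) arrays.
    Returns {3d_index: 2d_index} for matches above the threshold.
    """
    matches = {}
    used = set()
    for i, m3 in enumerate(masks_3d_proj):
        best_j, best_iou = -1, iou_thresh
        for j, m2 in enumerate(masks_2d):
            if j in used:
                continue
            inter = np.logical_and(m3, m2).sum()
            union = np.logical_or(m3, m2).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

Matching per view in this way gives every 2D mask a 3D-consistent instance ID, which is what makes the per-scene instance field supervision consistent across views.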
## Inference

We provide an example of how to use our code:
- Create the environment, then download the dataset and the NeRF-RCNN checkpoint.
- Predict the coarse 3D masks using the sample script here.
- Download the NeRF training data. The instance field is a scene-specific model, so you only need to download the scenes you want here. The following instructions are under the instance_nerf repo.
- Train the NeRF model using the sample script.
- Prepare masks following the Mask Preparation section. This involves three steps:
  - Produce 2D instance segmentation masks with Mask2Former using this sample code. Detailed instructions are in this README under this repo.
  - Produce 2D projected segmentation masks. You can find the script under our torch-ngp repo.
  - Match these 2D masks using this sample code under this repo.
- Train the instance NeRF following the Instance Field Inference section under our torch-ngp repo.
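For intuition on the "2D projected segmentation masks" step above: it amounts to transforming occupied voxel centers of a 3D mask into the camera frame and splatting them onto the image plane. A minimal pinhole-camera sketch (hypothetical helper, not the script in our torch-ngp fork):

```python
import numpy as np

def project_voxel_mask(voxels, K, pose, hw):
    """Project occupied voxel centers into an image to form a 2D mask.

    voxels: (N, 3) world-space points of occupied voxels.
    K:      (3, 3) camera intrinsics.
    pose:   (4, 4) world-to-camera transform.
    hw:     (height, width) of the output mask.
    """
    h, w = hw
    pts = np.c_[voxels, np.ones(len(voxels))] @ pose.T  # to camera frame
    pts = pts[pts[:, 2] > 0]                            # keep points in front
    uv = pts[:, :3] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide
    mask = np.zeros((h, w), dtype=bool)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    mask[v[keep], u[keep]] = True
    return mask
```

A real implementation would also handle occlusion (e.g. via depth from the NeRF) and densify the splatted points; this sketch only shows the geometry.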
## Pre-trained Weights

### NeRF-RCNN
You can download our pre-trained NeRF-RPN and NeRF-RCNN models here.
To train from scratch, you first need to train a NeRF-RPN model. Ours is based on NeRF-RPN with the `--rotated_box` flag disabled.
We provide sample training and testing shell scripts, `train/test_rpn/rcnn.sh`, for NeRF-RPN and NeRF-RCNN under the nerf_rcnn folder.
## Dataset
We extended the 3D-FRONT NeRF Dataset used in NeRF-RPN by increasing the number of scenes from ~250 to ~1k, adding instance labels for each object, and including 2D and 3D instance segmentation masks. The entire dataset we used for training is available here.
For training Instance-NeRF, as well as NeRF-RCNN, on your custom datasets, please refer to both the NeRF-RPN dataset creation guide and the NeRF-RCNN training guide.
### NeRF-RCNN Dataset Creation
We build our dataset based on 3D-FRONT.
For more details on how we generate our data, refer to this forked BlenderProc repo. For NeRF training and feature extraction, please check this repo. To predict RoIs, please check NeRF-RPN.
Note: the pre-trained NeRF-RPN model we publish here differs from the one in the NeRF-RPN repo: for this paper, we use AABB bounding boxes and include more data in training.
## Citation
If you find Instance-NeRF useful in your research or refer to the provided baseline results, please star :star: this repository and consider citing :pencil::
```bibtex
@inproceedings{instancenerf,
    title = {Instance Neural Radiance Field},
    author = {Liu, Yichen and Hu, Benran and Huang, Junkai and Tai, Yu-Wing and Tang, Chi-Keung},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year = {2023}
}
```