# iNeRF
Project Page | Video | Paper
<img src="https://user-images.githubusercontent.com/7057863/161620132-2ce16dca-53f6-413d-97ab-fe6086f1661c.gif" height=200>

PyTorch implementation of iNeRF, an RGB-only method that inverts neural radiance fields (NeRFs) for 6DoF pose estimation.
iNeRF: Inverting Neural Radiance Fields for Pose Estimation
Lin Yen-Chen<sup>1</sup>,
Pete Florence<sup>2</sup>,
Jonathan T. Barron<sup>2</sup>,
Alberto Rodriguez<sup>1</sup>,
Phillip Isola<sup>1</sup>,
Tsung-Yi Lin<sup>2</sup><br>
<sup>1</sup>MIT, <sup>2</sup>Google
<br>
IROS 2021
## Overview
This preliminary codebase currently only shows how to apply iNeRF with pixelNeRF. However, iNeRF can work with the original NeRF as well.
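For intuition, below is a minimal, self-contained sketch of the optimization iNeRF performs: the NeRF weights stay frozen, and only a 6DoF pose correction is updated by gradient descent on a photometric loss. The `render_pixels` function is a hypothetical stand-in for the (pixel)NeRF renderer used in the notebook, and the SE(3) update uses a first-order approximation; this illustrates the idea, it is not the notebook's exact code.

```python
# Sketch of iNeRF-style pose optimization: freeze the NeRF, optimize the pose.
import torch

def render_pixels(pose, pixel_ids):
    # Hypothetical stand-in for a frozen NeRF/pixelNeRF renderer: returns an
    # (N, 3) RGB prediction for the requested pixels under camera-to-world
    # matrix `pose`. In the real code this is differentiable volume rendering.
    shade = torch.sigmoid(pose[:3, 3].sum() + 0.01 * pixel_ids.float())
    return shade.unsqueeze(-1).repeat(1, 3)

def small_se3(xi):
    # First-order SE(3) update from a 6-vector xi = (rotation, translation).
    wx, wy, wz, tx, ty, tz = xi
    zero = torch.zeros_like(wx)
    R = torch.eye(3) + torch.stack([
        torch.stack([zero, -wz, wy]),
        torch.stack([wz, zero, -wx]),
        torch.stack([-wy, wx, zero]),
    ])                                                # small-angle rotation
    t = torch.stack([tx, ty, tz]).reshape(3, 1)
    top = torch.cat([R, t], dim=1)                    # (3, 4)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]])     # (1, 4)
    return torch.cat([top, bottom], dim=0)            # (4, 4)

initial_pose = torch.eye(4)              # rough initial pose guess (camera-to-world)
target_rgb = torch.rand(64, 3)           # RGB of sampled pixels from the target image
pixel_ids = torch.arange(64)

xi = torch.zeros(6, requires_grad=True)  # 6DoF pose correction: the only trainable variable
optimizer = torch.optim.Adam([xi], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    pose = small_se3(xi) @ initial_pose            # apply the current correction
    pred_rgb = render_pixels(pose, pixel_ids)      # render only the sampled pixels
    loss = ((pred_rgb - target_rgb) ** 2).mean()   # photometric (MSE) loss
    loss.backward()                                # gradients flow to the pose only
    optimizer.step()
```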
## Environment setup
To start, create the environment using conda:
```bash
cd pixel-nerf
conda env create -f environment.yml
conda activate pixelnerf
pip install mediapy
pip install jupyter
```
Please make sure you have up-to-date NVIDIA drivers that support at least CUDA 10.2.
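If you want to confirm that the activated environment sees a CUDA-capable GPU, one optional check (not part of the original instructions) is:

```python
# Optional sanity check: confirm PyTorch was built with CUDA and a GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```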
## Quick start
1. Download all of pixelNeRF's pretrained weight files from here. Extract them to `./pixel-nerf/checkpoints/`, so that `./pixel-nerf/checkpoints/srn_car/pixel_nerf_latest` exists.

2. Launch the Jupyter notebook:

   ```bash
   cd pixel-nerf
   jupyter notebook
   ```

3. Open `pose_estimation.ipynb` and run through it. You can preview the results here. In the following, we show the overlay of images rendered with our predicted poses and the target image (a sketch of how such an overlay can be produced follows below).
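As an aside, an overlay like the one described above can be produced by simply blending a rendering at the estimated pose with the target image. Here is a minimal sketch using mediapy; the file names are placeholders, not files produced by the notebook:

```python
# Hypothetical sketch: blend a rendering at the estimated pose with the target
# image to visualize pose alignment. File names are placeholders; both images
# are assumed to have the same resolution.
import mediapy as media

rendered = media.read_image("rendered_at_estimated_pose.png") / 255.0
target = media.read_image("target.png") / 255.0
overlay = 0.5 * rendered + 0.5 * target  # simple 50/50 alpha blend

media.show_images([rendered, target, overlay],
                  titles=["rendered", "target", "overlay"])
```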
## BibTeX
```bibtex
@inproceedings{yen2020inerf,
  title={{iNeRF}: Inverting Neural Radiance Fields for Pose Estimation},
  author={Lin Yen-Chen and Pete Florence and Jonathan T. Barron and Alberto Rodriguez and Phillip Isola and Tsung-Yi Lin},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems ({IROS})},
  year={2021}
}
```
## Acknowledgements
This implementation is based on Alex Yu's pixel-nerf.