# ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects
This package contains commands to manage the dataset for the ReLight My NeRF project. It includes neither the network architecture nor the training scripts: it is only for managing the dataset.
Install | Download | Structure | What's inside | How to Use it | Benchmark
## 👷‍♂️ Install the management package
To install the package, run the following commands in a Python venv:
```bash
git clone git@github.com:eyecan-ai/rene.git
cd rene
pip install .
```
It has been tested with Python 3.9 on Ubuntu 22.04; it should also work with any Python >= 3.8 and, hopefully, on other operating systems.
## ⬇️ Download the dataset
You can download the dataset from this Google Drive Folder and extract the files contained in the zip archive into `/path/to/rene_parent_folder`. The structure of the extracted folder should be the following:
```
📁 rene_parent_folder
├── 📁 apple
├── 📁 cheetah
├── 📁 cube
...
├── 📁 tapes
├── 📁 trucks
└── 📁 wooden
```
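If you want to check the extraction programmatically, here is a minimal sketch; it only assumes that each scene is a top-level folder and that there are 20 of them in total (as stated in the benchmark section below):

```python
from pathlib import Path

# List the top-level scene folders after extraction; adjust the path.
root = Path("/path/to/rene_parent_folder")
scenes = sorted(p.name for p in root.iterdir() if p.is_dir())
print(len(scenes), scenes)  # expect 20 scene folders, from "apple" to "wooden"
```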
## 📁 Dataset structure
Each scene folder contains the following files:
```
📁 rene_parent_folder
├── 📁 apple
│   ├── 📁 lset000
│   │   ├── 🗒️ camera.yml
│   │   ├── 📜 light.txt
│   │   └── 📁 data
│   │       ├── 🖼️ 00000_image.png
│   │       ├── 📜 00000_pose.txt
│   │       ...
│   │       ├── 🖼️ 00049_image.png
│   │       └── 📜 00049_pose.txt
│   ├── 📁 lset001
│   ...
│   └── 📁 lset039
...
```
Additional notes:

- Poses `XXXXX_pose.txt` and `light.txt` are 4x4 homogeneous matrices $^wT_c$ that transform points expressed in the camera reference frame $c$ into points expressed in the world reference frame $w$, such that ${^wp} = {^wT_c} \cdot {^cp}$. [Upward: +Z]
- Camera parameters in `camera.yml` are the ones returned by OpenCV's `calibrateCamera()` function; the convention is the COLMAP/OpenCV one. [Forward: +Z, Up: -Y, Right: +X]
- Test images are blacked out; their indices follow the table in the paper.
- We also acquired an empty scene (only lighting changes, no objects), but it is not officially included in the dataset, it is not used in the paper, and the scripts below do not support it natively.
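As a quick sanity check of the pose convention above, here is a minimal sketch; it assumes the `.txt` files store the 4x4 matrix as plain whitespace-separated numbers (the file path is just an example):

```python
import numpy as np

# Load a camera-to-world pose ^wT_c (a 4x4 homogeneous matrix, see above).
# Assumes plain whitespace-separated text; adjust the path to your folder.
w_T_c = np.loadtxt("/path/to/rene_parent_folder/apple/lset000/data/00000_pose.txt")

# ^wp = ^wT_c . ^cp: map a homogeneous point from camera to world frame.
p_c = np.array([0.0, 0.0, 1.0, 1.0])  # a point 1 unit along the camera's +Z
p_w = w_T_c @ p_c
print(p_w[:3])
```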
## 👁️ Show the dataset
As a check that everything went well, you can show the dataset with the following command:
```bash
rene show +i /path/to/rene_parent_folder
```
This will show a window similar to the following:
https://github.com/eyecan-ai/rene/assets/23316277/51fac737-05ed-4d20-bdac-3687f44f4f1d
## 💼 Handle the dataset
We use `pipelime-python` to handle the dataset. It is automatically installed when you install the package, and you can find its documentation here.
A simple script to load the dataset is the following:
```python
import matplotlib.pyplot as plt

from rene.utils.loaders import ReneDataset

# Lazy load the dataset
rene = ReneDataset(input_folder="/path/to/rene_parent_folder")

# To get a sample, you can do the following:
sample = rene["cube"][18][36]  # <- scene=cube, light_pose=18, camera_pose=36

# Each sample contains [camera, image, pose, light] keys
# To actually load an image you can do this:
image = sample["image"]()  # <- Notice the `()` at the end!

# And use the item as you wish
plt.imshow(image)
plt.show()
```
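The other sample keys can be loaded the same way; here is a short sketch using only the `ReneDataset` indexing and the `()` lazy-loading convention shown above (what each item contains follows the dataset notes; anything beyond that is an assumption to verify against the examples):

```python
# Same `rene` object as above; each sample exposes [camera, image, pose, light].
sample = rene["cube"][18][36]
pose = sample["pose"]()      # 4x4 camera-to-world matrix (see the notes above)
light = sample["light"]()    # light pose for this light set
camera = sample["camera"]()  # calibration parameters from camera.yml
```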
To see more advanced examples, you can always check the `examples` folder.
## 💪 Contribute to the Benchmark
To submit your test images, upload a zip file with the following structure and send us its link:
```
📦 rene_test_images.zip
├── 🖼️ apple_00_04.png
├── 🖼️ apple_00_08.png
├── 🖼️ apple_00_15.png
...
├── 🖼️ apple_39_04.png
├── 🖼️ apple_39_08.png
├── 🖼️ apple_39_15.png
├── 🖼️ cheetah_00_04.png
...
├── 🖼️ cheetah_39_15.png
├── 🖼️ cube_00_04.png
...
```
The format for each image name is `{scene_name}_{light_idx}_{cam_idx}.png`, and the files need to be at the root level of the zip archive. Each scene has 111 images for the easy test and 9 for the hard test, i.e. 120 images per scene; across the 20 scenes this makes 120 * 20 = 2400 images, and your zip archive should contain exactly this number of files.
At the time of writing, the link to your zip file should be sent to any of the email addresses with the `eyecan.ai` suffix listed in the paper.
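Before uploading, it may help to sanity-check the naming scheme and the file count. Here is a hedged sketch, not an official tool: the `my_predictions` folder is hypothetical, and the scene-name regex simply assumes lowercase names as in the tree above:

```python
import re
import shutil
from pathlib import Path

predictions = Path("my_predictions")  # hypothetical folder with your PNGs

# Check the {scene_name}_{light_idx}_{cam_idx}.png naming scheme.
pattern = re.compile(r"^[a-z]+_\d{2}_\d{2}\.png$")
names = sorted(p.name for p in predictions.glob("*.png"))
bad = [n for n in names if not pattern.match(n)]
assert not bad, f"unexpected file names: {bad[:5]}"

# 20 scenes x (111 easy + 9 hard) images = 2400 files in total.
assert len(names) == 2400, f"expected 2400 files, found {len(names)}"

# Create the archive with all images at the root level of the zip.
shutil.make_archive("rene_test_images", "zip", root_dir=predictions)
```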
## 🖋️ Citation
If you find this dataset useful, please give us a GitHub star; and if you were crazy enough to download the dataset and it proved useful in some way for your work, it would be great if you cited us:
```bibtex
@InProceedings{Toschi_2023_CVPR,
    author    = {Toschi, Marco and De Matteo, Riccardo and Spezialetti, Riccardo and De Gregorio, Daniele and Di Stefano, Luigi and Salti, Samuele},
    title     = {ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {20762-20772}
}
```