<div align="center"> <h1>Relighting4D: Neural Relightable Human from Videos</h1> <div> <a href='https://frozenburning.github.io/' target='_blank'>Zhaoxi Chen</a>&emsp; <a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu</a> </div> <div> S-Lab, Nanyang Technological University </div>

<strong><a href='https://eccv2022.ecva.net/' target='_blank'>ECCV 2022</a></strong>

Project Page | Video | Paper

<img src="https://github.com/FrozenBurning/FrozenBurning.github.io/blob/master/projects/relighting4d/img/teaser.gif" width="100%"/> </div>

Updates

[08/2022] Model weights released. Google Drive

[07/2022] Paper uploaded to arXiv. arXiv

[07/2022] Code released.

Citation

If you find our work useful for your research, please consider citing this paper:

@inproceedings{chen2022relighting,
    title={Relighting4D: Neural Relightable Human from Videos},
    author={Zhaoxi Chen and Ziwei Liu},
    booktitle={ECCV},
    year={2022}
}

Installation

We recommend using Anaconda to manage your Python environment. You can set up the required environment with the following commands:

conda env create -f environment.yml
conda activate relighting4d

Datasets

People-Snapshot

We follow NeuralBody for data preparation.

  1. Download the People-Snapshot dataset here.

  2. Process the People-Snapshot dataset using the script.

  3. Create a soft link:

    cd /path/to/Relighting4D
    mkdir -p data
    cd data
    ln -s /path/to/people_snapshot people_snapshot
    

ZJU-MoCap

Please refer to here to request the download link. Once downloaded, don't forget to create a soft link:

cd /path/to/Relighting4D
mkdir -p data
cd data
ln -s /path/to/zju_mocap zju_mocap

Training

We first reconstruct an auxiliary density field in Stage I and then train the whole pipeline in Stage II. All training is done on a Tesla V100 GPU with 16GB of memory.

Take the training on female-3-casual as an example.
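The concrete commands depend on the repository's config files; as a purely hypothetical sketch, assuming a NeuralBody-style `train_net.py` entry point with a `--cfg_file` argument and a config named `configs/female_3_casual.yaml` (none of these names are verified against this repo), the two stages might be invoked as:

```shell
# Stage I: reconstruct the auxiliary density field
# (script name, flag, and config path are assumptions)
python train_net.py --cfg_file configs/female_3_casual.yaml

# Stage II: train the full relightable pipeline on top of Stage I
# (the resume override is likewise an assumption)
python train_net.py --cfg_file configs/female_3_casual.yaml resume True
```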

Rendering

To relight a human performer captured in the training video, our model requires an HDR environment map as input. We provide 8 HDR maps at light-probes. You can also use your own HDRIs or download some samples from Poly Haven.
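To make the input format concrete, below is a minimal NumPy sketch of how an equirectangular HDR environment map can be discretized into a set of directional lights for relighting. This is a generic illustration, not Relighting4D's actual sampling scheme; the y-up direction convention and nearest-pixel lookup are assumptions.

```python
import numpy as np

def sample_env_lights(env_map, n_theta=8, n_phi=16):
    """Discretize an equirectangular environment map (H, W, 3) into
    directional lights: unit directions plus sin(theta)-weighted colors.
    Generic sketch only, not Relighting4D's exact scheme."""
    h, w, _ = env_map.shape
    # Pixel-center spherical coordinates: theta in (0, pi), phi in (0, 2*pi)
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phis = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    dirs, colors = [], []
    for t in thetas:
        for p in phis:
            # Unit direction on the sphere (y-up convention, an assumption)
            dirs.append(np.array([np.sin(t) * np.cos(p),
                                  np.cos(t),
                                  np.sin(t) * np.sin(p)]))
            # Nearest-pixel lookup into the equirectangular map
            row = min(int(t / np.pi * h), h - 1)
            col = min(int(p / (2 * np.pi) * w), w - 1)
            # sin(theta) weight compensates equirectangular oversampling at the poles
            colors.append(env_map[row, col] * np.sin(t))
    return np.stack(dirs), np.stack(colors)

# Example: a constant white environment of HDR radiance 1.0
env = np.ones((64, 128, 3), dtype=np.float32)
dirs, colors = sample_env_lights(env)
```

In practice the HDR map would be loaded from an `.hdr`/`.exr` file rather than synthesized; the sampling logic is the same either way.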

You are welcome to download our checkpoints from Google Drive.

Here, we take the rendering on female-3-casual as an example.
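The exact rendering command again depends on the repo's configs; a hypothetical invocation (script name, flags, config path, and HDRI argument are all assumptions, not verified against this repo) could look like:

```shell
# Render female-3-casual relit under a chosen HDR environment map
# (every name below is an assumption for illustration only)
python run.py --type visualize --cfg_file configs/female_3_casual.yaml \
    hdr_path light-probes/courtyard.hdr
```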

Rendering results are saved to /data/render/. For example, results rendered with the courtyard HDR environment map are shown below:

<table> <tr> <td align='center' width='50%'><img src="https://frozenburning.github.io/projects/relighting4d/img/nview.gif" width="100%"/></td> <td align='center' width='50%'><img src="https://frozenburning.github.io/projects/relighting4d/img/npose.gif" width="100%"/></td> </tr> </table>

Acknowledgements

This work is supported by the National Research Foundation, Singapore under its AI Singapore Programme, NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s).

Relighting4D is implemented on top of the NeuralBody codebase.