StyleLight: HDR Panorama Generation for Lighting Estimation and Editing

Project | YouTube | arXiv

<img src='assets/teaser4_page-0001.jpeg' width=100%>

Abstract: We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (FOV) image captured by low-dynamic-range (LDR) cameras. Existing lighting estimation methods either directly regress lighting representation parameters or decompose this problem into FOV-to-panorama and LDR-to-HDR lighting generation sub-tasks. However, due to the partial observation, the high dynamic range of lighting, and the intrinsic ambiguity of a scene, lighting estimation remains a challenging task. To tackle this problem, we propose a coupled dual-StyleGAN panorama synthesis network (StyleLight) that integrates LDR and HDR panorama synthesis into a unified framework. The LDR and HDR panorama synthesis share a similar generator but have separate discriminators. During inference, given an LDR FOV image, we propose a focal-masked GAN inversion method to find its latent code with the LDR panorama synthesis branch and then synthesize the HDR panorama with the HDR panorama synthesis branch. StyleLight unifies FOV-to-panorama and LDR-to-HDR lighting generation in a single framework and thus greatly improves lighting estimation. Extensive experiments demonstrate that our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation. Notably, StyleLight also enables intuitive lighting editing on indoor HDR panoramas, which is suitable for real-world applications.

Guangcong Wang, Yinuo Yang, Chen Change Loy, Ziwei Liu

S-Lab, Nanyang Technological University

In European Conference on Computer Vision (ECCV), 2022

0. Update

1. Prerequisites

2. Installation

We recommend using a conda virtual environment to run the code.

conda create -n StyleLight python=3.7 -y
conda activate StyleLight
pip install lpips
pip install wandb
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch

pip install matplotlib
pip install dlib
pip install imageio
pip install einops

sudo apt-get install openexr libopenexr-dev
pip install OpenEXR

pip install imageio-ffmpeg
pip install ninja
pip install opencv-python
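After installing, a quick sanity check can save time before training. The snippet below is purely illustrative and not part of the repo: it confirms that the pinned PyTorch build sees a GPU and that EXR I/O works, since the HDR panoramas used here are typically stored as .exr files.

```python
# Illustrative environment check (not part of this repo).
import os

# Recent OpenCV builds require this flag *before* import to enable OpenEXR.
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"

import cv2
import numpy as np
import torch

print(torch.__version__)          # expect 1.7.1
print(torch.cuda.is_available())  # expect True on a CUDA 10.2 machine

# Round-trip a small float32 image through the .exr codec.
pano = np.random.rand(128, 256, 3).astype(np.float32)
cv2.imwrite("check.exr", pano)
assert cv2.imread("check.exr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH) is not None
print("EXR I/O OK")
```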

3. Training

Download dataset

Pre-process datasets

python data_prepare_laval.py
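The actual preprocessing lives in data_prepare_laval.py. As a rough illustration of what such a step involves (the IndoorHDRDataset2018-128x256 path in the training command below suggests panoramas are resized to 128x256), the sketch below reads a Laval .exr HDR panorama, resizes it, and derives a tonemapped LDR copy by clipping and gamma correction. The function name, resolution, and gamma value are assumptions, not the script's actual logic.

```python
import os

os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # enable OpenCV's OpenEXR codec before import
import cv2
import numpy as np

def make_ldr_hdr_pair(exr_path, size_wh=(256, 128), gamma=2.2):
    """Illustrative only: resize an HDR panorama and derive an LDR copy.

    Clipping to [0, 1] throws away the bright highlights that the HDR
    branch of StyleLight is trained to recover.
    """
    hdr = cv2.imread(exr_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)  # float32, linear
    hdr = cv2.resize(hdr, size_wh, interpolation=cv2.INTER_AREA)
    ldr = np.clip(hdr, 0.0, 1.0) ** (1.0 / gamma)  # simple gamma tonemap
    return ldr, hdr
```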

Train StyleLight

python train.py --outdir=./training-runs-256x512 --data=/mnt/disks/data/datasets/IndoorHDRDataset2018-128x256-data-splits/train --gpus=8 --cfg=paper256  --mirror=1 --aug=noaug

Or download the inference model

4. Test

Lighting estimation and editing

python test_lighting.py
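Under the hood, test_lighting.py implements the focal-masked GAN inversion described in the abstract: the latent code is optimized so that the LDR branch reproduces the input inside the FOV region, and the same code is then decoded by the HDR branch. The sketch below is only a schematic of that loop, not the repo's implementation; G.mapping/G.synthesis follow the stylegan2-ada-pytorch generator interface, while fov_mask (1 inside the observed FOV, 0 elsewhere) and the LDR/HDR branch selection are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def focal_masked_inversion(G, ldr_fov, fov_mask, steps=500, lr=0.01):
    """Schematic focal-masked inversion: fit a latent code w so the
    generated LDR panorama matches the observed FOV region."""
    z = torch.randn(1, G.z_dim, device=ldr_fov.device)
    w = G.mapping(z, None).detach().requires_grad_(True)  # stylegan2-ada-style mapping
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        pano_ldr = G.synthesis(w)  # LDR branch output (branch selection is model-specific)
        # Reconstruction loss restricted to the observed FOV.
        loss = F.mse_loss(pano_ldr * fov_mask, ldr_fov * fov_mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The HDR panorama is then decoded from the same w by the HDR branch,
    # in whatever form the actual model exposes it.
    return w
```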

5. To-Do

6. Citation

If you find this useful for your research, please cite our paper.

@inproceedings{wang2022stylelight,
  author    = {Wang, Guangcong and Yang, Yinuo and Loy, Chen Change and Liu, Ziwei},
  title     = {StyleLight: HDR Panorama Generation for Lighting Estimation and Editing},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022},
}

or

Guangcong Wang, Yinuo Yang, Chen Change Loy, and Ziwei Liu. StyleLight: HDR Panorama Generation for Lighting Estimation and Editing, ECCV 2022.

7. Related Links

Text2Light: Zero-Shot Text-Driven HDR Panorama Generation, TOG 2022 (Proc. SIGGRAPH Asia)

CaG: Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs, Technical report, 2022

SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections, arXiv 2023

Relighting4D: Neural Relightable Human from Videos, ECCV 2022

Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis, ECCV 2022

Gardner et al. Learning to Predict Indoor Illumination from a Single Image, SIGGRAPH Asia, 2017.

Gardner et al. Deep Parametric Indoor Lighting Estimation, ICCV 2019.

Zhan et al. EMLight: Lighting Estimation via Spherical Distribution Approximation, AAAI 2021.

8. Acknowledgments

This code is built on the stylegan2-ada-pytorch, PTI, and skylibs codebases. We also thank Jean-François Lalonde for sharing his experience.