3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping

<img src="./assets/teaser.jpg" width="96%" height="96%">

Zhuoqian Yang, Shikai Li, Wayne Wu, Bo Dai <br> [Video Demo] | [Project Page] | [Paper]

Abstract: We present 3DHumanGAN, a 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearances under different view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it allows us to harness the power of 2D GANs to generate photo-realistic images; ii) it produces consistent images under varying view angles and specifiable poses; iii) the model can benefit from the 3D human prior. Our model is learned adversarially from a collection of web images, without the need for manual annotations.
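
To make the architecture described above concrete, here is a minimal, illustrative PyTorch sketch of the two components: an implicit pose mapping network whose output is rendered into a 2D feature map, and a 2D convolutional backbone modulated by that map. All class names, shapes, and the SPADE-style scale/shift modulation are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class PoseMappingNetwork(nn.Module):
    """Hypothetical implicit function f(x, z) -> feature, evaluated at 3D
    points sampled relative to a posed human mesh, conditioned on a latent
    code z. The real model's conditioning and rendering are more involved."""
    def __init__(self, point_dim=3, latent_dim=64, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points, z):
        # points: (B, N, 3) sample locations; z: (B, latent_dim)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([points, z], dim=-1))  # (B, N, feat_dim)

class ModulatedBackbone(nn.Module):
    """2D conv backbone whose activations are modulated (here via a
    SPADE-like scale/shift, an assumption) by the rendered pose features."""
    def __init__(self, feat_dim=32, width=64):
        super().__init__()
        self.conv_in = nn.Conv2d(feat_dim, width, 3, padding=1)
        self.gamma = nn.Conv2d(feat_dim, width, 3, padding=1)
        self.beta = nn.Conv2d(feat_dim, width, 3, padding=1)
        self.conv_out = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, pose_feat_map):
        h = torch.relu(self.conv_in(pose_feat_map))
        h = h * (1 + self.gamma(pose_feat_map)) + self.beta(pose_feat_map)
        return torch.tanh(self.conv_out(h))  # RGB image in [-1, 1]

# Toy forward pass: evaluate pose features on a 16x16 grid of stand-in
# points (a real pipeline would sample rays against the posed mesh).
B, H, W = 2, 16, 16
points = torch.randn(B, H * W, 3)
z = torch.randn(B, 64)
feats = PoseMappingNetwork()(points, z)            # (B, H*W, 32)
feat_map = feats.transpose(1, 2).reshape(B, 32, H, W)
img = ModulatedBackbone()(feat_map)                # (B, 3, 16, 16)
```

Because the 3D reasoning is confined to the pose mapping network while image synthesis stays in 2D convolutions, view and pose consistency come from the rendered feature map rather than from a fully volumetric generator, which keeps the photo-realism of 2D GANs.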

Getting Started

Please see doc/INSTALL.md for instructions on setting up the project environment, and doc/GET_STARTED.md for an inference tutorial.

TODOs

Related Work

Citation

If you find this work useful for your research, please consider citing our paper:

```bibtex
@inproceedings{yang20233dhumangan,
  title={3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping},
  author={Yang, Zhuoqian and Li, Shikai and Wu, Wayne and Dai, Bo},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={23008--23019},
  year={2023}
}
```