<div align="center">
<h1>Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation</h1>
Shuling Zhao<sup>1</sup> Fa-Ting Hong<sup>1</sup> Xiaoshui Huang<sup>2</sup> Dan Xu<sup>1</sup>
<sup>1</sup>The Hong Kong University of Science and Technology <br> <sup>2</sup>Shanghai Jiao Tong University
<a href='https://shaelynz.github.io/synergize-motion-appearance/'><img src='https://img.shields.io/badge/Project-Page-green.svg'></a> <a href='https://arxiv.org/abs/2412.00719'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a>
<img src="assets/generalization_result_1_min.gif" width="100%"/> <img src="assets/generalization_result_2_min.gif" width="100%"/> <img src="assets/generalization_result_3_min.gif" width="100%"/>

<b>Cross-identity Reenactment Results</b>
<img src="assets/video1_min.gif" width="49%"/> <img src="assets/video3_min.gif" width="49%"/> <img src="assets/video2_min.gif" width="49%"/> <img src="assets/video4_min.gif" width="49%"/>
<h2>Method</h2>
<img src="assets/overview.jpg" width="100%"/>
</div>

## Updates
- **2024/11/29**: We released this repo.
## Acknowledgement
Our implementation is based on [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [MRFA](https://github.com/JialeTao/MRFA), and [CodeFormer](https://github.com/sczhou/CodeFormer). We appreciate their great work.
## Citation

If you find our work useful, please consider citing:

```bibtex
@misc{zhao2024synergizingmotionappearancemultiscale,
      title={Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation},
      author={Shuling Zhao and Fa-Ting Hong and Xiaoshui Huang and Dan Xu},
      year={2024},
      eprint={2412.00719},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.00719},
}
```
## Contact

If you have any questions or collaboration needs, please email szhaoax@connect.ust.hk.