# StyleSwap: Style-Based Generator Empowers Robust Face Swapping (ECCV 2022)
Zhiliang Xu, Hang Zhou, Zhibin Hong, Ziwei Liu, Jiaming Liu, Zhizhi Guo, Junyu Han, Jingtuo Liu, Errui Ding, and Jingdong Wang
Project | Paper | Demo
In this work, we introduce a concise and effective framework named StyleSwap. Our core idea is to leverage a style-based generator for high-fidelity and robust face swapping, so that the advantages of such generators can be exploited to optimize identity similarity. We show that, with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both the source and the target.
<img src='./misc/StyleSwap.png' width=880>

## Code
Code will be released soon.
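
Until then, here is a minimal PyTorch sketch of the idea described above: a source identity code drives StyleGAN2-style weight modulation over features encoded from the target frame. The module names, tensor shapes, and the 512-dimensional identity embedding are illustrative assumptions, not the authors' released architecture.

```python
# Illustrative sketch only: the official StyleSwap code is not yet released,
# so every name and shape below is an assumption made for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedConv(nn.Module):
    """StyleGAN2-style weight-modulated convolution driven by a style vector."""

    def __init__(self, in_ch, out_ch, style_dim, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01
        )
        self.affine = nn.Linear(style_dim, in_ch)  # style code -> per-channel scales
        self.eps = 1e-8

    def forward(self, x, style):
        b, in_ch, h, w = x.shape
        scale = self.affine(style).view(b, 1, in_ch, 1, 1)
        weight = self.weight.unsqueeze(0) * scale                      # modulate
        demod = torch.rsqrt(weight.pow(2).sum(dim=[2, 3, 4]) + self.eps)
        weight = weight * demod.view(b, -1, 1, 1, 1)                   # demodulate
        weight = weight.view(-1, in_ch, *self.weight.shape[2:])
        x = x.reshape(1, b * in_ch, h, w)
        out = F.conv2d(x, weight, padding=self.weight.shape[-1] // 2, groups=b)
        return out.view(b, -1, h, w)


class ToySwapBlock(nn.Module):
    """One generator block: target features carry pose and background, while the
    source identity embedding is injected through weight modulation."""

    def __init__(self, channels=64, id_dim=512):
        super().__init__()
        self.conv = ModulatedConv(channels, channels, id_dim)

    def forward(self, target_feat, source_id):
        return F.leaky_relu(self.conv(target_feat, source_id), 0.2)


if __name__ == "__main__":
    block = ToySwapBlock()
    target_feat = torch.randn(2, 64, 32, 32)    # features encoded from the target frame
    source_id = torch.randn(2, 512)             # identity embedding of the source face
    print(block(target_feat, source_id).shape)  # torch.Size([2, 64, 32, 32])
```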
## Citation
If you find our work useful, please cite:
```bibtex
@inproceedings{xu2022styleswap,
  title     = {StyleSwap: Style-Based Generator Empowers Robust Face Swapping},
  author    = {Xu, Zhiliang and Zhou, Hang and Hong, Zhibin and Liu, Ziwei and Liu, Jiaming and Guo, Zhizhi and Han, Junyu and Liu, Jingtuo and Ding, Errui and Wang, Jingdong},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022}
}
```