
<p align="center"> <a href="https://arxiv.org/abs/2403.17881"> <img width="765" alt="image" src="assets/title.png"> </a> <p align="center"> <a href="https://scholar.google.com.hk/citations?user=1yhGS5sAAAAJ&hl=zh-CN"><strong>Gan Pei <sup>1</sup><sup>*</sup></strong></a> . <a href="https://zhangzjn.github.io/"><strong>Jiangning Zhang <sup>2</sup><sup>*</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?user=8-Vo9cUAAAAJ&hl=zh-CN"><strong>Menghan Hu<sup>1</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=4daxK2AAAAAJ"><strong>Zhenyu Zhang<sup>3</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=fqte5H4AAAAJ"><strong>Chengjie Wang<sup>2</sup></strong></a> . <a href="https://github.com/flyingby/Awesome-Deepfake-Generation-and-Detection"><strong>Yunsheng Wu<sup>2</sup></strong></a>. <p align="center"> <a href="https://scholar.google.com.hk/citations?user=E6zbSYgAAAAJ&hl=zh-CN"><strong>Guangtao Zhai<sup>4</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=6CIDtZQAAAAJ"><strong>Jian Yang<sup>3</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?user=Ljk2BvIAAAAJ&hl=zh-CN&oi=ao"><strong>Chunhua Shen<sup>5</sup></strong></a> . <a href="https://scholar.google.com.hk/citations?user=RwlJNLcAAAAJ&hl=zh-CN&oi=ao"><strong>Dacheng Tao<sup>6</sup></strong></a> </p> <p align="center"> <strong><sup>1</sup>East China Normal University</strong> &nbsp;&nbsp;&nbsp; <strong><sup>2</sup>Tencent Youtu Lab</strong> &nbsp;&nbsp;&nbsp; <strong><sup>3</sup>Nanjing University</strong> &nbsp;&nbsp;&nbsp; <strong><sup>4</sup>Shanghai Jiao Tong University</strong> <br> <strong><sup>5</sup>Zhejiang University</strong> &nbsp;&nbsp;&nbsp; <strong><sup>6</sup>Nanyang Technological University</strong> <p align="center"> <a href='https://arxiv.org/abs/2403.17881'> <img src='https://img.shields.io/badge/arXiv-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='arXiv PDF'> </a>

We research Deepfake Generation and Detection

This work focuses on facial manipulation in Deepfakes, encompassing Face Swapping, Face Reenactment, Talking Face Generation, Face Attribute Editing, and Forgery Detection. We believe this is the most comprehensive survey of facial manipulation and detection technologies to date. Please stay tuned!😉😉😉

✨You are welcome to contribute your work on any topic related to deepfake generation or detection!!!

If you discover any missing work or have any suggestions, please feel free to submit a pull request or contact us. We will promptly add the missing papers to this repository.
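A new entry should follow the same table format used in the sections below. As a sketch, a hypothetical Face Swapping addition (the title, links, and category here are placeholders, not a real paper) would be a single table row like:

```markdown
| Year | Venue | Category | Paper Title | Code |
|------|-------|----------|-------------|------|
| 2024 | CVPR | Diffusion | [Your Paper Title](https://arxiv.org/abs/XXXX.XXXXX) | [Code](https://github.com/your-name/your-repo) |
```

If no official implementation is available, leave the Code cell as `-`.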

✨Highlights!!!

[1] A comprehensive survey of visual Deepfakes, covering both Deepfake generation and detection.

[2] It also covers several related domains, including Head Swapping, Face Super-resolution, Face Reconstruction, Face Inpainting, Body Animation, Portrait Style Transfer, Makeup Transfer, and Adversarial Sample Detection.

[3] We list detailed results for the most representative works.

✨Survey pipeline

<img src="assets/pipeline.png" width.="1000px">

Introduction

This work presents a detailed survey of face-related generation and detection tasks, including Face Swapping, Face Reenactment, Talking Face Generation, and Face Attribute Editing. In addition, we introduce several related fields, such as Head Swapping, Face Super-resolution, Face Reconstruction, and Face Inpainting, and expand on a selection of them.

</p> <img src="assets/task.png" width.="1000px">

Summary of Contents

Methods: A Survey

Face Swapping

| Year | Venue | Category | Paper Title | Code |
|------|-------|----------|-------------|------|
| 2024 | arXiv | Other | Rank-based No-reference Quality Assessment for Face Swapping | - |
| 2024 | arXiv | 3DGS | ImplicitDeepfake: Plausible Face-Swapping through Implicit Deepfake Generation using NeRF and Gaussian Splatting | Code |
| 2024 | arXiv | GANs | LatentSwap: An Efficient Latent Code Mapping Framework for Face Swapping | - |
| 2024 | arXiv | VAEs | SelfSwapper: Self-Supervised Face Swapping via Shape Agnostic Masked AutoEncoder | - |
| 2024 | arXiv | Diffusion | Face Swap via Diffusion Model | Code |
| 2024 | arXiv | GANs | E4S: Fine-grained Face Swapping via Editing With Regional GAN Inversion | Code |
| 2024 | ACM TOG | GANs | Identity-Preserving Face Swapping via Dual Surrogate Generative Models | Code |
| 2024 | ESWA | GANs | Face swapping with adaptive exploration-fusion mechanism and dual en-decoding tactic | - |
| 2024 | ECCV | Diffusion | Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control | Code |
| 2024 | T-PAMI | GANs | Learning Disentangled Representation for One-Shot Progressive Face Swapping | Code |
| 2024 | CVPR | Diffusion | Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation | Code |
| 2024 | ICIP | Graphic | RID-TWIN: An end-to-end pipeline for automatic face de-identification in videos | Code |
| 2024 | TCSVT | VAE | Identity-Aware Variational Autoencoder for Face Swapping | - |
| 2024 | ICASSP | GANs+3D | Attribute-Aware Head Swapping Guided by 3d Modeling | - |
| 2024 | TMM | Other | An Efficient Attribute-Preserving Framework for Face Swapping | - |
| 2024 | TMM | GANs+3D | StableSwap: Stable Face Swapping in a Shared and Controllable Latent Space | - |
| 2023 | arXiv | GANs | FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping | - |
| 2023 | arXiv | GANs | End-to-end Face-swapping via Adaptive Latent Representation Learning | - |
| 2023 | arXiv | Diffusion | A Generalist FaceX via Learning Unified Facial Representation | Code |
| 2023 | arXiv | Cycle triplets | ReliableSwap: Boosting General Face Swapping Via Reliable Supervision | Code |
| 2023 | WACV | VAEs | FaceOff: A Video-to-Video Face Swapping System | - |
| 2023 | CVPR | GANs+3DMM | StyleIPSB: Identity-Preserving Semantic Basis of StyleGAN for High Fidelity Face Swapping | Code |
| 2023 | CVPR | GANs+3DMM | 3D-Aware Face Swapping | Code |
| 2023 | CVPR | GANs | Fine-Grained Face Swapping via Regional GAN Inversion | Code |
| 2023 | WACV | GANs | FastSwap: A Lightweight One-Stage Framework for Real-Time Face Swapping | Code |
| 2023 | TECS | GANs+VAEs | XimSwap: many-to-many face swapping for TinyML | - |
| 2023 | WACV | GANs | FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping | Code |
| 2023 | ICCV | GANs | BlendFace: Re-designing Identity Encoders for Face-Swapping | Code |
| 2023 | ICCV | GANs+3DMM | Reinforced Disentanglement for Face Swapping without Skip Connection | - |
| 2023 | CVPR | GANs | Attribute-preserving Face Dataset Anonymization via Latent Code Optimization | Code |
| 2023 | AAAI | GANs+3DMM | FlowFace: Semantic Flow-Guided Shape-Aware Face Swapping | - |
| 2023 | CVPR | Transformers | Face Transformer: Towards High Fidelity and Accurate Face Swapping | - |
| 2023 | ACM MM | GANs+3D | High Fidelity Face Swapping via Semantics Disentanglement and Structure Enhancement | - |
| 2023 | FG | Transformers | TransFS: Face Swapping Using Transformer | - |
| 2023 | CVPR | Diffusion | DiffSwap: High-Fidelity and Controllable Face Swapping via 3D-Aware Masked Diffusion | Code |
| 2022 | arXiv | Diffusion | DiffFace: Diffusion-based Face Swapping with Facial Guidance | Code |
| 2022 | AAAI | GANs | MobileFaceSwap: A Lightweight Framework for Video Face Swapping | Code |
| 2022 | T-PAMI | GANs | FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment | Code |
| 2022 | ICME | GANs | Migrating face swap to mobile devices: a lightweight framework and a supervised training solution | Code |
| 2022 | ECCV | GANs | StyleSwap: Style-Based Generator Empowers Robust Face Swapping | Code |
| 2022 | ECCV | GANs | Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping | Code |
| 2022 | ECCV | GANs+3DMM | MFIM: Megapixel Facial Identity Manipulation | - |
| 2022 | CVPR | GANs | Region-Aware Face Swapping | Code |
| 2022 | CVPR | Diffusion | Smooth-Swap: A Simple Enhancement for Face-Swapping with Smoothness | - |
| 2022 | CVPR | GANs | High-resolution Face Swapping via Latent Semantics Disentanglement | Code |
| 2021 | CVPR | GANs+3DMM | FaceInpainter: High Fidelity Face Adaptation to Heterogeneous Domains | - |
| 2021 | CVPR | GANs | Information Bottleneck Disentanglement for Identity Swapping | - |
| 2021 | CVPR | GANs | One Shot Face Swapping on Megapixels | Code |
| 2021 | MMM | GANs | Deep Face Swapping via Cross-Identity Adversarial Training | - |
| 2021 | IJCAI | GANs+3DMM | HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping | Code |
| 2020 | CVPR | GANs | FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping | Code |
| 2020 | CVPR | GANs | DeepFaceLab: Integrated, flexible and extensible face-swapping framework | Code |
| 2020 | NeurIPS | GANs | AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection | Code |
| 2020 | ACM MM | GANs+VAEs | SimSwap: An Efficient Framework For High Fidelity Face Swapping | Code |
| 2020 | AAAI | GANs+VAEs | Deepfakes for Medical Video De-Identification: Privacy Protection and Diagnostic Information Preservation | - |

Face Reenactment

| Year | Venue | Paper Title | Code |
|------|-------|-------------|------|
| 2024 | arXiv | LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control | Code |
| 2024 | arXiv | AniFaceDiff: High-Fidelity Face Reenactment via Facial Parametric Conditioned Diffusion Models | - |
| 2024 | arXiv | Anchored Diffusion for Video Face Reenactment | - |
| 2024 | arXiv | Learning Online Scale Transformation for Talking Head Video Generation | - |
| 2024 | arXiv | VOODOO XP: Expressive One-Shot Head Reenactment for VR Telepresence | - |
| 2024 | arXiv | 3DFlowRenderer: One-shot Face Re-enactment via Dense 3D Facial Flow Estimation | Code |
| 2024 | arXiv | Export3D: Learning to Generate Conditional Tri-plane for 3D-aware Expression-Controllable Portrait Animation | Code |
| 2024 | arXiv | Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation | Code |
| 2024 | arXiv | Superior and Pragmatic Talking Face Generation with Teacher-Student Framework | Code |
| 2024 | arXiv | DiffusionAct: Controllable Diffusion Autoencoder for One-shot Face Reenactment | Code |
| 2024 | BMVC | G3FA: Geometry-guided GAN for Face Animation | Code |
| 2024 | ECCV | Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control | Code |
| 2024 | WACV | CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer | Code |
| 2024 | CVPR | Pose Adapted Shape Learning for Large-Pose Face Reenactment | Code |
| 2024 | CVPR | FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features | Code |
| 2024 | ICASSP | Expression Domain Translation Network for Cross-Domain Head Reenactment | - |
| 2024 | AAAI | Learning Dense Correspondence for NeRF-Based Face Reenactment | - |
| 2024 | AAAI | FG-EmoTalk: Talking Head Video Generation with Fine-Grained Controllable Facial Expressions | - |
| 2024 | IJCV | One-shot Neural Face Reenactment via Finding Directions in GAN's Latent Space | - |
| 2024 | PR | MaskRenderer: 3D-Infused Multi-Mask Realistic Face Reenactment | - |
| 2023 | T-PAMI | Free-headgan: Neural talking head synthesis with explicit gaze control | - |
| 2023 | CVPR | High-Fidelity and Freely Controllable Talking Head Video Generation | Code |
| 2023 | NeurIPS | Learning Motion Refinement for Unsupervised Face Animation | Code |
| 2023 | ICCV | Robust One-Shot Face Video Re-enactment using Hybrid Latent Spaces of StyleGAN2 | Code |
| 2023 | ICCV | ToonTalker: Cross-Domain Face Reenactment | - |
| 2023 | ICCV | HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces | Code |
| 2023 | CVPR | MetaPortrait: Identity-Preserving Talking Head Generation with Fast Personalized Adaptation | Code |
| 2023 | CVPR | Parametric Implicit Face Representation for Audio-Driven Facial Reenactment | - |
| 2023 | CVPR | One-shot high-fidelity talking-head synthesis with deformable neural radiance field | Code |
| 2023 | FG | Stylemask: Disentangling the style space of stylegan2 for neural face reenactment | Code |
| 2022 | ECCV | Face2Faceρ: Real-Time High-Resolution One-Shot Face Reenactment | - |
| 2022 | CVPR | Dual-Generator Face Reenactment | - |
| 2021 | ICCV | PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering | Code |
| 2021 | ICCV | Headgan: One-shot neural head synthesis and editing | - |
| 2020 | CVPR | FReeNet: Multi-Identity Face Reenactment | - |
| 2020 | FG | Head2Head: Videobased neural head synthesis | - |
| 2020 | ECCV | Fast bilayer neural synthesis of one-shot realistic head avatars | - |
| 2020 | AAAI | MarioNETte: Few-Shot Face Reenactment Preserving Identity of Unseen Targets | - |
| 2019 | ACM TOG | Deferred Neural Rendering: Image Synthesis using Neural Textures | - |
| 2019 | ACM TOG | Neural style-preserving visual dubbing | - |
| 2019 | ICCV | Few-Shot Adversarial Learning of Realistic Neural Talking Head Models | - |
| 2018 | CVPR | X2Face: A network for controlling face generation using images, audio, and pose codes | - |
| 2018 | ACM TOG | Deep video portraits | - |
| 2018 | NeurIPS | Video to video synthesis | Code |
| 2016 | CVPR | Face2Face: Real-time Face Capture and Reenactment of RGB Videos | - |

Talking Face Generation

| Year | Venue | Category | Paper Title | Code |
|------|-------|----------|-------------|------|
| 2024 | arXiv | KAN | KAN-Based Fusion of Dual-Domain for Audio-Driven Facial Landmarks Generation | - |
| 2024 | arXiv | GANs | SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing | - |
| 2024 | arXiv | Diffusion | SVP: Style-Enhanced Vivid Portrait Talking Head Diffusion Model | - |
| 2024 | arXiv | Diffusion | PoseTalk: Text-and-Audio-based Pose Control and Motion Refinement for One-Shot Talking Head Generation | Code |
| 2024 | arXiv | Audio | JambaTalk: Speech-Driven 3D Talking Head Generation Based on Hybrid Transformer-Mamba Model | - |
| 2024 | arXiv | Audio | EmoFace: Emotion-Content Disentangled Speech-Driven 3D Talking Face with Mesh Attention | - |
| 2024 | arXiv | Audio | Meta-Learning Empowered Meta-Face: Personalized Speaking Style Adaptation for Audio-Driven 3D Talking Face Animation | - |
| 2024 | arXiv | VQ-VAE | GLDiTalker: Speech-Driven 3D Facial Animation with Graph Latent Diffusion Transformer | - |
| 2024 | arXiv | VQ-VAE | DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation | - |
| 2024 | arXiv | Diffusion | High-fidelity and Lip-synced Talking Face Synthesis via Landmark-based Diffusion Model | - |
| 2024 | arXiv | Diffusion | Style-Preserving Lip Sync via Audio-Aware Style Reference | - |
| 2024 | arXiv | Diffusion | Text-based Talking Video Editing with Cascaded Conditional Diffusion | - |
| 2024 | arXiv | Audio | Audio-driven High-resolution Seamless Talking Head Video Editing via StyleGAN | - |
| 2024 | arXiv | Audio | RealTalk: Real-time and Realistic Audio-driven Face Generation with 3D Facial Prior-guided Identity Alignment Network | - |
| 2024 | arXiv | 3D Model | NLDF: Neural Light Dynamic Fields for Efficient 3D Talking Head Generation | - |
| 2024 | arXiv | Audio | Emotional Conversation: Empowering Talking Faces with Cohesive Expression, Gaze and Pose Generation | - |
| 2024 | arXiv | Audio | Controllable Talking Face Generation by Implicit Facial Keypoints Editing | - |
| 2024 | arXiv | Audio | OpFlowTalker: Realistic and Natural Talking Face Generation via Optical Flow Guidance | - |
| 2024 | arXiv | Multimodal | Listen, Disentangle, and Control: Controllable Speech-Driven Talking Head Generation | Code |
| 2024 | arXiv | Audio | AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding | Code |
| 2024 | arXiv | 3D Model | NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior | Code |
| 2024 | arXiv | Audio | SwapTalk: Audio-Driven Talking Face Generation with One-Shot Customization in Latent Space | Code |
| 2024 | arXiv | 3D Model | GSTalker: Real-time Audio-Driven Talking Face Generation via Deformable Gaussian Splatting | - |
| 2024 | arXiv | 3D Model | Embedded Representation Learning Network for Animating Styled Video Portrait | - |
| 2024 | arXiv | 3D Model | GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting | Code |
| 2024 | arXiv | 3D Model | Learn2Talk: 3D Talking Face Learns from 2D Talking Face | Code |
| 2024 | arXiv | 3D Model | GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting | Code |
| 2024 | arXiv | Audio | VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time | Code |
| 2024 | arXiv | Audio | Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions | Code |
| 2024 | arXiv | Audio | VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis | Code |
| 2024 | arXiv | Audio | AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animations | Code |
| 2024 | arXiv | Audio | Talk3D: High-Fidelity Talking Portrait Synthesis via Personalized 3D Generative Prior | Code |
| 2024 | arXiv | Diffusion | MoDiTalker: Motion-Disentangled Diffusion Model for High-Fidelity Talking Head Generation | - |
| 2024 | arXiv | Audio | EmoVOCA: Speech-Driven Emotional 3D Talking Heads | - |
| 2024 | arXiv | Diffusion | Context-aware Talking Face Video Generation | - |
| 2024 | arXiv | Audio | EmoSpeaker: One-shot Fine-grained Emotion-Controlled Talking Face Generation | - |
| 2024 | ACM MM | Audio | SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing | - |
| 2024 | ACM MM | Diffusion | FD2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model | Code |
| 2024 | ACM MM | Multimodal | SyncTalklip: Highly Synchronized Lip-Readable Speaker Generation with Multi-Task Learning | Code |
| 2024 | VR | Audio | EmoFace: Audio-driven Emotional 3D Face Animation | Code |
| 2024 | ECCV | 3D Model | S^3D-NeRF: Single-Shot Speech-Driven Neural Radiance Field for High Fidelity Talking Head Synthesis | - |
| 2024 | ECCV | Audio | KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding | Code |
| 2024 | ECCV | 3D Model | TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via Gaussian Splatting | Code |
| 2024 | ECCV | Audio | EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis | Code |
| 2024 | IJCV | Audio | ReliTalk: Relightable Talking Portrait Generation from a Single Video | Code |
| 2024 | TCSVT | Audio | Audio-Semantic Enhanced Pose-Driven Talking Head Generation | Code |
| 2024 | TCSVT | Audio | OSM-Net: One-to-Many One-shot Talking Head Generation with Spontaneous Head Motions | - |
| 2024 | IF | 3D Model | ER-NeRF++: Efficient region-aware Neural Radiance Fields for high-fidelity talking portrait synthesis | - |
| 2024 | ICLR | 3D Model | Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis | Code |
| 2024 | ICLR | Diffusion | GAIA: ZERO-SHOT TALKING AVATAR GENERATION | Code |
| 2024 | T-PAMI | Multimodal | StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads | - |
| 2024 | ICASSP | Diffusion | EmoTalker: Emotionally Editable Talking Face Generation via Diffusion Model | - |
| 2024 | ICASSP | Text | Text-Driven Talking Face Synthesis by Reprogramming Audio-Driven Models | - |
| 2024 | ICASSP | Audio | Speech-Driven Emotional 3d Talking Face Animation Using Emotional Embeddings | - |
| 2024 | ICASSP | Audio | Exploring Phonetic Context-Aware Lip-Sync for Talking Face Generation | - |
| 2024 | ICASSP | Audio | Talking Face Generation for Impression Conversion Considering Speech Semantics | - |
| 2024 | ICASSP | 3D Model | NeRF-AD: Neural Radiance Field with Attention-based Disentanglement for Talking Face Synthesis | Code |
| 2024 | ICASSP | 3D Model | DT-NeRF: Decomposed Triplane-Hash Neural Radiance Fields For High-Fidelity Talking Portrait Synthesis | - |
| 2024 | ICASSP | Multimodal | Talking Face Generation for Impression Conversion Considering Speech Semantics | - |
| 2024 | ICAART | Diffusion | DiT-Head: High-Resolution Talking Head Synthesis using Diffusion Transformers | - |
| 2024 | WACV | Audio | THInImg: Cross-Modal Steganography for Presenting Talking Heads in Images | - |
| 2024 | WACV | Diffusion | Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation | Code |
| 2024 | WACV | Audio | DR2: Disentangled Recurrent Representation Learning for Data-Efficient Speech Video Synthesis | - |
| 2024 | WACV | Audio | RADIO: Reference-Agnostic Dubbing Video Synthesis | - |
| 2024 | WACV | Audio | Diff2Lip: Audio Conditioned Diffusion Models for Lip-Synchronization | Code |
| 2024 | CVPR | Text | FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models | Code |
| 2024 | CVPR | Text | Faces that Speak: Jointly Synthesising Talking Face and Speech from Text | Code |
| 2024 | CVPR | Diffusion | DiffTED: One-shot Audio-driven TED Talk Video Generation with Diffusion-based Co-speech Gestures | - |
| 2024 | CVPR | Audio | FaceChain-ImagineID: Freely Crafting High-Fidelity Diverse Talking Faces from Disentangled Audio | Code |
| 2024 | CVPR | Audio | FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization | - |
| 2024 | CVPR | 3D Model | SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis | Code |
| 2024 | CVPR | 3D Model | Learning Dynamic Tetrahedra for High-Quality Talking Head Synthesis | Code |
| 2024 | CVPRW | 3D Model | NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior | Code |
| 2024 | AAAI | 3D Model | AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis | - |
| 2024 | AAAI | 3D Model | Mimic: Speaking Style Disentanglement for Speech-Driven 3D Facial Animation | Code |
| 2024 | AAAI | Audio | Style2Talker: High-Resolution Talking Head Generation with Emotion Style and Art Style | - |
| 2024 | AAAI | Audio | AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head | - |
| 2024 | AAAI | Audio | Say Anything with Any Style | - |
| 2023 | arXiv | Audio | GMTalker: Gaussian Mixture based Emotional talking video Portraits | Code |
| 2023 | arXiv | Diffusion | DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation | Code |
| 2023 | arXiv | Diffusion | DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models | Code |
| 2023 | arXiv | Text | TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles | - |
| 2023 | CVPR | Multimodal | High-Fidelity Generalized Emotional Talking Face Generation With Multi-Modal Emotion Space Learning | - |
| 2023 | CVPR | Multimodal | LipFormer: High-fidelity and Generalizable Talking Face Generation with A Pre-learned Facial Codebook | - |
| 2023 | CVPR | Audio | Sadtalker: Learning realistic 3d motion coefficients for stylized audio-driven single image talking face animation | Code |
| 2023 | CVPR | Audio | Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert | - |
| 2023 | ICCV | Audio | Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video | Code |
| 2023 | ICCV | Audio | EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation | - |
| 2023 | TNNLS | Audio | Talking Face Generation With Audio-Deduced Emotional Landmarks | - |
| 2023 | ICASSP | Audio | Memory-augmented contrastive learning for talking head generation | Code |
| 2023 | CVPR | Audio | Identity-Preserving Talking Face Generation with Landmark and Appearance Priors | Code |
| 2023 | TCSVT | Audio | Stochastic Latent Talking Face Generation Towards Emotional Expressions and Head Poses | - |
| 2023 | ICCV | Audio | Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation | Code |
| 2023 | Displays | Audio | Talking face generation driven by time–frequency domain features of speech audio | - |
| 2023 | ICCV | Diffusion | Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors | Code |
| 2023 | ICCV | Audio | SPACE: Speech-driven Portrait Animation with Controllable Expression | Code |
| 2023 | ICCV | 3D Model | EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation | Code |
| 2023 | Displays | Multimodal | Flow2Flow: Audio-visual cross-modality generation for talking face videos with rhythmic head | Code |
| 2023 | ACM MM | Diffusion | DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder | Code |
| 2022 | CVPR | Multimodal | Expressive Talking Head Generation with Granular Audio-Visual Control | - |
| 2022 | TMM | Multimodal | Multimodal Learning for Temporally Coherent Talking Face Generation With Articulator Synergy | Code |
| 2022 | CVPR | Text | Talking Face Generation with Multilingual TTS | Code |
| 2022 | ECCV | Audio | Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis | Code |
| 2021 | ICCV | Audio | FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning | Code |
| 2021 | CVPR | Multimodal | Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation | Code |
| 2021 | ICCV | 3D Model | AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis | Code |
| 2021 | CVPR | Audio | Audio-driven emotional video portraits | Code |
| 2020 | ICMR | Audio | A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild | Code |
| 2020 | ACM TOG | Audio | MakeItTalk: Speaker-Aware Talking-Head Animation | Code |

Facial Attribute Editing

| Year | Venue | Category | Paper Title | Code |
|------|-------|----------|-------------|------|
| 2024 | arXiv | Diffusion | V-LASIK: Consistent Glasses-Removal from Videos Using Synthetic Data | Code |
| 2024 | arXiv | GANs | Efficient 3D-Aware Facial Image Editing via Attribute-Specific Prompt Learning | Code |
| 2024 | arXiv | Diffusion | Zero-shot Image Editing with Reference Imitation | Code |
| 2024 | arXiv | Diffusion | Face2Face: Label-driven Facial Retouching Restoration | - |
| 2024 | arXiv | Diffusion | FlashFace: Human Image Personalization with High-fidelity Identity Preservation | Code |
| 2024 | arXiv | NeRF | Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization | Code |
| 2024 | arXiv | GANs | GANTASTIC: GAN-based Transfer of Interpretable Directions for Disentangled Image Editing in Text-to-Image Diffusion Models | Code |
| 2024 | arXiv | GANs | S3Editor: A Sparse Semantic-Disentangled Self-Training Framework for Face Video Editing | - |
| 2024 | arXiv | GANs | Reference-Based 3D-Aware Image Editing with Triplane | Code |
| 2024 | arXiv | GANs | 3D-aware Image Generation and Editing with Multi-modal Conditions | - |
| 2024 | arXiv | GANs | Reference-Based 3D-Aware Image Editing with Triplane | - |
| 2024 | arXiv | GANs | SeFFeC: Semantic Facial Feature Control for Fine-grained Face Editing | - |
| 2024 | arXiv | Diffusion | DiffFAE: Advancing High-fidelity One-shot Facial Appearance Editing with Space-sensitive Customization and Semantic Preservation | - |
| 2024 | arXiv | GANs | Skull-to-Face: Anatomy-Guided 3D Facial Reconstruction and Editing | Code |
| 2024 | ECCV | 3DGS | View-Consistent 3D Editing with Gaussian Splatting | Code |
| 2024 | ESWA | GANs | ISFB-GAN: Interpretable semantic face beautification with generative adversarial network | - |
| 2024 | IJCV | GANs | ManiCLIP: Multi-attribute Face Manipulation from Text | Code |
| 2024 | CVPR | NeRF | GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image | Code |
| 2024 | CVPR | Diffusion | DreamSalon: A Staged Diffusion Framework for Preserving Identity-Context in Editable Face Generation | - |
| 2024 | T-CSVT | GANs | Interactive Generative Adversarial Networks with High-Frequency Compensation for Facial Attribute Editing | - |
| 2024 | ICIGP | GANs | A novel method for facial attribute editing by integrating semantic segmentation and color rendering | - |
| 2024 | Information Sciences | GANs | ICGNet: An intensity-controllable generation network based on covering learning for face attribute synthesis | Code |
| 2024 | ICASSP | GANs | Semantic Latent Decomposition with Normalizing Flows for Face Editing | Code |
| 2024 | AAAI | GANs | SDGAN: Disentangling Semantic Manipulation for Facial Attribute Editing | - |
| 2024 | WACV | GANs | EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters | Code |
| 2024 | WACV | Diffusion | Personalized Face Inpainting With Diffusion Models by Parallel Visual Attention | - |
| 2024 | WACV | GANs | Face Identity-Aware Disentanglement in StyleGAN | - |
| 2024 | NeurIPS | Diffusion+NeRF | FaceDNeRF: Semantics-Driven Face Reconstruction, Prompt Editing and Relighting with Diffusion Models | Code |
| 2023 | CVPR | Diffusion | Collaborative Diffusion for Multi-Modal Face Generation and Editing | Code |
| 2023 | ICCV | GANs | Conceptual and Hierarchical Latent Space Decomposition for Face Editing | - |
| 2023 | NN | GANs | IA-FaceS: A bidirectional method for semantic face editing | Code |
| 2023 | TPAMI | GANs+NeRF | CIPS-3D++: End-to-End Real-Time High-Resolution 3D-Aware GANs for GAN Inversion and Stylization | - |
| 2023 | SIGGRAPH | GANs+3DMM | ClipFace: Text-guided Editing of Textured 3D Morphable Models | Code |
| 2023 | ICCV | GANs | Towards High-Fidelity Text-Guided 3D Face Generation and Manipulation Using only Images | - |
| 2023 | TPAMI | GANs | Image-to-Image Translation with Disentangled Latent Vectors for Face Editing | Code |
| 2023 | CVPR | GANs | DPE: Disentanglement of Pose and Expression for General Video Portrait Editing | Code |
| 2023 | ACM MM | GANs | PixelFace+: Towards Controllable Face Generation and Manipulation with Text Descriptions and Segmentation Masks | Code |
| 2022 | CVPR | GANs+NeRF | FENeRF: Face Editing in Neural Radiance Fields | Code |
| 2022 | Neural Networks | GANs | GuidedStyle: Attribute Knowledge Guided Style Manipulation for Semantic Face Editing | - |
| 2022 | SIGGRAPH | GANs+NeRF | FDNeRF: Few-shot Dynamic Neural Radiance Fields for Face Reconstruction and Expression Editing | Code |
| 2022 | CVPR | GANs | AnyFace: Free-style Text-to-Face Synthesis and Manipulation | - |
| 2022 | CVPR | GANs | TransEditor: Transformer-Based Dual-Space GAN for Highly Controllable Facial Editing | Code |
| 2022 | SIGGRAPH | GANs+NeRF | NeRFFaceEditing: Disentangled Face Editing in Neural Radiance Fields | - |
| 2022 | TVCG | GANs+3D | Cross-Domain and Disentangled Face Manipulation With 3D Guidance | Code |
| 2021 | ICCV | GANs | A Latent Transformer for Disentangled Face Editing in Images and Videos | Code |
| 2021 | CVPR | GANs | High-Fidelity and Arbitrary Face Editing | Code |
| 2020 | JAS | GANs | MU-GAN: Facial Attribute Editing Based on Multi-Attention Mechanism | Code |
| 2020 | CVPR | GANs | Interpreting the Latent Space of GANs for Semantic Face Editing | Code |
| 2020 | ACCV | GANs | MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network | - |

Forgery Detection

| Year | Venue | Category | Paper Title | Code |
|------|-------|----------|-------------|------|
| 2024 | arXiv | Data Driven | Standing on the Shoulders of Giants: Reprogramming Visual-Language Model for General Deepfake Detection | - |
| 2024 | arXiv | Multi-Modal | Semantics-Oriented Multitask Learning for DeepFake Detection: A Joint Embedding Approach | - |
| 2024 | arXiv | Space Domain | Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture | - |
| 2024 | arXiv | Space Domain | UniForensics: Face Forgery Detection via General Facial Representation | - |
| 2024 | arXiv | Benchmark | DF40: Toward Next-Generation Deepfake Detection | - |
| 2024 | arXiv | Space Domain | Adversarial Magnification to Deceive Deepfake Detection through Super Resolution | Code |
| 2024 | arXiv | Space Domain | In Anticipation of Perfect Deepfake: Identity-anchored Artifact-agnostic Detection under Rebalanced Deepfake Detection Protocol | Code |
| 2024 | arXiv | Time Domain | Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | Code |
| 2024 | arXiv | Frequency Domain | FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge | - |
| 2024 | arXiv | Space Domain | MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection | - |
| 2024 | arXiv | Multi-Modal | Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model | - |
| 2024 | arXiv | Data Driven | D3: Scaling Up Deepfake Detection by Learning from Discrepancy | - |
| 2024 | arXiv | Space Domain | Band-Attention Modulated RetNet for Face Forgery Detection | - |
| 2024 | arXiv | Space Domain | Diffusion Facial Forgery Detection | - |
| 2024 | arXiv | Space Domain | Masked Conditional Diffusion Model for Enhancing Deepfake Detection | - |
| 2024 | CVPR | Space Domain | Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection | Code |
| 2024 | CVPR | Space Domain | PUDD: Towards Robust Multi-modal Prototype-based Deepfake Detection | - |
| 2024 | CVPR | Space Domain | Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks | Code |
| 2024 | CVPR | Space Domain | Exploiting Style Latent Flows for Generalizing Deepfake Video Detection | - |
| 2024 | CVPR | Multi-Modal | AVFF: Audio-Visual Feature Fusion for Video Deepfake Detection | - |
| 2024 | CVPR | Space Domain | LAA-Net: Localized Artifact Attention Network for Quality-Agnostic and Generalizable Deepfake Detection | Code |
| 2024 | CVPR | Time Domain | Temporal Surface Frame Anomalies for Deepfake Video Detection | - |
| 2024 | IJCV | Frequency Domain | Test-time Forgery Detection with Spatial-Frequency Prompt Learning | - |
| 2024 | IJCV | Frequency Domain | WATCHER: Wavelet-Guided Texture-Content Hierarchical Relation Learning for Deepfake Detection | - |
| 2024 | IJCV | Frequency Domain | SA<sup>3</sup>WT: Adaptive Wavelet-Based Transformer with Self-Paced Auto Augmentation for Face Forgery Detection | - |
| 2024 | ICME | Space Domain | Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts | Code |
| 2024 | TPAMI | Multi-Modal | Detecting and Grounding Multi-Modal Media Manipulation and Beyond | Code |
| 2024 | TMM | Space Domain | IEIRNet: Inconsistency Exploiting Based Identity Rectification for Face Forgery Detection | - |
| 2024 | ICASSP | Multi-Modal | Exploiting Modality-Specific Features for Multi-Modal Manipulation Detection and Grounding | - |
| 2024 | ICASSP | Space Domain | Selective Domain-Invariant Feature for Generalizable Deepfake Detection | - |
| 2024 | ICASSP | Data Driven | Adapter-Based Incremental Learning for Face Forgery Detection | - |
| 2024 | MMM | Space Domain | Face Forgery Detection via Texture and Saliency Enhancement | - |
| 2024 | MMM | Space Domain | Adapting Pretrained Large-Scale Vision Models for Face Forgery Detection | - |
| 2024 | TIFS | Other | Improving Generalization of Deepfake Detectors by Imposing Gradient Regularization | - |
| 2024 | TIFS | Space Domain | Learning to Discover Forgery Cues for Face Forgery Detection | - |
| 2024 | TIFS | Time Domain | Where Deepfakes Gaze at? Spatial-Temporal Gaze Inconsistency Analysis for Video Face Forgery Detection | Code |
| 2024 | IJCV | Time Domain | Learning Spatiotemporal Inconsistency via Thumbnail Layout for Face Deepfake Detection | Code |
| 2024 | NAACL | Time Domain | Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake | - |
| 2024 | CVPR | Time Domain | Exploiting Style Latent Flows for Generalizing Deepfake Video Detection | - |
| 2024 | AAAI | Frequency Domain | Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Domain Learning | Code |
| 2024 | AAAI | Space Domain | Exposing the Deception: Uncovering More Forgery Clues for Deepfake Detection | Code |
| 2024 | WACV | Space Domain | Deepfake Detection by Exploiting Surface Anomalies: The SurFake Approach | - |
| 2024 | WACV | Time Domain | VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces | Code |
| 2024 | WACV | Space Domain | Weakly-supervised deepfake localization in diffusion-generated images | Code |
| 2023 | arXiv | Time Domain | AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency for Video Deepfake Detection | - |
| 2023 | CVPR | Data Driven | Implicit Identity Driven Deepfake Face Swapping Detection | - |
| 2023 | TMM | Data Driven | Narrowing Domain Gaps with Bridging Samples for Generalized Face Forgery Detection | - |
| 2023 | CVPR | Data Driven | Hierarchical Fine-Grained Image Forgery Detection and Localization | Code |
| 2023 | CVPR | Time Domain | Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection | Code |
| 2023 | ICCV | Data Driven | Towards Generic Image Manipulation Detection with Weakly-Supervised Self-Consistency Learning | - |
| 2023 | ICCV | Data Driven | Quality-Agnostic Deepfake Detection with Intra-model Collaborative Learning | - |
| 2023 | TIFS | Frequency Domain | Constructing New Backbone Networks via Space-Frequency Interactive Convolution for Deepfake Detection | Code |
| 2023 | ICCV | Data Driven | Controllable Guide-Space for Generalizable Face Forgery Detection | - |
| 2023 | AAAI | Space Domain | Noise Based Deepfake Detection via Multi-Head Relative-Interaction | - |
| 2023 | TIFS | Time Domain | Dynamic Difference Learning With Spatio–Temporal Correlation for Deepfake Video Detection | - |
| 2023 | TIFS | Time Domain | Masked Relation Learning for DeepFake Detection | Code |
| 2023 | CVPR | Time Domain | Audio-Visual Person-of-Interest DeepFake Detection | Code |
| 2023 | CVPR | Time Domain | Self-Supervised Video Forensics by Audio-Visual Anomaly Detection | Code |
| 2023 | Applied Soft Computing | Time Domain | AVFakeNet: A unified end-to-end Dense Swin Transformer deep learning model for audio–visual deepfakes detection | - |
| 2023 | TCSVT | Time Domain | PVASS-MDD: Predictive Visual-audio Alignment Self-supervision for Multimodal Deepfake Detection | - |
| 2023 | TIFS | Time Domain | AVoiD-DF: Audio-Visual Joint Learning for Detecting Deepfake | - |
| 2023 | TIFS | Space Domain | Beyond the Prior Forgery Knowledge: Mining Critical Clues for General Face Forgery Detection | Code |
| 2022 | TIFS | Space Domain | FakeLocator: Robust Localization of GAN-Based Face Manipulations | - |
| 2022 | CVPR | Space Domain | Detecting Deepfakes with Self-Blended Images | Code |
| 2022 | CVPR | Space Domain | End-to-End Reconstruction-Classification Learning for Face Forgery Detection | Code |
| 2022 | ECCV | Space Domain | Explaining Deepfake Detection by Analysing Image Matching | - |
| 2022 | TIFS | Frequency Domain | Hierarchical Frequency-Assisted Interactive Networks for Face Manipulation Detection | - |
| 2022 | ICMR | Time Domain | M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection | Code |
| 2022 | AAAI | Time Domain | Delving into the Local: Dynamic Inconsistency Learning for DeepFake Video Detection | - |
| 2022 | CVPR | Time Domain | Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection | Code |
| 2022 | AAAI | Data Driven | FInfer: Frame Inference-Based Deepfake Detection for High-Visual-Quality Videos | - |
| 2021 | CVPR | Space Domain | Multi-attentional Deepfake Detection | Code |
| 2021 | TPAMI | Space Domain | DeepFake Detection Based on Discrepancies Between Faces and their Context | - |
| 2021 | ICCV | Data Driven | Learning Self-Consistency for Deepfake Detection | - |
| 2021 | CVPR | Frequency Domain | Frequency-aware Discriminative Feature Learning Supervised by Single-Center Loss for Face Forgery Detection | - |
| 2021 | ICCV | Time Domain | Exploring Temporal Coherence for More General Video Face Forgery Detection | Code |
| 2021 | CVPR | Time Domain | Lips Don't Lie: A Generalisable and Robust Approach to Face Forgery Detection | Code |
| 2021 | CVPR | Time Domain | Detecting Deep-Fake Videos from Aural and Oral Dynamics | - |
| 2020 | IJCAI | Data Driven | FakeSpotter: A Simple yet Robust Baseline for Spotting AI-Synthesized Fake Faces | - |
| 2020 | CVPR | Space Domain | Global Texture Enhancement for Fake Face Detection in the Wild | Code |
| 2020 | CVPR | Data Driven | On the Detection of Digital Face Manipulation | Code |
| 2020 | Signal Processing | Space Domain | Identification of Deep Network Generated Images Using Disparities in Color Components | Code |
| 2020 | CVPR | Space Domain | Face X-ray for More General Face Forgery Detection | - |
| 2020 | ICML | Frequency Domain | Leveraging Frequency Analysis for Deep Fake Image Recognition | Code |
| 2020 | ECCV | Frequency Domain | Thinking in Frequency: Face Forgery Detection by Mining Frequency-aware Clues | - |
| 2020 | ECCV | Frequency Domain | Two-Branch Recurrent Network for Isolating Deepfakes in Videos | - |
| 2020 | ECCV | Space Domain | What makes fake images detectable? Understanding properties that generalize | Code |
| 2019 | ICIP | Space Domain | Detection of Fake Images Via The Ensemble of Deep Representations from Multi Color Spaces | - |
| 2019 | ICIP | Space Domain | Detecting GAN-Generated Imagery Using Saturation Cues | Code |
| 2019 | ICCV | Data Driven | Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints | Code |
| 2019 | CVPRW | Space Domain | Exposing DeepFake Videos By Detecting Face Warping Artifacts | Code |
| 2019 | ICASSP | Time Domain | Exposing deep fakes using inconsistent head poses | - |
| 2019 | ICASSP | Space Domain | Capsule-forensics: Using Capsule Networks to Detect Forged Images and Videos | Code |
| 2018 | WIFS | Data Driven | In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking | Code |

Related Research Domains

Face Super-resolution

| Year | Venue | Paper Title | Code |
|------|-------|-------------|------|
| 2024 | arXiv | Towards Real-world Video Face Restoration: A New Benchmark | Code |
| 2024 | arXiv | Efficient Diffusion Model for Image Restoration by Residual Shifting | Code |
| 2024 | arXiv | DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior | Code |
| 2024 | CVPR | PFStorer: Personalized Face Restoration and Super-Resolution | - |
| 2024 | AAAI | ResDiff: Combining CNN and Diffusion Model for Image Super-Resolution | Code |
| 2024 | AAAI | Low-Light Face Super-resolution via Illumination, Structure, and Texture Associated Representation | Code |
| 2024 | AAAI | SkipDiff: Adaptive Skip Diffusion Model for High-Fidelity Perceptual Image Super-resolution | - |
| 2024 | WACV | Arbitrary-Resolution and Arbitrary-Scale Face Super-Resolution With Implicit Representation Networks | - |
| 2024 | ICASSP | Adaptive Super Resolution for One-Shot Talking-Head Generation | Code |
| 2023 | CVPR | Spatial-Frequency Mutual Learning for Face Super-Resolution | Code |
| 2023 | TIP | CTCNet: A CNN-Transformer Cooperation Network for Face Image Super-Resolution | Code |
| 2023 | TIP | Semi-Cycled Generative Adversarial Networks for Real-World Face Super-Resolution | Code |
| 2023 | TMM | An Efficient Latent Style Guided Transformer-CNN Framework for Face Super-Resolution | Code |
| 2023 | TMM | Exploiting Multi-Scale Parallel Self-Attention and Local Variation via Dual-Branch Transformer-CNN Structure for Face Super-Resolution | - |
| 2023 | NN | Self-attention learning network for face super-resolution | - |
| 2023 | PR | A Composite Network Model for Face Super-Resolution with Multi-Order Head Attention Facial Priors | - |
| 2022 | CVPR | GCFSR: A Generative and Controllable Face Super Resolution Method Without Facial and GAN Priors | Code |
| 2022 | ECCV | From Face to Natural Image: Learning Real Degradation for Blind Image Super-Resolution | Code |
| 2022 | TCSVT | Propagating Facial Prior Knowledge for Multitask Learning in Face Super-Resolution | Code |
| 2022 | NN | Multi-level landmark-guided deep network for face super-resolution | Code |

Portrait Style Transfer

| Year | Venue | Paper Title | Code |
|------|-------|-------------|------|
| 2024 | arXiv | ToonAging: Face Re-Aging upon Artistic Portrait Style Transfer | - |
| 2024 | arXiv | CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer Learning | - |
| 2024 | Displays | HiStyle: Reinventing historic portraits via 3D generative model | - |
| 2024 | ICASSP | A Framework for Portrait Stylization with Skin-Tone Awareness and Nudity Identification | - |
| 2024 | ICASSP | Learning Discriminative Style Representations for Unsupervised and Few-Shot Artistic Portrait Drawing Generation | Code |
| 2024 | TMM | Towards High-Quality Photorealistic Image Style Transfer | - |
| 2024 | TMM | FaceRefiner: High-Fidelity Facial Texture Refinement with Differentiable Rendering-based Style Transfer | - |
| 2024 | CVPR | Deformable One-shot Face Stylization via DINO Semantic Guidance | Code |
| 2024 | AAAI | MagiCapture: High-Resolution Multi-Concept Portrait Customization | - |
| 2024 | AAAI | ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank | Code |
| 2024 | TNNLS | Few-Shot Face Stylization via GAN Prior Distillation | - |
| 2023 | arXiv | PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN | - |
| 2023 | TNNLS | Unpaired Artistic Portrait Style Transfer via Asymmetric Double-Stream GAN | - |
| 2023 | CVPR | Inversion-Based Style Transfer With Diffusion Models | Code |
| 2023 | ICCV | General Image-to-Image Translation with One-Shot Image Guidance | Code |
| 2023 | ACM TOG | A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning | Code |
| 2023 | Neurocomputing | Caster: Cartoon style transfer via dynamic cartoon style casting | - |
| 2023 | IJCV | Learning Portrait Drawing with Unsupervised Parts | - |
| 2022 | CVPR | Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer | Code |
| 2022 | ACM TOG | VToonify: Controllable High-Resolution Portrait Video Style Transfer | Code |
| 2022 | ACM TOG | DCT-net: domain-calibrated translation for portrait stylization | Code |
| 2022 | ACM TOG | SofGAN: A Portrait Image Generator with Dynamic Styling | - |

Body Animation

| Year | Venue | Paper Title | Code |
|------|-------|-------------|------|
| 2024 | arXiv | Large Motion Model for Unified Multi-Modal Motion Generation | Code |
| 2024 | arXiv | Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance | Code |
| 2024 | AAAI | PTUS: Photo-Realistic Talking Upper-Body Synthesis via 3D-Aware Motion Decomposition Warping | Code |
| 2024 | CVPR | Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion | Code |
| 2024 | CVPR | DISCO: Disentangled Control for Realistic Human Dance Generation | Code |
| 2024 | CVPR | MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model | Code |
| 2024 | CVPR | GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians | Code |
| 2023 | arXiv | TADA! Text to Animatable Digital Avatars | Code |
| 2023 | WACV | Physically Plausible Animation of Human Upper Body From a Single Image | - |
| 2023 | ICCV | Towards Multi-Layered 3D Garments Animation | Code |
| 2023 | ICCV | Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation | Code |
| 2023 | CVPR | Learning anchor transformations for 3d garment animation | - |
| 2022 | IJCAI | Text/Speech-Driven Full-Body Animation | Code |
| 2022 | SIGGRAPH | Capturing and Animation of Body and Clothing from Monocular Video | - |
| 2022 | NeurIPS | CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation | Code |

Makeup Transfer

| Year | Venue | Paper Title | Code |
|------|-------|-------------|------|
| 2024 | arXiv | Gorgeous: Create Your Desired Character Facial Makeup from Any Ideas | Code |
| 2024 | arXiv | Toward Tiny and High-quality Facial Makeup with Data Amplify Learning | Code |
| 2024 | arXiv | Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model | - |
| 2024 | CVPR | Makeup Prior Models for 3D Facial Makeup Estimation and Applications | Code |
| 2024 | ESWA | ISFB-GAN: Interpretable semantic face beautification with generative adversarial network | - |
| 2024 | TVCG | MuNeRF: Robust Makeup Transfer in Neural Radiance Fields | - |
| 2024 | ICASSP | Skin tone disentanglement in 2D makeup transfer with graph neural networks | - |
| 2024 | WACV | LipAT: Beyond Style Transfer for Controllable Neural Simulation of Lipstick Using Cosmetic Attributes | - |
| 2023 | arXiv | SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization | - |
| 2023 | TNNLS | SSAT++: A Semantic-Aware and Versatile Makeup Transfer Network With Local Color Consistency Constraint | Code |
| 2023 | CVPR | BeautyREC: Robust, Efficient, and Component-Specific Makeup Transfer | Code |
| 2023 | TCSVT | Hybrid Transformers with Attention-guided Spatial Embeddings for Makeup Transfer and Removal | - |
| 2022 | ICCV | EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer | Code |
| 2022 | AAAI | SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal | Code |
| 2022 | Knowledge-Based Systems | TSEV-GAN: Generative Adversarial Networks with Target-aware Style Encoding and Verification for facial makeup transfer | - |
| 2022 | Knowledge-Based Systems | CUMTGAN: An instance-level controllable U-Net GAN for facial makeup transfer | - |
| 2021 | CVPR | Lipstick ain't enough: beyond color matching for in-the-wild makeup | Code |
| 2021 | T-PAMI | Psgan++: Robust detail-preserving makeup transfer and removal | Code |
| 2020 | CVPR | PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer | Code |
| 2019 | CVPR | Beautyglow: On-demand makeup transfer framework with reversible generative network | Code |
| 2019 | ICCV | Ladn: Local adversarial disentangling network for facial makeup and de-makeup | Code |
| 2018 | ACM MM | BeautyGAN: Instance-level Facial Makeup Transfer with Deep Generative Adversarial Network | Code |
| 2018 | CVPR | Pairedcyclegan: Asymmetric style transfer for applying and removing makeup | - |
| 2017 | AAAI | Examples-rules guided deep neural network for makeup recommendation | - |

Cite The Survey

If you find our survey and repository useful for your research project, please consider citing our paper:

@article{pei2024deepfake,
  title={Deepfake generation and detection: A benchmark and survey},
  author={Pei, Gan and Zhang, Jiangning and Hu, Menghan and Zhai, Guangtao and Wang, Chengjie and Zhang, Zhenyu and Yang, Jian and Shen, Chunhua and Tao, Dacheng},
  journal={arXiv preprint arXiv:2403.17881},
  year={2024}
}

Contact

51265904018@stu.ecnu.edu.cn
186368@zju.edu.cn