<h1 align="center">:fire: FLAME Universe :fire:</h1>This repository presents a list of publicly available resources such as code, datasets, and scientific papers for the :fire: FLAME :fire: 3D head model. We aim to keep the list up to date. You are invited to add missing FLAME-based resources (publications, code repositories, datasets) either in the discussions or in a pull request.
<p> <p align="center"> <img src="gifs/collection.png"> </p> </p> <h2 align="center">:fire: FLAME :fire:</h2> <details> <summary>Never heard of FLAME?</summary> <p align="center"> <img src="gifs/model_variations.gif"> </p>FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication. FLAME is publicly available under a Creative Commons Attribution license.
To download the FLAME model, sign up under MPI-IS/FLAME and agree to the model license. Then you can download FLAME and other FLAME-related resources such as landmark embeddings, segmentation masks, quad template mesh, etc., from MPI-IS/FLAME/download. You can also download the model with a bash script such as fetch_FLAME.
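At its core, FLAME composes a mesh linearly from identity and expression coefficients, plus articulation and pose correctives. The toy sketch below (random data and tiny made-up dimensions, not the real FLAME assets or API; the actual model has 5023 vertices, 300 shape and 100 expression components) illustrates just the linear blendshape part:

```python
import numpy as np

# Hypothetical tiny dimensions for illustration only.
N_VERTS, N_SHAPE, N_EXPR = 50, 10, 5

rng = np.random.default_rng(0)
template = rng.normal(size=(N_VERTS, 3))             # mean head mesh
shape_dirs = rng.normal(size=(N_VERTS, 3, N_SHAPE))  # identity blendshapes
expr_dirs = rng.normal(size=(N_VERTS, 3, N_EXPR))    # expression blendshapes

def flame_like_forward(betas, psi):
    """Compose vertices from identity (betas) and expression (psi)
    coefficients. FLAME's joint articulation and pose-dependent
    correctives are omitted in this sketch."""
    return template + shape_dirs @ betas + expr_dirs @ psi

verts = flame_like_forward(rng.normal(size=N_SHAPE), rng.normal(size=N_EXPR))
print(verts.shape)  # (50, 3)
```

For the actual model, the FLAME_PyTorch and TF_FLAME repositories listed below provide ready-made layers that load the licensed model files.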
</details> <h2 align="center">Code</h2> <details open> <summary>List of public repositories that use FLAME (alphabetical order).</summary>
- BFM_to_FLAME: Conversion from Basel Face Model (BFM) to FLAME.
- CVTHead: Controllable head avatar generation from a single image.
- DECA: Reconstruction of 3D faces with animatable facial expression detail from a single image.
- DiffPoseTalk: Speech-driven stylistic 3D facial animation.
- diffusion-rig: Personalized model to edit facial expressions, head pose, and lighting in portrait images.
- EMOCA: Reconstruction of emotional 3D faces from a single image.
- EMOTE: Emotional speech-driven 3D face animation.
- expgan: Face image generation with expression control.
- FaceFormer: Speech-driven facial animation of meshes in FLAME mesh topology.
- FLAME-Blender-Add-on: FLAME Blender Add-on.
- flame-fitting: Fitting of FLAME to scans.
- flame-head-tracker: FLAME-based monocular video tracking.
- FLAME_PyTorch: FLAME PyTorch layer.
- GANHead: Animatable neural head avatar.
- GaussianAvatars: Photorealistic head avatars with FLAME-rigged 3D Gaussians.
- GPAvatar: Prediction of controllable 3D head avatars from one or several images.
- GIF: Generating face images with FLAME parameter control.
- INSTA: Volumetric head avatars from videos in less than 10 minutes.
- INSTA-pytorch: Volumetric head avatars from videos in less than 10 minutes (PyTorch).
- learning2listen: Modeling interactional communication in dyadic conversations.
- LightAvatar-TensorFlow: Use of neural light field (NeLF) to build photorealistic 3D head avatars.
- MICA: Reconstruction of metrically accurate 3D faces from a single image.
- MeGA: Reconstruction of an editable hybrid mesh-Gaussian head avatar.
- metrical-tracker: Metrical face tracker for monocular videos.
- MultiTalk: Speech-driven facial animation of meshes in FLAME mesh topology.
- NED: Facial expression of emotion manipulation in videos.
- Next3D: 3D generative model with FLAME parameter control.
- NeuralHaircut: Creation of strand-based hairstyle from single-view or multi-view videos.
- neural-head-avatars: Building a neural head avatar from video sequences.
- NeRSemble: Building a neural head avatar from multi-view video data.
- photometric_optimization: Fitting of FLAME to images using differentiable rendering.
- RingNet: Reconstruction of 3D faces from a single image.
- ROME: Creation of personalized avatar from a single image.
- SAFA: Animation of face images.
- Semantify: Semantic control over 3DMM parameters.
- SPECTRE: Speech-aware 3D face reconstruction from images.
- SplattingAvatar: Real-time human avatars with mesh-embedded Gaussian splatting.
- SMIRK: Reconstruction of emotional 3D faces from a single image.
- TRUST: Racially unbiased skin tone estimation from images.
- TF_FLAME: Fit FLAME to 2D/3D landmarks, FLAME meshes, or sample textured meshes.
- video-head-tracker: Track 3D heads in video sequences.
- VOCA: Speech-driven facial animation of meshes in FLAME mesh topology.
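Several of the repositories above (e.g., flame-fitting, TF_FLAME, photometric_optimization) recover FLAME parameters by fitting the model to landmarks, scans, or images. For the purely linear part of the model, landmark fitting reduces to least squares; the following toy numpy sketch (made-up dimensions and random data, not any repository's actual code) recovers identity coefficients from noiseless landmarks:

```python
import numpy as np

# Hypothetical toy setup: 20 landmarks, 10 identity coefficients.
N_LMK, N_SHAPE = 20, 10
rng = np.random.default_rng(1)
template_lmk = rng.normal(size=(N_LMK, 3))             # landmarks on mean mesh
shape_dirs_lmk = rng.normal(size=(N_LMK, 3, N_SHAPE))  # landmark blendshapes

# Synthesize targets from known coefficients, then recover them.
true_betas = rng.normal(size=N_SHAPE)
target = template_lmk + shape_dirs_lmk @ true_betas

A = shape_dirs_lmk.reshape(-1, N_SHAPE)  # (60, 10) linear system
b = (target - template_lmk).reshape(-1)  # landmark residuals
betas, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(betas, true_betas))  # True
```

Real fitting pipelines additionally handle pose, expression, camera projection, and regularization, typically via iterative nonlinear optimization rather than a single linear solve.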
</details> <h2 align="center">Datasets</h2> <details open> <summary>List of datasets with meshes in FLAME topology.</summary>
- BP4D+: 127 subjects, one neutral expression mesh each.
- CoMA dataset: 12 subjects, 12 extreme dynamic expressions each.
- D3DFACS: 10 subjects, 519 dynamic expressions in total.
- Decaf dataset: Deformation capture for face and hand interactions.
- FaceWarehouse: 150 subjects, one neutral expression mesh each.
- FaMoS: 95 subjects, 28 dynamic expressions and head poses each, about 600K frames in total.
- Florence 2D/3D: 53 subjects, one neutral expression mesh each.
- FRGC: 531 subjects, one neutral expression mesh each.
- LYHM: 1216 subjects, one neutral expression mesh each.
- MEAD reconstructions: 3D face reconstructions for MEAD (emotional talking-face dataset).
- NeRSemble dataset: 10 sequences of multi-view images and 3D faces in FLAME mesh topology.
- Stirling: 133 subjects, one neutral expression mesh each.
- VOCASET: 12 subjects, 40 speech sequences each with synchronized audio.
</details> <h2 align="center">Publications</h2> <details open> <summary>List of FLAME-based scientific publications.</summary>

<h3>2025</h3>

<h3>2024</h3>
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations.
- VGG-Tex: A Vivid Geometry-Guided Facial Texture Estimation Model for High Fidelity Monocular 3D Face Reconstruction.
- ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE.
- DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation.
- Gaussian Eigen Models for Human Heads.
- FAGhead: Fully Animate Gaussian Head from Monocular Videos.
- GGHead: Fast and Generalizable 3D Gaussian Heads.
- Rig3DGS: Creating Controllable Portraits from Casual Monocular Videos.
- Generalizable and Animatable Gaussian Head Avatar (NeurIPS 2024).
- SPARK: Self-supervised Personalized Real-time Monocular Face Capture (SIGGRAPH Asia 2024).
- MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Dataset (INTERSPEECH 2024).
- GPAvatar: Generalizable and Precise Head Avatars from Image(s) (ICLR 2024).
- LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field (ECCV-W 2024).
- Stable Video Portraits (ECCV 2024).
- GRAPE: Generalizable and Robust Multi-view Facial Capture (ECCV 2024).
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians (ECCV 2024).
- PAV: Personalized Head Avatar from Unstructured Video Collection (ECCV 2024).
- GAUSSIAN3DIFF: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing (ECCV 2024).
- MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing (ECCV 2024).
- HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting (ECCV 2024).
- HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting (ECCV 2024).
- MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar (SIGGRAPH 2024).
- DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models (SIGGRAPH 2024).
- UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures (CVPR 2024).
- FlashAvatar: High-fidelity Head Avatar with Efficient Gaussian Embedding (CVPR 2024).
- Portrait4D: Learning One-Shot 4D Head Avatar Synthesis using Synthetic Data (CVPR 2024).
- SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting (CVPR 2024).
- 3D Facial Expressions through Analysis-by-Neural-Synthesis (CVPR 2024).
- GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians (CVPR 2024).
- FaceComposer: A Unified Model for Versatile Facial Content Creation (NeurIPS 2023).
- Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control (HRI 2024).
- ReliTalk: Relightable Talking Portrait Generation from a Single Video (IJCV 2024).
- Audio-Driven Speech Animation with Text-Guided Expression (EG 2024).
- CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer (WACV 2024).
- AU-Aware Dynamic 3D Face Reconstruction from Videos with Transformer (WACV 2024).
- Towards Realistic Generative 3D Face Models (WACV 2024).
- LaughTalk: Expressive 3D Talking Head Generation with Laughter (WACV 2024).
- NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering (ICCS 2024).
<h3>2023</h3>
- A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization.
- DF-3DFace: One-to-Many Speech Synchronized 3D Face Animation with Diffusion.
- HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs.
- 3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing.
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models.
- Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Reconstruction.
- Text2Face: A Multi-Modal 3D Face Model.
- SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces (ACM-MM 2023).
- Expressive Speech-driven Facial Animation with controllable emotions (ICMEW 2023).
- A Perceptual Shape Loss for Monocular 3D Face Reconstruction (Pacific Graphics 2023).
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars (SIGGRAPH Asia 2023).
- Emotional Speech-Driven Animation with Content-Emotion Disentanglement (SIGGRAPH Asia 2023).
- Decaf: Monocular Deformation Capture for Face and Hand Interactions (SIGGRAPH Asia 2023).
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction (ICCV 2023).
- Can Language Models Learn to Listen? (ICCV 2023).
- Accurate 3D Face Reconstruction with Facial Component Tokens (ICCV 2023).
- Speech4Mesh: Speech-Assisted Monocular 3D Facial Reconstruction for Speech-Driven 3D Facial Animation (ICCV 2023).
- Semantify: Simplifying the Control of 3D Morphable Models using CLIP (ICCV 2023).
- Imitator: Personalized Speech-driven 3D Facial Animation (ICCV 2023).
- NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads (SIGGRAPH 2023).
- ClipFace: Text-guided Editing of Textured 3D Morphable Models (SIGGRAPH 2023).
- GANHead: Towards Generative Animatable Neural Head Avatars (CVPR 2023).
- Implicit Neural Head Synthesis via Controllable Local Deformation Fields (CVPR 2023).
- DiffusionRig: Learning Personalized Priors for Facial Appearance Editing (CVPR 2023).
- High-Res Facial Appearance Capture from Polarized Smartphone Images (CVPR 2023).
- Instant Volumetric Head Avatars (CVPR 2023).
- Learning Personalized High Quality Volumetric Head Avatars (CVPR 2023).
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars (CVPR 2023).
- PointAvatar: Deformable Point-based Head Avatars from Videos (CVPR 2023).
- Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos (CVPR-W 2023).
- Scaling Neural Face Synthesis to High FPS and Low Latency by Neural Caching (WACV 2023).
<h3>2022</h3>
- TeleViewDemo: Experience the Future of 3D Teleconferencing (SIGGRAPH Asia 2022).
- Realistic One-shot Mesh-based Head Avatars (ECCV 2022).
- Towards Metrical Reconstruction of Human Faces (ECCV 2022).
- Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (ECCV 2022).
- Generative Neural Articulated Radiance Fields (NeurIPS 2022).
- EMOCA: Emotion Driven Monocular Face Capture and Animation (CVPR 2022).
- Generating Diverse 3D Reconstructions from a Single Occluded Face Image (CVPR 2022).
- I M Avatar: Implicit Morphable Head Avatars from Videos (CVPR 2022).
- Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022).
- Neural Emotion Director: Speech-preserving semantic control of facial expressions in “in-the-wild” videos (CVPR 2022).
- Neural head avatars from monocular RGB videos (CVPR 2022).
- RigNeRF: Fully Controllable Neural 3D Portraits (CVPR 2022).
- Simulated Adversarial Testing of Face Recognition Models (CVPR 2022).
- Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation (CVPR-W 2022).
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation (AAAI 2022).
- Exp-GAN: 3D-Aware Facial Image Generation with Expression Control (ACCV 2022).
<h3>2021</h3>
- Data-Driven 3D Neck Modeling and Animation (TVCG 2021).
- MorphGAN: One-Shot Face Synthesis GAN for Detecting Recognition Bias (BMVC 2021).
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery (3DV 2021).
- SAFA: Structure Aware Face Animation (3DV 2021).
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images (SIGGRAPH 2021).
<h3>2020</h3>
- Monocular Expressive Body Regression through Body-Driven Attention (ECCV 2020).
- GIF: Generative Interpretable Faces (3DV 2020).
</details>