Awesome Dataset Distillation
<img src="https://img.shields.io/badge/Contributions-Welcome-278ea5" alt="Contrib"/> <img src="https://img.shields.io/badge/Number%20of%20Items-209-FF6F00" alt="PaperNum"/>
Awesome Dataset Distillation provides the most comprehensive and detailed information on the Dataset Distillation field.
Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good small distilled dataset is not only useful for dataset understanding, but also has various applications (e.g., continual learning, privacy, and neural architecture search). The task was first introduced in the paper Dataset Distillation [Tongzhou Wang et al., '18], along with a proposed algorithm based on backpropagation through optimization steps. It was later extended to real-world datasets in the paper Medical Dataset Distillation [Guang Li et al., '19], which also explored the privacy-preserving potential of dataset distillation. The paper Dataset Condensation [Bo Zhao et al., '20] then introduced gradient matching, which greatly advanced the development of the field. A simplified sketch of such a distillation loop is shown below.
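The following is a minimal, self-contained sketch of this loop using a gradient-matching objective (in the spirit of Dataset Condensation with Gradient Matching), not the authors' released code: the toy ConvNet, the CIFAR-10-style input shapes, the cosine distance over flattened gradients, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of gradient-matching dataset distillation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(num_classes=10):
    # Tiny stand-in ConvNet; real work uses larger networks.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(32 * 4 * 4, num_classes),
    )

def flat_grads(loss, params, create_graph=False):
    # Gradients of `loss` w.r.t. model parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def distill(real_loader, num_classes=10, ipc=10, steps=200, device="cpu"):
    # Learnable synthetic images (`ipc` images per class) with fixed labels.
    syn_x = torch.randn(num_classes * ipc, 3, 32, 32,
                        device=device, requires_grad=True)
    syn_y = torch.arange(num_classes, device=device).repeat_interleave(ipc)
    opt = torch.optim.SGD([syn_x], lr=0.1, momentum=0.5)

    for _ in range(steps):
        model = make_model(num_classes).to(device)  # fresh random network each step
        params = [p for p in model.parameters() if p.requires_grad]
        real_x, real_y = next(iter(real_loader))    # a batch of real training data
        real_x, real_y = real_x.to(device), real_y.to(device)

        # Match the parameter gradients induced by the real and synthetic batches.
        g_real = flat_grads(F.cross_entropy(model(real_x), real_y), params).detach()
        g_syn = flat_grads(F.cross_entropy(model(syn_x), syn_y), params,
                           create_graph=True)
        loss = 1 - F.cosine_similarity(g_real, g_syn, dim=0)

        opt.zero_grad()
        loss.backward()   # update only the synthetic images
        opt.step()
    return syn_x.detach(), syn_y
```

The distilled images and labels returned by `distill` would then be evaluated as described above: train a fresh model on them and test it on the held-out real test set.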
In recent years (2022-now), dataset distillation has gained increasing attention in the research community, with more papers published each year across many institutes and labs. These works have steadily improved dataset distillation and explored its many variants and applications.
This project is curated and maintained by Guang Li, Bo Zhao, and Tongzhou Wang.
<img src="./images/logo.jpg" width="20%"/>

How to submit a pull request?
- :globe_with_meridians: Project Page
- :octocat: Code
- :book: BibTeX
Latest Updates
- [2024/12/13] Audio-Visual Dataset Distillation (Saksham Singh Kushwaha et al., TMLR 2024) :octocat: :book:
- [2024/12/10] FairDD: Fair Dataset Distillation via Synchronized Matching (Qihang Zhou et al., 2024) :book:
- [2024/12/07] Video Set Distillation: Information Diversification and Temporal Densification (Yinjie Zhao et al., 2024) :book:
- [2024/12/06] DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation (Zhiqiang Shen & Ammar Sherif et al., 2024) :octocat: :book:
- [2024/11/29] Dataset Distillers Are Good Label Denoisers In the Wild (Lechao Cheng et al., 2024) :octocat: :book:
- [2024/11/29] Textual Dataset Distillation via Language Model Embedding (Yefan Tao et al., EMNLP 2024) :book:
- [2024/11/17] BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation (Zheng Zhou et al., 2024) :globe_with_meridians: :octocat: :book:
- [2024/11/10] Fetch and Forge: Efficient Dataset Condensation for Object Detection (Ding Qi et al., NeurIPS 2024) :book:
- [2024/11/10] Color-Oriented Redundancy Reduction in Dataset Distillation (Bowen Yuan et al., NeurIPS 2024) :octocat: :book:
- [2024/11/10] Provable and Efficient Dataset Distillation for Kernel Ridge Regression (Yilan Chen et al., NeurIPS 2024) :book:
Contents
- Main
- Early Work
- Gradient/Trajectory Matching Surrogate Objective
- Distribution/Feature Matching Surrogate Objective
- Kernel-Based Distillation
- Distilled Dataset Parametrization
- Generative Distillation
- Better Optimization
- Better Understanding
- Label Distillation
- Dataset Quantization
- Decoupled Distillation
- Multimodal Distillation
- Self-Supervised Distillation
- Benchmark
- Survey
- Ph.D. Thesis
- Workshop
- Challenge
- Applications
- Continual Learning
- Privacy
- Medical
- Federated Learning
- Graph Neural Network
- Neural Architecture Search
- Fashion, Art, and Design
- Recommender Systems
- Blackbox Optimization
- Robustness
- Fairness
- Text
- Tabular
- Retrieval
- Video
- Domain Adaptation
- Super Resolution
- Time Series
- Speech
- Machine Unlearning
- Reinforcement Learning
- Long-Tail
- Learning with Noisy Labels
- Object Detection

<a name="main" />Main
- Dataset Distillation (Tongzhou Wang et al., 2018) :globe_with_meridians: :octocat: :book:
Early Work
- Gradient-Based Hyperparameter Optimization Through Reversible Learning (Dougal Maclaurin et al., ICML 2015) :octocat: :book:
Gradient/Trajectory Matching Surrogate Objective
- Dataset Condensation with Gradient Matching (Bo Zhao et al., ICLR 2021) :octocat: :book:
- Dataset Condensation with Differentiable Siamese Augmentation (Bo Zhao et al., ICML 2021) :octocat: :book:
- Dataset Distillation by Matching Training Trajectories (George Cazenavette et al., CVPR 2022) :globe_with_meridians: :octocat: :book:
- Dataset Condensation with Contrastive Signals (Saehyung Lee et al., ICML 2022) :octocat: :book:
- Loss-Curvature Matching for Dataset Selection and Condensation (Seungjae Shin & Heesun Bae et al., AISTATS 2023) :octocat: :book:
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation (Jiawei Du & Yidi Jiang et al., CVPR 2023) :octocat: :book:
- Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory (Justin Cui et al., ICML 2023) :octocat: :book:
- Sequential Subset Matching for Dataset Distillation (Jiawei Du et al., NeurIPS 2023) :octocat: :book:
- Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching (Ziyao Guo & Kai Wang et al., ICLR 2024) :globe_with_meridians: :octocat: :book:
- SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching (Yongmin Lee et al., ICML 2024) :octocat: :book:
- Dataset Distillation by Automatic Training Trajectories (Dai Liu et al., ECCV 2024) :octocat: :book:
- Neural Spectral Decomposition for Dataset Distillation (Shaolei Yang et al., ECCV 2024) :octocat: :book:
- Prioritize Alignment in Dataset Distillation (Zekai Li & Ziyao Guo et al., 2024) :octocat: :book:
- Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios (Kai Wang & Zekai Li et al., 2024) :octocat: :book:
Distribution/Feature Matching Surrogate Objective
- CAFE: Learning to Condense Dataset by Aligning Features (Kai Wang & Bo Zhao et al., CVPR 2022) :octocat: :book:
- Dataset Condensation with Distribution Matching (Bo Zhao et al., WACV 2023) :octocat: :book:
- Improved Distribution Matching for Dataset Condensation (Ganlong Zhao et al., CVPR 2023) :octocat: :book:
- DataDAM: Efficient Dataset Distillation with Attention Matching (Ahmad Sajedi & Samir Khaki et al., ICCV 2023) :globe_with_meridians: :octocat: :book:
- Dataset Distillation via the Wasserstein Metric (Haoyang Liu et al., 2023) :book:
- M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy (Hansong Zhang & Shikun Li et al., AAAI 2024) :octocat: :book:
- Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation (Wenxiao Deng et al., CVPR 2024) :octocat: :book:
- Dataset Condensation with Latent Quantile Matching (Wei Wei et al., CVPR 2024 Workshop) :book:
- DANCE: Dual-View Distribution Alignment for Dataset Condensation (Hansong Zhang et al., IJCAI 2024) :octocat: :book:
- Diversified Semantic Distribution Matching for Dataset Distillation (Hongcheng Li et al., MM 2024) :octocat: :book:
Kernel-Based Distillation
- Dataset Meta-Learning from Kernel Ridge-Regression (Timothy Nguyen et al., ICLR 2021) :octocat: :book:
- Dataset Distillation with Infinitely Wide Convolutional Networks (Timothy Nguyen et al., NeurIPS 2021) :octocat: :book:
- Dataset Distillation using Neural Feature Regression (Yongchao Zhou et al., NeurIPS 2022) :globe_with_meridians: :octocat: :book:
- Efficient Dataset Distillation using Random Feature Approximation (Noel Loo et al., NeurIPS 2022) :octocat: :book:
- Dataset Distillation with Convexified Implicit Gradients (Noel Loo et al., ICML 2023) :octocat: :book:
- Provable and Efficient Dataset Distillation for Kernel Ridge Regression (Yilan Chen et al., NeurIPS 2024) :book:
Distilled Dataset Parametrization
- Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022) :octocat: :book:
- Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (Zhiwei Deng et al., NeurIPS 2022) :octocat: :book:
- On Divergence Measures for Bayesian Pseudocoresets (Balhae Kim et al., NeurIPS 2022) :octocat: :book:
- Dataset Distillation via Factorization (Songhua Liu et al., NeurIPS 2022) :octocat: :book:
- PRANC: Pseudo RAndom Networks for Compacting Deep Models (Parsa Nooralinejad et al., 2022) :octocat: :book:
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing (Hae Beom Lee & Dong Bok Lee et al., 2022) :book:
- Slimmable Dataset Condensation (Songhua Liu et al., CVPR 2023) :book:
- Few-Shot Dataset Distillation via Translative Pre-Training (Songhua Liu et al., ICCV 2023) :book:
- MGDD: A Meta Generator for Fast Dataset Distillation (Songhua Liu et al., NeurIPS 2023) :book:
- Sparse Parameterization for Epitomic Dataset Distillation (Xing Wei & Anjia Cao et al., NeurIPS 2023) :octocat: :book:
- Frequency Domain-based Dataset Distillation (Donghyeok Shin & Seungjae Shin et al., NeurIPS 2023) :octocat: :book:
- Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation (Haizhong Zheng et al., ECCV 2024) :book:
- FYI: Flip Your Images for Dataset Distillation (Byunggwan Son et al., ECCV 2024) :globe_with_meridians: :octocat: :book:
- Color-Oriented Redundancy Reduction in Dataset Distillation (Bowen Yuan et al., NeurIPS 2024) :octocat: :book:
Generative Distillation
- Synthesizing Informative Training Samples with GAN (Bo Zhao et al., NeurIPS 2022 Workshop) :octocat: :book:
- Generalizing Dataset Distillation via Deep Generative Prior (George Cazenavette et al., CVPR 2023) :globe_with_meridians: :octocat: :book:
- DiM: Distilling Dataset into Generative Model (Kai Wang & Jianyang Gu et al., 2023) :octocat: :book:
- Dataset Condensation via Generative Model (Junhao Zhang et al., 2023) :book:
- Efficient Dataset Distillation via Minimax Diffusion (Jianyang Gu et al., CVPR 2024) :octocat: :book:
- D4M: Dataset Distillation via Disentangled Diffusion Model (Duo Su & Junjie Hou et al., CVPR 2024) :globe_with_meridians: :octocat: :book:
- Generative Dataset Distillation: Balancing Global Structure and Local Details (Longzhen Li & Guang Li et al., CVPR 2024 Workshop) :book:
- Data-to-Model Distillation: Data-Efficient Learning Framework (Ahmad Sajedi & Samir Khaki et al., ECCV 2024) :book:
- Generative Dataset Distillation Based on Diffusion Model (Duo Su & Junjie Hou & Guang Li et al., ECCV 2024 Workshop) :octocat: :book:
- Latent Dataset Distillation with Diffusion Models (Brian B. Moser & Federico Raue et al., 2024) :book:
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation (Xinhao Zhong & Hao Fang et al., 2024) :octocat: :book:
Better Optimization
- Accelerating Dataset Distillation via Model Augmentation (Lei Zhang & Jie Zhang et al., CVPR 2023) :octocat: :book:
- DREAM: Efficient Dataset Distillation by Representative Matching (Yanqing Liu & Jianyang Gu & Kai Wang et al., ICCV 2023) :octocat: :book:
- You Only Condense Once: Two Rules for Pruning Condensed Datasets (Yang He et al., NeurIPS 2023) :octocat: :book:
- MIM4DD: Mutual Information Maximization for Dataset Distillation (Yuzhang Shang et al., NeurIPS 2023) :book:
- Can Pre-Trained Models Assist in Dataset Distillation? (Yao Lu et al., 2023) :octocat: :book:
- DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching (Yanqing Liu & Jianyang Gu & Kai Wang et al., 2023) :octocat: :book:
- Dataset Distillation in Latent Space (Yuxuan Duan et al., 2023) :book:
- Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality (Xuxi Chen & Yu Yang et al., ICLR 2024) :octocat: :book:
- Embarrassingly Simple Dataset Distillation (Yunzhen Feng et al., ICLR 2024) :octocat: :book:
- Multisize Dataset Condensation (Yang He et al., ICLR 2024) :octocat: :book:
- Large Scale Dataset Distillation with Domain Shift (Noel Loo & Alaa Maalouf et al., ICML 2024) :octocat: :book:
- Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation (Yue Xu et al., ECCV 2024) :octocat: :book:
- Towards Model-Agnostic Dataset Condensation by Heterogeneous Models (Jun-Yeong Moon et al., ECCV 2024) :octocat: :book:
- Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching (Ruonan Yu et al., ECCV 2024) :book:
- BACON: Bayesian Optimal Condensation Framework for Dataset Distillation (Zheng Zhou et al., 2024) :octocat: :book:
Better Understanding
- Optimizing Millions of Hyperparameters by Implicit Differentiation (Jonathan Lorraine et al., AISTATS 2020) :octocat: :book:
- On Implicit Bias in Overparameterized Bilevel Optimization (Paul Vicol et al., ICML 2022) :book:
- On the Size and Approximation Error of Distilled Sets (Alaa Maalouf & Murad Tukan et al., NeurIPS 2023) :book:
- A Theoretical Study of Dataset Distillation (Zachary Izzo et al., NeurIPS 2023 Workshop) :book:
- What is Dataset Distillation Learning? (William Yang et al., ICML 2024) :octocat: :book:
- Mitigating Bias in Dataset Distillation (Justin Cui et al., ICML 2024) :book:
- Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning (Vyacheslav Kungurtsev et al., 2024) :book:
- Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation (Shaobo Wang et al., 2024) :book:
Label Distillation
- Flexible Dataset Distillation: Learn Labels Instead of Images (Ondrej Bohdal et al., NeurIPS 2020 Workshop) :octocat: :book:
- Soft-Label Dataset Distillation and Text Dataset Distillation (Ilia Sucholutsky et al., IJCNN 2021) :octocat: :book:
- A Label is Worth a Thousand Images in Dataset Distillation (Tian Qin et al., NeurIPS 2024) :octocat: :book:
- Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? (Lingao Xiao et al., NeurIPS 2024) :octocat: :book:
- DRUPI: Dataset Reduction Using Privileged Information (Shaobo Wang et al., 2024) :book:
- Label-Augmented Dataset Distillation (Seoungyoon Kang & Youngsun Lim et al., WACV 2025) :book:
Dataset Quantization
- Dataset Quantization (Daquan Zhou & Kai Wang & Jianyang Gu et al., ICCV 2023) :octocat: :book:
- Dataset Quantization with Active Learning based Adaptive Sampling (Zhenghao Zhao et al., ECCV 2024) :octocat: :book:
Decoupled Distillation
- Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective (Zeyuan Yin & Zhiqiang Shen et al., NeurIPS 2023) :globe_with_meridians: :octocat: :book:
- Dataset Distillation via Curriculum Data Synthesis in Large Data Era (Zeyuan Yin et al., TMLR 2024) :octocat: :book:
- Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (Shitong Shao et al., CVPR 2024) :octocat: :book:
- On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm (Peng Sun et al., CVPR 2024) :octocat: :book:
- Information Compensation: A Fix for Any-scale Dataset Distillation (Peng Sun et al., ICLR 2024 Workshop) :book:
- Elucidating the Design Space of Dataset Condensation (Shitong Shao et al., NeurIPS 2024) :octocat: :book:
- Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment (Jiawei Du et al., NeurIPS 2024) :octocat: :book:
- Curriculum Dataset Distillation (Zhiheng Ma & Anjia Cao et al., 2024) :book:
- Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator (Xin Zhang et al., 2024) :book:
- DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation (Zhiqiang Shen & Ammar Sherif et al., 2024) :octocat: :book:
Multimodal Distillation
- Vision-Language Dataset Distillation (Xindi Wu et al., TMLR 2024) :globe_with_meridians: :octocat: :book:
- Low-Rank Similarity Mining for Multimodal Dataset Distillation (Yue Xu et al., ICML 2024) :octocat: :book:
- Audio-Visual Dataset Distillation (Saksham Singh Kushwaha et al., TMLR 2024) :octocat: :book:
Self-Supervised Distillation
- Self-Supervised Dataset Distillation for Transfer Learning (Dong Bok Lee & Seanie Lee et al., ICLR 2024) :octocat: :book:
- Efficiency for Free: Ideal Data Are Transportable Representations (Peng Sun et al., NeurIPS 2024) :octocat: :book:
- Self-supervised Dataset Distillation: A Good Compression Is All You Need (Muxin Zhou et al., 2024) :octocat: :book:
Benchmark
- DC-BENCH: Dataset Condensation Benchmark (Justin Cui et al., NeurIPS 2022) :globe_with_meridians: :octocat: :book:
- A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness (Zongxiong Chen & Jiahui Geng et al., 2023) :book:
- DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation (Yifan Wu et al., 2024) :octocat: :book:
- BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation (Zheng Zhou et al., 2024) :globe_with_meridians: :octocat: :book:
Survey
- Data Distillation: A Survey (Noveen Sachdeva et al., TMLR 2023) :book:
- A Survey on Dataset Distillation: Approaches, Applications and Future Directions (Jiahui Geng & Zongxiong Chen et al., IJCAI 2023) :octocat: :book:
- A Comprehensive Survey to Dataset Distillation (Shiye Lei et al., TPAMI 2023) :octocat: :book:
- Dataset Distillation: A Comprehensive Review (Ruonan Yu & Songhua Liu et al., TPAMI 2023) :octocat: :book:
Ph.D. Thesis
- Data-efficient Neural Network Training with Dataset Condensation (Bo Zhao, The University of Edinburgh 2023) :book:
Workshop
- 1st CVPR Workshop on Dataset Distillation (Saeed Vahidian et al., CVPR 2024) :globe_with_meridians:
Challenge
- The First Dataset Distillation Challenge (Kai Wang & Ahmad Sajedi et al., ECCV 2024) :globe_with_meridians: :octocat:
Applications
<a name="continual" />Continual Learning
- Reducing Catastrophic Forgetting with Learning on Synthetic Data (Wojciech Masarczyk et al., CVPR 2020 Workshop) :book:
- Condensed Composite Memory Continual Learning (Felix Wiewel et al., IJCNN 2021) :octocat: :book:
- Distilled Replay: Overcoming Forgetting through Synthetic Samples (Andrea Rosasco et al., IJCAI 2021 Workshop) :octocat: :book:
- Sample Condensation in Online Continual Learning (Mattia Sangermano et al., IJCNN 2022) :octocat: :book:
- An Efficient Dataset Condensation Plugin and Its Application to Continual Learning (Enneng Yang et al., NeurIPS 2023) :octocat: :book:
- Summarizing Stream Data for Memory-Restricted Online Continual Learning (Jianyang Gu et al., AAAI 2024) :octocat: :book:
Privacy
- Privacy for Free: How does Dataset Condensation Help Privacy? (Tian Dong et al., ICML 2022) :book:
- Private Set Generation with Discriminative Information (Dingfan Chen et al., NeurIPS 2022) :octocat: :book:
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" (Nicholas Carlini et al., 2022) :book:
- Backdoor Attacks Against Dataset Distillation (Yugeng Liu et al., NDSS 2023) :octocat: :book:
- Differentially Private Kernel Inducing Points (DP-KIP) for Privacy-preserving Data Distillation (Margarita Vinaroz et al., 2023) :octocat: :book:
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation (Noel Loo et al., ICLR 2024) :book:
- Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective (Ming-Yu Chung et al., ICLR 2024) :book:
- Differentially Private Dataset Condensation (Zheng et al., NDSS 2024 Workshop) :book:
- Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning (Ze Chai et al., ICC 2024) :book:
Medical
- Soft-Label Anonymous Gastric X-ray Image Distillation (Guang Li et al., ICIP 2020) :octocat: :book:
- Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing (Guang Li et al., CMPB 2022) :octocat: :book:
- Dataset Distillation for Medical Dataset Sharing (Guang Li et al., AAAI 2023 Workshop) :octocat: :book:
- Communication-Efficient Federated Skin Lesion Classification with Generalizable Dataset Distillation (Yuchen Tian & Jiacheng Wang et al., MICCAI 2023 Workshop) :book:
- Importance-Aware Adaptive Dataset Distillation (Guang Li et al., NN 2024) :book:
- Image Distillation for Safe Data Sharing in Histopathology (Zhe Li et al., MICCAI 2024) :octocat: :book:
- MedSynth: Leveraging Generative Model for Healthcare Data Sharing (Renuga Kanagavelu et al., MICCAI 2024) :book:
- Progressive Trajectory Matching for Medical Dataset Distillation (Zhen Yu et al., 2024) :book:
- Dataset Distillation in Medical Imaging: A Feasibility Study (Muyang Li et al., 2024) :book:
- Dataset Distillation for Histopathology Image Classification (Cong Cong et al., 2024) :book:
Federated Learning
- Federated Learning via Synthetic Data (Jack Goetz et al., 2020) :book:
- Distilled One-Shot Federated Learning (Yanlin Zhou et al., 2020) :book:
- DENSE: Data-Free One-Shot Federated Learning (Jie Zhang & Chen Chen et al., NeurIPS 2022) :octocat: :book:
- FedSynth: Gradient Compression via Synthetic Data in Federated Learning (Shengyuan Hu et al., 2022) :book:
- Meta Knowledge Condensation for Federated Learning (Ping Liu et al., ICLR 2023) :book:
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics (Renjie Pi et al., CVPR 2023) :octocat: :book:
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning (Yuanhao Xiong & Ruochen Wang et al., CVPR 2023) :octocat: :book:
- Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments (Rui Song et al., IJCNN 2023) :octocat: :book:
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations (Hui-Po Wang et al., 2023) :octocat: :book:
- Federated Virtual Learning on Heterogeneous Data with Local-global Distillation (Chun-Yin Huang et al., 2023) :book:
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity (Yuan Wang et al., CVPR 2024) :book:
- Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors (Chun-Yin Huang et al., ICML 2024) :octocat: :book:
- DCFL: Non-IID Awareness Dataset Condensation Aided Federated Learning (Xingwang Wang et al., IJCNN 2024) :book:
- Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents (Yuqi Jia & Saeed Vahidian et al., ECCV 2024) :octocat: :book:
- One-Shot Collaborative Data Distillation (William Holland et al., ECAI 2024) :octocat: :book:
Graph Neural Network
- Graph Condensation for Graph Neural Networks (Wei Jin et al., ICLR 2022) :octocat: :book:
- Condensing Graphs via One-Step Gradient Matching (Wei Jin et al., KDD 2022) :octocat: :book:
- Graph Condensation via Receptive Field Distribution Matching (Mengyang Liu et al., 2022) :book:
- Kernel Ridge Regression-Based Graph Dataset Distillation (Zhe Xu et al., KDD 2023) :octocat: :book:
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data (Xin Zheng et al., NeurIPS 2023) :octocat: :book:
- Does Graph Distillation See Like Vision Dataset Counterpart? (Beining Yang & Kai Wang et al., NeurIPS 2023) :octocat: :book:
- CaT: Balanced Continual Graph Learning with Graph Condensation (Yilun Liu et al., ICDM 2023) :octocat: :book:
- Mirage: Model-Agnostic Graph Distillation for Graph Classification (Mridul Gupta & Sahil Manchanda et al., ICLR 2024) :octocat: :book:
- Graph Distillation with Eigenbasis Matching (Yang Liu & Deyu Bo et al., ICML 2024) :octocat: :book:
- Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching (Yuchen Zhang & Tianle Zhang & Kai Wang et al., ICML 2024) :octocat: :book:
- Graph Data Condensation via Self-expressive Graph Structure Reconstruction (Zhanyu Liu & Chaolv Zeng et al., KDD 2024) :octocat: :book:
- Two Trades is not Baffled: Condensing Graph via Crafting Rational Gradient Matching (Tianle Zhang & Yuchen Zhang & Kai Wang et al., 2024) :octocat: :book:
Survey
- A Comprehensive Survey on Graph Reduction: Sparsification, Coarsening, and Condensation (Mohammad Hashemi et al., IJCAI 2024) :octocat: :book:
- Graph Condensation: A Survey (Xinyi Gao et al., 2024) :octocat: :book:
- A Survey on Graph Condensation (Hongjia Xu et al., 2024) :octocat: :book:
Benchmark
- GC-Bench: An Open and Unified Benchmark for Graph Condensation (Qingyun Sun & Ziying Chen et al., NeurIPS 2024) :octocat: :book:
- GCondenser: Benchmarking Graph Condensation (Yilun Liu et al., 2024) :octocat: :book:
- GC-Bench: A Benchmark Framework for Graph Condensation with New Insights (Shengbo Gong & Juntong Ni et al., 2024) :octocat: :book:
No further updates will be made for graph distillation topics, as sufficient papers and summary projects on the subject are already available.
<a name="nas" />Neural Architecture Search
- Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (Felipe Petroski Such et al., ICML 2020) :octocat: :book:
- Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation (Dmitry Medvedev et al., AIST 2021) :octocat: :book:
- Calibrated Dataset Condensation for Faster Hyperparameter Search (Mucong Ding et al., 2024) :book:
Fashion, Art, and Design
- Wearable ImageNet: Synthesizing Tileable Textures via Dataset Distillation (George Cazenavette et al., CVPR 2022 Workshop) :globe_with_meridians: :octocat: :book:
- Learning from Designers: Fashion Compatibility Analysis Via Dataset Distillation (Yulan Chen et al., ICIP 2022) :book:
- Galaxy Dataset Distillation with Self-Adaptive Trajectory Matching (Haowen Guan et al., NeurIPS 2023 Workshop) :octocat: :book:
Recommender Systems
- Infinite Recommendation Networks: A Data-Centric Approach (Noveen Sachdeva et al., NeurIPS 2022) :octocat: :book:
- Gradient Matching for Categorical Data Distillation in CTR Prediction (Chen Wang et al., RecSys 2023) :book:
Blackbox Optimization
- Bidirectional Learning for Offline Infinite-width Model-based Optimization (Can Chen et al., NeurIPS 2022) :octocat: :book:
- Bidirectional Learning for Offline Model-based Biological Sequence Design (Can Chen et al., ICML 2023) :octocat: :book:
Robustness
- Can We Achieve Robustness from Data Alone? (Nikolaos Tsilivis et al., ICML 2022 Workshop) :book:
- Towards Robust Dataset Learning (Yihan Wu et al., 2022) :book:
- Rethinking Data Distillation: Do Not Overlook Calibration (Dongyao Zhu et al., ICCV 2023) :book:
- Towards Trustworthy Dataset Distillation (Shijie Ma et al., PR 2024) :octocat: :book:
- Group Distributionally Robust Dataset Distillation with Risk Minimization (Saeed Vahidian & Mingyu Wang & Jianyang Gu et al., 2024) :octocat: :book:
- Towards Adversarially Robust Dataset Distillation by Curvature Regularization (Eric Xue et al., 2024) :book:
Fairness
- Fair Graph Distillation (Qizhang Feng et al., NeurIPS 2023) :book:
- FairDD: Fair Dataset Distillation via Synchronized Matching (Qihang Zhou et al., 2024) :book:
Text
- Data Distillation for Text Classification (Yongqi Li et al., 2021) :book:
- Dataset Distillation with Attention Labels for Fine-tuning BERT (Aru Maekawa et al., ACL 2023) :octocat: :book:
- DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation (Aru Maekawa et al., NAACL 2024) :octocat: :book:
- Textual Dataset Distillation via Language Model Embedding (Yefan Tao et al., EMNLP 2024) :book:
Tabular
- New Properties of the Data Distillation Method When Working With Tabular Data (Dmitry Medvedev et al., AIST 2020) :octocat: :book:
Retrieval
- Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching (Tao Feng & Jie Zhang et al., 2023) :book:
Video
- Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement (Ziyu Wang & Yue Xu et al., CVPR 2024) :octocat: :book:
- Video Set Distillation: Information Diversification and Temporal Densification (Yinjie Zhao et al., 2024) :book:
Domain Adaptation
- Multi-Source Domain Adaptation Meets Dataset Distillation through Dataset Dictionary Learning (Eduardo Montesuma et al., ICASSP 2024) :book:
Super Resolution
- GSDD: Generative Space Dataset Distillation for Image Super-resolution (Haiyu Zhang et al., AAAI 2024) :book:
Time Series
- Dataset Condensation for Time Series Classification via Dual Domain Matching (Zhanyu Liu et al., KDD 2024) :octocat: :book:
- CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting (Jianrong Ding & Zhanyu Liu et al., NeurIPS 2024) :octocat: :book:
- Less is More: Efficient Time Series Dataset Condensation via Two-fold Modal Matching (Hao Miao et al., VLDB 2025) :octocat: :book:
Speech
- Dataset-Distillation Generative Model for Speech Emotion Recognition (Fabian Ritter-Gutierrez et al., Interspeech 2024) :book:
Machine Unlearning
- Distilled Datamodel with Reverse Gradient Matching (Jingwen Ye et al., CVPR 2024) :book:
- Dataset Condensation Driven Machine Unlearning (Junaid Iqbal Khan, 2024) :octocat: :book:
Reinforcement Learning
- Dataset Distillation for Offline Reinforcement Learning (Jonathan Light & Yuanzhe Liu et al., ICML 2024 Workshop) :globe_with_meridians: :octocat: :book:
Long-Tail
- Distilling Long-tailed Datasets (Zhenghao Zhao & Haoxuan Wang et al., 2024) :book:
Learning with Noisy Labels
- Dataset Distillers Are Good Label Denoisers In the Wild (Lechao Cheng et al., 2024) :octocat: :book:
Object Detection
- Fetch and Forge: Efficient Dataset Condensation for Object Detection (Ding Qi et al., NeurIPS 2024) :book:
Media Coverage
- Beginning of Awesome Dataset Distillation
- Most Popular AI Research Aug 2022
- A Project to Help You Understand Dataset Distillation (in Chinese)
- Condensation Is the Essence: A Unified Perspective on Dataset Distillation (in Chinese)
Star History
Citing Awesome Dataset Distillation
If you find this project useful for your research, please use the following BibTeX entry.
@misc{li2022awesome,
author={Li, Guang and Zhao, Bo and Wang, Tongzhou},
title={Awesome Dataset Distillation},
howpublished={\url{https://github.com/Guang000/Awesome-Dataset-Distillation}},
year={2022}
}
Acknowledgments
We would like to express our heartfelt thanks to Nikolaos Tsilivis, Wei Jin, Yongchao Zhou, Noveen Sachdeva, Can Chen, Guangxiang Zhao, Shiye Lei, Xinchao Wang, Dmitry Medvedev, Seungjae Shin, Jiawei Du, Yidi Jiang, Xindi Wu, Guangyi Liu, Yilun Liu, Kai Wang, Yue Xu, Anjia Cao, Jianyang Gu, Yuanzhen Feng, Peng Sun, Ahmad Sajedi, Zhihao Sui, Ziyu Wang, Haoyang Liu, Eduardo Montesuma, Shengbo Gong, Zheng Zhou, Zhenghao Zhao, Duo Su, Tianhang Zheng, Shijie Ma, Wei Wei, Yantai Yang, Shaobo Wang, Xinhao Zhong, Zhiqiang Shen, Cong Cong, Chun-Yin Huang, Dai Liu, Ruonan Yu, William Holland, and Saksham Singh Kushwaha for their valuable suggestions and contributions.
The Homepage of Awesome Dataset Distillation was designed and maintained by Longzhen Li.