# SPEED

The official PyTorch implementation of our NeurIPS 2023 paper:

**Sparse Parameterization for Epitomic Dataset Distillation**

Xing Wei, Anjia Cao, Funing Yang, and Zhiheng Ma.

GitHub maintainer: Anjia Cao

## Highlight

<div align=center><img src="figure/overview.jpg" width="90%" height="90%"></div>

### :bookmark: Brief Introduction

The success of deep learning relies heavily on large and diverse datasets, but the storage, preprocessing, and training of such data present significant challenges. To address these challenges, dataset distillation techniques have been proposed to obtain smaller synthetic datasets that capture the essential information of the originals. In this paper, we introduce a Sparse Parameterization for Epitomic datasEt Distillation (SPEED) framework, which leverages the concept of dictionary learning and sparse coding to distill epitomes that represent pivotal information of the dataset. SPEED prioritizes proper parameterization of the synthetic dataset and introduces techniques to capture spatial redundancy within and between synthetic images. We propose Spatial-Agnostic Epitomic Tokens (SAETs) and Sparse Coding Matrices (SCMs) to efficiently represent and select significant features. Additionally, we build a Feature-Recurrent Network (FReeNet) to generate hierarchical features with high compression and storage efficiency. Experimental results demonstrate the superiority of SPEED in handling high-resolution datasets, achieving state-of-the-art performance on multiple benchmarks and downstream applications. Our framework is compatible with a variety of dataset matching approaches, generally enhancing their performance. This work highlights the importance of proper parameterization in epitomic dataset distillation and opens avenues for efficient representation learning.
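To make the parameterization concrete, below is a minimal, illustrative PyTorch sketch of the sparse-coding idea: a shared dictionary of epitomic tokens is combined, per image and per spatial position, through sparse coefficient matrices, and a small decoder maps the composed features to pixels. All shapes, variable names, the top-k sparsification, and the stand-in linear decoder are our own simplifications for illustration, not the repository's actual SAET/SCM/FReeNet implementation.

```python
# Illustrative only: shapes, names, and the top-k sparsification are our own
# simplifications, not the repository's actual SAET / SCM / FReeNet code.
import torch
import torch.nn as nn

num_images = 10                  # synthetic images to parameterize
num_patches = 8 * 8              # spatial positions per image
num_tokens, token_dim = 64, 128  # dictionary size and feature width

# SAETs: one shared, learnable dictionary of feature atoms for all images.
saet = nn.Parameter(torch.randn(num_tokens, token_dim))
# SCMs: per image and position, coefficients selecting tokens (kept sparse).
scm = nn.Parameter(torch.randn(num_images, num_patches, num_tokens))
# Stand-in decoder (the real FReeNet is a feature-recurrent network).
decoder = nn.Linear(token_dim, 3 * 4 * 4)  # each position -> a 4x4 RGB patch

def reconstruct(k: int = 4) -> torch.Tensor:
    # Keep only the k largest-magnitude coefficients per position.
    topk = torch.topk(scm.abs(), k, dim=-1)
    mask = torch.zeros_like(scm).scatter(-1, topk.indices, torch.ones_like(topk.values))
    features = (scm * mask) @ saet  # (I, P, D): sparse token mixtures
    patches = decoder(features)     # (I, P, 48)
    # Tile the 8x8 grid of 4x4 patches into 32x32 RGB images.
    imgs = patches.view(num_images, 8, 8, 3, 4, 4).permute(0, 3, 1, 4, 2, 5)
    return imgs.reshape(num_images, 3, 32, 32)

print(reconstruct().shape)  # torch.Size([10, 3, 32, 32])
```

The storage saving in this toy setup comes from sharing one token dictionary across all synthetic images and keeping only a few coefficients per spatial position.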

### :bookmark: Distilled Images

<img src="figure/samples.jpg" width="70%" height="70%">

### :bookmark: Strong Performance

Test accuracy (%) on standard benchmarks:

| Dataset | IPC = 1 | IPC = 10 | IPC = 50 |
| :--- | :---: | :---: | :---: |
| CIFAR10 | 63.2 $\pm$ 0.1 | 73.5 $\pm$ 0.2 | 77.7 $\pm$ 0.4 |
| CIFAR100 | 40.0 $\pm$ 0.4 | 45.9 $\pm$ 0.3 | 49.1 $\pm$ 0.2 |
| TinyImageNet | 26.9 $\pm$ 0.3 | 28.8 $\pm$ 0.2 | 30.1 $\pm$ 0.3 |

Test accuracy (%) on ImageNet subsets:

| Dataset | IPC = 1 | IPC = 10 |
| :--- | :---: | :---: |
| ImageNette | 66.9 $\pm$ 0.7 | 72.9 $\pm$ 1.5 |
| ImageWoof | 38.0 $\pm$ 0.9 | 44.1 $\pm$ 1.4 |
| ImageFruit | 43.4 $\pm$ 0.6 | 50.0 $\pm$ 0.8 |
| ImageMeow | 43.6 $\pm$ 0.7 | 52.0 $\pm$ 1.3 |
| ImageSquawk | 60.9 $\pm$ 1.0 | 71.8 $\pm$ 1.3 |
| ImageYellow | 62.6 $\pm$ 1.3 | 70.5 $\pm$ 1.5 |

## Install the environment

```bash
git clone https://github.com/MIV-XJTU/SPEED.git
cd SPEED
conda create -n speed python=3.8
conda activate speed
pip install -r requirements.txt
```
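Optionally, before launching the longer jobs below, a quick generic sanity check (not part of the repository) that the environment can see PyTorch and a GPU:

```python
# Quick environment check; generic PyTorch, not repo-specific.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```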

## Distilling

```bash
# CIFAR10 / CIFAR100: train expert trajectories, then distill (with ZCA whitening)
python buffer.py --dataset {CIFAR10/CIFAR100} --model ConvNet --train_epochs 50 --num_experts 100 --zca
python distill.py --dataset {CIFAR10/CIFAR100} --model ConvNet --zca

# TinyImageNet: use a depth-4 ConvNet
python buffer.py --dataset Tiny --model ConvNetD4 --train_epochs 50 --num_experts 100
python distill.py --dataset Tiny --model ConvNetD4

# ImageNet subsets (e.g., ImageNette): use a depth-5 ConvNet
python buffer.py --dataset ImageNet --subset imagenette --model ConvNetD5 --train_epochs 50 --num_experts 100
python distill.py --dataset ImageNet --subset imagenette --model ConvNetD5
```

More hyperparameter settings are listed at the end of `networks.py`. After distillation, you will obtain four components: `saet`, `scm`, `freenet`, and `syn_lr`.
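If you want to inspect or reuse these components outside the provided scripts, here is a minimal sketch, assuming they are saved as ordinary PyTorch files; the file names below are placeholders, so substitute the output paths from your own run:

```python
# Hypothetical inspection of the four distilled components; the paths are
# placeholders for the files produced by your own distillation run.
import torch

saet = torch.load("path/to/saet.pt")        # Spatial-Agnostic Epitomic Tokens
scm = torch.load("path/to/scm.pt")          # Sparse Coding Matrices
freenet = torch.load("path/to/freenet.pt")  # FReeNet weights
syn_lr = torch.load("path/to/syn_lr.pt")    # distilled synthetic learning rate

for name, obj in [("saet", saet), ("scm", scm), ("freenet", freenet), ("syn_lr", syn_lr)]:
    print(name, type(obj))
```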

## Evaluation

```bash
python eval.py --dataset {CIFAR10/CIFAR100} --model ConvNet --zca
python eval.py --dataset Tiny --model ConvNetD4
python eval.py --dataset ImageNet --subset imagenette --model ConvNetD5
```

Before evaluation, specify the paths to the four distilled components via `--saet_path`, `--scm_path`, `--freenet_path`, and `--syn_lr_path`.

## Acknowledgement

Our work is implemented based on the following projects. We sincerely appreciate their excellent open-source work!

## Citation

If any part of our paper or code helps your research, please consider citing us and giving our repository a star.

```bibtex
@inproceedings{wei2023sparse,
    title={Sparse Parameterization for Epitomic Dataset Distillation},
    author={Wei, Xing and Cao, Anjia and Yang, Funing and Ma, Zhiheng},
    booktitle={NeurIPS},
    year={2023}
}
```