ESNAC: Embedding Space for Neural Architecture Compression

This is the PyTorch implementation of our paper:

Learnable Embedding Space for Efficient Neural Architecture Compression.<br/>Shengcao Cao*, Xiaofang Wang*, and Kris M. Kitani. ICLR 2019. [[OpenReview](https://openreview.net/forum?id=S1xLN3C9YX)] [arXiv].

Requirements

We recommend using this repository with Anaconda Python 3.7 and the following libraries:

Usage
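
Assuming compression.py (the script referenced in the next subsection) is the main entry point, a compression run can be launched with `python compression.py`.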

Random Seed and Reproducibility

To ensure reproducibility, we provide the compression results on CIFAR-100 with the random seed fixed to 127; this seed value was picked at random. You can try other seed values, or comment out the call to seed_everything() in compression.py, to obtain different results (see the sketch of such a seeding routine at the end of this subsection). The results with seed 127 are:

| Teacher    | Accuracy | #Params | Ratio  | Times | f(x)   |
|------------|----------|---------|--------|-------|--------|
| VGG-19     | 71.64%   | 3.07M   | 0.8470 | 6.54× | 0.9492 |
| ResNet-18  | 71.91%   | 1.26M   | 0.8876 | 8.90× | 0.9024 |
| ResNet-34  | 75.47%   | 2.85M   | 0.8664 | 7.48× | 0.9417 |
| ShuffleNet | 68.17%   | 0.18M   | 0.8298 | 5.88× | 0.9305 |
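
For reference, the Ratio and Times columns are consistent with the definitions sketched below (an assumption, not taken verbatim from the paper): Ratio is the fraction of the teacher's parameters removed, and Times is the overall size reduction factor. For example, ResNet-18 has roughly 11.2M parameters, giving 1 − 1.26/11.2 ≈ 0.8876 and 11.2/1.26 ≈ 8.9×.

```python
def compression_stats(teacher_params, student_params):
    """Assumed definitions of the Ratio and Times columns above.

    ratio: fraction of the teacher's parameters removed.
    times: factor by which the model shrinks.
    """
    ratio = 1.0 - student_params / teacher_params
    times = teacher_params / student_params
    return ratio, times

# Example: ResNet-18 teacher (~11.2M params) compressed to a 1.26M-param student.
ratio, times = compression_stats(11.2e6, 1.26e6)
print(f'ratio = {ratio:.4f}, times = {times:.2f}x')  # ratio ≈ 0.8875, times ≈ 8.89x
```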

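A seeding routine such as the seed_everything() mentioned above typically fixes every random number generator the run touches. The sketch below is an assumption about what it does, not the repository's exact implementation:

```python
import os
import random

import numpy as np
import torch

def seed_everything(seed=127):
    # Assumed sketch of seed_everything() in compression.py; the actual
    # implementation in the repository may differ.
    random.seed(seed)                          # Python's built-in RNG
    os.environ['PYTHONHASHSEED'] = str(seed)   # hash-based operations
    np.random.seed(seed)                       # NumPy RNG
    torch.manual_seed(seed)                    # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)           # PyTorch GPU RNGs
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```
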
Citation

If you find our work useful in your research, please consider citing our paper Learnable Embedding Space for Efficient Neural Architecture Compression:

```bibtex
@inproceedings{
  cao2018learnable,
  title={Learnable Embedding Space for Efficient Neural Architecture Compression},
  author={Shengcao Cao and Xiaofang Wang and Kris M. Kitani},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=S1xLN3C9YX},
}
```