
<center> <img src="imgs/banner1.png"></center>

NOTE: This version is a minimal upgrade to PyTorch 2.3, validated only for one specific use case. The TensorBoard logger backend was replaced to eliminate the TensorFlow dependency.

<div align="center"> <h3> <a href="https://github.com/IntelLabs/distiller/wiki"> Wiki and tutorials </a> <span> | </span> <a href="https://intellabs.github.io/distiller/index.html"> Documentation </a> <span> | </span> <a href="#getting-started"> Getting Started </a> <span> | </span> <a href="https://intellabs.github.io/distiller/algo_pruning.html"> Algorithms </a> <span> | </span> <a href="https://intellabs.github.io/distiller/design.html"> Design </a> <span> | </span> <a href="https://github.com/IntelLabs/distiller/wiki/Frequently-Asked-Questions-(FAQ)"> FAQ </a> </h3> </div>

Distiller is an open-source Python package for neural network compression research.

Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
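
At the heart of Distiller is the compression scheduler, which reads a compression schedule (typically a YAML file) and invokes pruning, regularization, or quantization policies at well-defined points in the training loop. The sketch below illustrates that training-loop integration with a toy model and random data; the schedule path is a placeholder, and the exact callback signatures may differ slightly between Distiller versions.

```python
import torch
import torch.nn as nn
import distiller

# Toy model, optimizer and loss -- stand-ins for your real training setup.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Build a CompressionScheduler from a YAML schedule ('schedule.yaml' is a
# placeholder -- see the examples directory for real pruning/regularization recipes).
scheduler = distiller.file_config(model, optimizer, 'schedule.yaml')

steps_per_epoch = 10
for epoch in range(2):
    scheduler.on_epoch_begin(epoch)
    for step in range(steps_per_epoch):
        inputs = torch.randn(8, 3, 32, 32)       # random stand-in for a data loader
        targets = torch.randint(0, 10, (8,))

        scheduler.on_minibatch_begin(epoch, step, steps_per_epoch, optimizer)
        loss = criterion(model(inputs), targets)
        # Gives the scheduler a chance to add regularization terms to the loss.
        loss = scheduler.before_backward_pass(epoch, step, steps_per_epoch, loss, optimizer)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.on_minibatch_end(epoch, step, steps_per_epoch, optimizer)
    scheduler.on_epoch_end(epoch)
```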

Table of Contents

- Highlighted features
- Installation
- Required PyTorch Version
- Getting Started
- Basic Usage Examples
- Explore the sample Jupyter notebooks
- Running the tests
- Generating the HTML documentation site
- Versioning
- License
- Community
- Acknowledgments
- Built With
- Disclaimer

Installation

These instructions will help get Distiller up and running on your local machine.

<details><summary><b>1. Clone Distiller</b></summary> <p>

Clone the Distiller code repository from github:

$ git clone https://github.com/IntelLabs/distiller.git

The rest of the documentation assumes you have cloned the repository into a directory called distiller. <br>

</p> </details> <details><summary><b>2. Create a Python virtual environment</b></summary> <p>

We recommend using a Python virtual environment, but that, of course, is up to you. There is nothing special about using Distiller in a virtual environment; we provide these instructions for completeness.<br> Before creating the virtual environment, make sure you are located in the distiller directory. After creating the environment, you should see a directory called distiller/env. <br>

Using virtualenv

If you don't have virtualenv installed, you can find the installation instructions here.

To create the environment, execute:

$ python3 -m virtualenv env

This creates a subdirectory named env, where the Python virtual environment is stored. Activate the environment (see below) to make it your shell's default Python environment.

Using venv

If you prefer to use venv, then begin by installing it:

$ sudo apt-get install python3-venv

Then create the environment:

$ python3 -m venv env

As with virtualenv, this creates a directory called distiller/env.<br>

Activate the environment

The environment activation and deactivation commands for venv and virtualenv are the same.<br> NOTE: Make sure to activate the environment before proceeding with the installation of the dependency packages:<br>

$ source env/bin/activate
</p> </details> <details><summary><b>3. Install the Distiller package</b></summary> <p>

Finally, install the Distiller package and its dependencies using pip3:

$ cd distiller
$ pip3 install -e .

This installs Distiller in "development mode", meaning any changes made in the code are reflected in the environment without re-running the install command (so no need to re-install after pulling changes from the Git repository).
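
As a quick, optional sanity check, you can verify that the package is importable from the environment's Python interpreter:

$ python3 -c "import distiller"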

Notes:

</p> </details>

Required PyTorch Version

Distiller is tested using the default installation of PyTorch 1.3.1, which uses CUDA 10.1. We use TorchVision version 0.4.2. These are included in Distiller's requirements.txt and will be automatically installed when installing the Distiller package as listed above.

If you do not use CUDA 10.1 in your environment, please refer to the PyTorch website to install the compatible builds of PyTorch 1.3.1 and torchvision 0.4.2.
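
For example, one common way to pin these versions with pip is shown below; the exact wheels to use depend on your CUDA version, so treat this as a sketch and confirm the right command on the PyTorch site:

$ pip3 install torch==1.3.1 torchvision==0.4.2 -f https://download.pytorch.org/whl/torch_stable.html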

Getting Started

Distiller comes with sample applications and tutorials covering a range of model types:

| Model Type | Sparsity | Post-training quantization | Quantization-aware training | Auto Compression (AMC) | Knowledge Distillation |
|------------|----------|----------------------------|-----------------------------|------------------------|------------------------|
| Image classification | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Word-level language model | :white_check_mark: | :white_check_mark: | | | |
| Translation (GNMT) | | :white_check_mark: | | | |
| Recommendation System (NCF) | | :white_check_mark: | | | |
| Object Detection | :white_check_mark: | | | | |

Head to the examples directory for more details.

Other resources to refer to, beyond the examples:

Basic Usage Examples

The following are simple examples using Distiller's image classification sample, showing some of Distiller's capabilities.

<details><summary><b>Example: Simple training-only session (no compression)</b></summary> <p>

The following will invoke training-only (no compression) of a network named 'simplenet' on the CIFAR10 dataset. This is roughly based on TorchVision's sample ImageNet training application, so it should look familiar if you've used that application. In this example we don't invoke any compression mechanisms: we just train, because plain training is also an essential part of fine-tuning after pruning.<br>
Note that the first time you execute this command, the CIFAR10 dataset will be downloaded to your machine, which may take a bit of time - please let the download process proceed to completion.

The path to the CIFAR10 dataset is arbitrary, but in our examples we place the datasets in the same directory level as distiller (i.e. ../../../data.cifar10).

First, change to the sample directory, then invoke the application:

$ cd distiller/examples/classifier_compression
$ python3 compress_classifier.py --arch simplenet_cifar ../../../data.cifar10 -p 30 -j=1 --lr=0.01

You can use a TensorBoard backend to view the training progress (in the diagram below we show a couple of training sessions with different LR values). For compression sessions, we've added tracing of activation and parameter sparsity levels, and regularization loss.
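
For example, assuming the sample application wrote its TensorBoard event files under a logs directory (the exact output location depends on your configuration), you can point TensorBoard at it with:

$ tensorboard --logdir=logs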

<center> <img src="imgs/simplenet_training.png"></center> </p> </details> <details><summary><b>Example: Getting parameter statistics of a sparsified model</b></summary> <p>

We've included in the git repository a few checkpoints of a ResNet20 model that we've trained with 32-bit floats. Let's load the checkpoint of a model that we've trained with channel-wise Group Lasso regularization.<br> With the following command-line arguments, the sample application loads the model (--resume) and prints statistics about the model weights (--summary=sparsity). This is useful if you want to load a previously pruned model, to examine the weights sparsity statistics, for example. Note that when you resume a stored checkpoint, you still need to tell the application which network architecture the checkpoint uses (-a=resnet20_cifar):

$ python3 compress_classifier.py --resume=../ssl/checkpoints/checkpoint_trained_ch_regularized_dense.pth.tar -a=resnet20_cifar ../../../data.cifar10 --summary=sparsity
<center> <img src="imgs/ch_sparsity_stats.png"></center>

You should see a text table detailing the various sparsities of the parameter tensors. The first column is the parameter name, followed by its shape, the number of non-zero elements (NNZ) in the dense model, and in the sparse model. The next set of columns show the column-wise, row-wise, channel-wise, kernel-wise, filter-wise and element-wise sparsities. <br> Wrapping it up are the standard-deviation, mean, and mean of absolute values of the elements.
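
If you want similar numbers programmatically, the element-wise sparsity column can be reproduced with plain PyTorch. The following is a small sketch (not Distiller's own summary code) that loads a checkpoint and reports zero-valued weights per parameter tensor; the checkpoint path is a placeholder:

```python
import torch

# Placeholder path -- point this at any Distiller checkpoint file.
checkpoint = torch.load('checkpoint.pth.tar', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)

for name, param in state_dict.items():
    if param.dim() < 2:  # skip biases, batch-norm parameters, etc.
        continue
    nnz = int((param != 0).sum())
    total = param.numel()
    sparsity = 100.0 * (1.0 - nnz / total)
    print(f'{name:45s} shape={tuple(param.shape)} NNZ={nnz:8d} sparsity={sparsity:5.2f}%')
```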

In the Compression Insights notebook we use matplotlib to plot a bar chart of this summary, which indeed shows a rather unimpressive footprint compression.

<center> <img src="imgs/ch_sparsity_stats_barchart.png"></center>

Although the memory footprint compression is very low, this model actually saves 26.6% of the MACs compute.

$ python3 compress_classifier.py --resume=../ssl/checkpoints/checkpoint_trained_channel_regularized_resnet20_finetuned.pth.tar -a=resnet20_cifar ../../../data.cifar10 --summary=compute
<center> <img src="imgs/ch_compute_stats.png"></center> </p> </details> <details><summary><b>Example: Post-training quantization</b></summary> <p>

This example performs 8-bit quantization of ResNet20 for CIFAR10. We've included in the git repository the checkpoint of a ResNet20 model that we've trained with 32-bit floats, so we'll take this model and quantize it:

$ python3 compress_classifier.py -a resnet20_cifar ../../../data.cifar10 --resume ../ssl/checkpoints/checkpoint_trained_dense.pth.tar --quantize-eval --evaluate

The command-line above will save a checkpoint named quantized_checkpoint.pth.tar containing the quantized model parameters. See more examples here.
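
The same kind of post-training quantization can also be driven from Python. Below is a minimal sketch assuming the PostTrainLinearQuantizer API in distiller.quantization with its default 8-bit settings; the create_model call and the prepare_model arguments reflect one Distiller version and may need adjusting in yours.

```python
import torch
from distiller.models import create_model
from distiller.quantization import PostTrainLinearQuantizer

# Create an FP32 ResNet20 for CIFAR10 (in practice you would load the trained
# checkpoint into it, as the command-line example above does).
model = create_model(False, 'cifar10', 'resnet20_cifar')
model.eval()

# Default settings: 8-bit weights and activations.
quantizer = PostTrainLinearQuantizer(model)

# A dummy input with the CIFAR10 shape (3x32x32) lets the quantizer analyze the
# model graph before replacing modules with quantized wrappers.
quantizer.prepare_model(torch.randn(1, 3, 32, 32))

# quantizer.model can now be evaluated like any other PyTorch model.
```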

</p> </details>

Explore the sample Jupyter notebooks

The set of notebooks that come with Distiller is described here, along with the steps for installing the Jupyter notebook server.<br> After installing and running the server, take a look at the notebook covering pruning sensitivity analysis.

Sensitivity analysis is a long process and this notebook loads CSV files that are the output of several sessions of sensitivity analysis.
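
If you want to regenerate such CSV files yourself, the sample application has a sensitivity-analysis mode; a command along the following lines runs filter-wise pruning sensitivity analysis on the dense ResNet20 checkpoint used above (the --sense flag also accepts element and channel):

$ python3 compress_classifier.py -a resnet20_cifar ../../../data.cifar10 --resume ../ssl/checkpoints/checkpoint_trained_dense.pth.tar --sense=filter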

<center> <img src="imgs/resnet18-sensitivity.png"></center>

Running the tests

We are currently light on tests, and this is an area where contributions will be much appreciated.<br> There are two types of tests: system tests and unit tests. To invoke the unit tests:

$ cd distiller/tests
$ pytest

We use CIFAR10 for the system tests, because its size makes for quicker tests. To invoke the system tests, you need to provide a path to the CIFAR10 dataset which you've already downloaded. Alternatively, you may invoke full_flow_tests.py without specifying the location of the CIFAR10 dataset and let the test download the dataset (on the first invocation only). Note that --cifar10-path defaults to the current directory. <br> The system tests are not short, and take even longer if the test needs to download the dataset.

$ cd distiller/tests
$ python full_flow_tests.py --cifar10-path=<some_path>

The script exits with status 0 if all tests are successful, or status 1 otherwise.

Generating the HTML documentation site

Install mkdocs and the required packages by executing:

$ pip3 install -r doc-requirements.txt

To build the project documentation run:

$ cd distiller/docs-src
$ mkdocs build --clean

This will create a folder named 'site' which contains the documentation website. Open distiller/docs/site/index.html to view the documentation home page.
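
Alternatively, while editing the documentation you can use mkdocs' built-in development server, which rebuilds and serves the site locally (by default at http://127.0.0.1:8000):

$ cd distiller/docs-src
$ mkdocs serve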

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

License

This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.

Community

<details><summary><b>Github projects using Distiller</b></summary> <p> </p> </details> <details><summary><b>Research papers citing Distiller</b></summary> <p> </p> </details>

If you used Distiller for your work, please use the following citation:

@article{nzmora2019distiller,
  author       = {Neta Zmora and
                  Guy Jacob and
                  Lev Zlotnik and
                  Bar Elharar and
                  Gal Novik},
  title        = {Neural Network Distiller: A Python Package For DNN Compression Research},
  month        = {October},
  year         = {2019},
  url          = {https://arxiv.org/abs/1910.12232}
}

Acknowledgments

Any published work is built on top of the work of many other people, and the credit belongs to too many people to list here.

Built With

Disclaimer

Distiller is released as a reference code for research purposes. It is not an official Intel product, and the level of quality and support may not be as expected from an official product. Additional algorithms and features are planned to be added to the library. Feedback and contributions from the open source and research communities are more than welcome.