# torchprune
Main contributors of this code base: Lucas Liebenwein, Cenk Baykal.
Please check individual paper folders for authors of each paper.
<p align="center"> <img src="./misc/imgs/pruning_pipeline.png" width="100%"> </p>

## Papers
This repository contains code to reproduce the results from the following papers:
Paper | Venue | Title & Link |
---|---|---|
Node | NeurIPS 2021 | Sparse Flows: Pruning Continuous-depth Models |
ALDS | NeurIPS 2021 | Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition |
Lost | MLSys 2021 | Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy |
PFP | ICLR 2020 | Provable Filter Pruning for Efficient Neural Networks |
SiPP | SIAM 2022 | SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks |
## Packages
In addition, the repo contains two stand-alone Python packages that can be used for any desired pruning experiment:
Packages | Location | Description |
---|---|---|
torchprune | ./src/torchprune | This package can be used to run any of the implemented pruning algorithms. It also contains utilities for pre-defined networks (or your own network) and for standard datasets; a short usage sketch follows below the table. |
experiment | ./src/experiment | This package can be used to run pruning experiments and compare multiple pruning methods for different prune ratios. Each experiment is configured using a .yaml configuration file. |
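For a rough sense of how the torchprune package might be used, here is a minimal sketch. The names NetHandle and PFPNet, the compress(keep_ratio=...) call, and the data-loader setup are assumptions for illustration only; see src/torchprune/README.md for the documented API.

```python
# Minimal sketch only: NetHandle, PFPNet, and compress(keep_ratio=...) are
# assumed names for illustration; consult src/torchprune/README.md for the
# documented torchprune API.
import torch
import torchvision
import torchprune as tp

# wrap a standard torchvision network so the pruning utilities can handle it
net = tp.util.net.NetHandle(torchvision.models.resnet18(), "resnet18")

# small data loader used by data-informed pruning methods
dataset = torchvision.datasets.FakeData(
    size=128, transform=torchvision.transforms.ToTensor()
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

# pick a pruning method (here: filter pruning) and keep 50% of the parameters
loss = torch.nn.CrossEntropyLoss()
net_pruned = tp.PFPNet(net, loader, loss)
net_pruned.compress(keep_ratio=0.5)
```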
## Paper Reproducibility
The code for each paper is implemented in the respective packages. In addition, each paper has a separate folder that contains additional information about the paper as well as the scripts and parameter configurations needed to reproduce its exact results (a sketch of launching such a run follows below the table).
Paper | Location |
---|---|
Node | paper/node |
ALDS | paper/alds |
Lost | paper/lost |
PFP | paper/pfp |
SiPP | paper/sipp |
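As a sketch of how such a reproduction run might be launched, assuming the experiment package exposes an experiment.main entry point and using a hypothetical configuration path (the per-paper READMEs list the exact commands):

```python
# Sketch only: "experiment.main" as entry point and the config path below are
# assumptions; see the per-paper READMEs for the exact reproduction commands.
import subprocess

config = "paper/alds/param/example_config.yaml"  # hypothetical .yaml config
subprocess.run(["python", "-m", "experiment.main", config], check=True)
```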
## Setup
We provide three ways to install the codebase:
### 1. GitHub Repo

Clone the GitHub repo:

```bash
git clone git@github.com:lucaslie/torchprune.git
# (or your favorite way to pull a repo)
```
We recommend installing the packages in a separate conda environment. To create and activate a new conda environment, run:

```bash
conda create -n prune python=3.8 pip
conda activate prune
```
To install all required dependencies and both packages, run:
```bash
pip install -r misc/requirements.txt
```
Note that this will also install pre-commit hooks for clean commits :-)
### 2. Pip Installation
To separately install each package with minimal dependencies without cloning the repo manually, run the following commands:
# "torchprune" package
pip install git+https://github.com/lucaslie/torchprune/#subdirectory=src/torchprune
# "experiment" package
pip install git+https://github.com/lucaslie/torchprune/#subdirectory=src/experiment
Note that the experiment package does not automatically install the torchprune package.
### 3. Docker Image
You can simply pull the Docker image from our Docker Hub:

```bash
docker pull liebenwein/torchprune
```
You can run it interactively with:

```bash
docker run -it liebenwein/torchprune bash
```
For your reference, you can find the Dockerfile here.
## More Information and Usage
Check out the following READMEs in the sub-directories to find out more about using the codebase.
READMEs | More Information |
---|---|
src/torchprune/README.md | more details on how to prune neural networks, how to use and set up the data sets, how to implement custom pruning methods, and how to add your own data sets and networks. |
src/experiment/README.md | more details on how to configure and run your own experiments, and more information on how to reproduce the results. |
paper/node/README.md | more information on the Node paper. |
paper/alds/README.md | more information on the ALDS paper. |
paper/lost/README.md | more information on the Lost paper. |
paper/pfp/README.md | more information on the PFP paper. |
paper/sipp/README.md | more information on the SiPP paper. |
## Citations
Please cite the respective papers when using our work.
### Sparse Flows: Pruning Continuous-depth Models

```bibtex
@article{liebenwein2021sparse,
  title={Sparse flows: Pruning continuous-depth models},
  author={Liebenwein, Lucas and Hasani, Ramin and Amini, Alexander and Rus, Daniela},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={22628--22642},
  year={2021}
}
```
### Towards Determining the Optimal Layer-wise Decomposition

```bibtex
@inproceedings{liebenwein2021alds,
  author = {Lucas Liebenwein and Alaa Maalouf and Dan Feldman and Daniela Rus},
  booktitle = {Advances in Neural Information Processing Systems},
  title = {Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition},
  url = {https://arxiv.org/abs/2107.11442},
  volume = {34},
  year = {2021}
}
```
### Lost in Pruning

```bibtex
@article{liebenwein2021lost,
  title={Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy},
  author={Liebenwein, Lucas and Baykal, Cenk and Carter, Brandon and Gifford, David and Rus, Daniela},
  journal={Proceedings of Machine Learning and Systems},
  volume={3},
  year={2021}
}
```
### Provable Filter Pruning

```bibtex
@inproceedings{liebenwein2020provable,
  title={Provable Filter Pruning for Efficient Neural Networks},
  author={Lucas Liebenwein and Cenk Baykal and Harry Lang and Dan Feldman and Daniela Rus},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=BJxkOlSYDH}
}
```
### SiPPing Neural Networks (Weight Pruning)

```bibtex
@article{baykal2022sensitivity,
  title={Sensitivity-informed provable pruning of neural networks},
  author={Baykal, Cenk and Liebenwein, Lucas and Gilitschenski, Igor and Feldman, Dan and Rus, Daniela},
  journal={SIAM Journal on Mathematics of Data Science},
  volume={4},
  number={1},
  pages={26--45},
  year={2022},
  publisher={SIAM}
}
```