Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks

Official PyTorch implementation of our ICCV'23 paper Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks (Kaixin Xu*, Zhe Wang*, Xue Geng, Jie Lin, Min Wu, Xiaoli Li, Weisi Lin).

Installation

  1. Clone this repository
git clone https://github.com/Akimoto-Cris/RD_PRUNE.git
cd RD_PRUNE
  2. Install NVIDIA DALI for the ImageNet experiments (an example pip command is shown below):
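
A typical pip installation looks like the following (an assumption, not taken from the repo; pick the nvidia-dali wheel matching your CUDA version, per the NVIDIA DALI installation guide):

pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110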

Experiments

Dataset Preparation

Set the desired dataset locations in tools/dataloaders.py (L9-10):

data_route = {'cifar': '/path/to/cifar',
              'imagenet': '/path/to/imagenet'}
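
To sanity-check the CIFAR path before running experiments, it can be loaded directly with torchvision (a standalone check assuming the standard torchvision CIFAR-10 layout; the repository's own dataloaders in tools/dataloaders.py may construct the datasets differently):

import torchvision

# Load (and download if missing) CIFAR-10 from the same root configured in
# tools/dataloaders.py, confirming that the path is usable.
train_set = torchvision.datasets.CIFAR10(root='/path/to/cifar',
                                         train=True, download=True)
print(f"CIFAR-10 train set: {len(train_set)} images")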

Iterative Pruning using calibration set

python iterate.py --dataset cifar --model resnet32_cifar --pruner rd --worst_case_curve --calib_size 1024
python iterate.py --dataset imagenet --model resnet50 --pruner rd --worst_case_curve --calib_size 256
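
Here --pruner rd selects the rate-distortion-based pruner and --calib_size sets the number of calibration samples (1024 for CIFAR, 256 for ImageNet above). For intuition, layer-adaptive weight pruning assigns each layer its own sparsity ratio before removing that layer's smallest-magnitude weights; below is a minimal PyTorch sketch of that idea, not the repository's implementation, with placeholder ratios standing in for the ones the rate-distortion optimization would produce:

import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.resnet50(weights=None)

# One sparsity ratio per prunable layer (layer-adaptive rather than uniform).
# Placeholder values for illustration only.
layer_ratios = {'layer1.0.conv1': 0.3, 'layer4.2.conv3': 0.7}

for name, module in model.named_modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)) and name in layer_ratios:
        # Zero out the smallest-magnitude weights within this layer only.
        prune.l1_unstructured(module, name='weight', amount=layer_ratios[name])
        prune.remove(module, 'weight')  # bake the pruning mask into the weights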

Zero-shot Iterative Pruning using random synthetic data

python iterate.py --dataset imagenet --model resnet50 --pruner rd --worst_case_curve --calib_size 256 --synth_data
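
With --synth_data the calibration inputs are random synthetic samples rather than real images, so no training data is required. A minimal sketch of what such a batch could look like (an illustrative assumption; the repository's actual synthetic-data generation and normalization may differ):

import torch

# Hypothetical synthetic calibration batch: Gaussian noise shaped like
# ImageNet inputs (256 samples of 3x224x224), clamped to a plausible
# normalized-pixel range.
calib_size = 256
synthetic_batch = torch.randn(calib_size, 3, 224, 224).clamp_(-2.5, 2.5)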

One-shot Pruning

One-shot pruning can be performed by modifying the per-iteration pruning amount in tools/modelloaders.py. For example, for one-shot pruning of ResNet-50 on ImageNet at 50% sparsity, change L101 of tools/modelloaders.py to:

        amount = 0.5

then run the following script:

python iterate.py --dataset imagenet --model resnet50 --pruner rd --worst_case_curve --calib_size 256 --iter_end 1
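
Here --iter_end 1 stops after a single pruning iteration, so the configured amount directly equals the final sparsity. In the iterative setting the sparsity compounds across iterations; the arithmetic below illustrates that relationship, under the assumption that each iteration removes the configured fraction of the weights remaining at that point:

# Compounding sparsity across pruning iterations (illustrative arithmetic).
amount, iters = 0.2, 3
final_sparsity = 1 - (1 - amount) ** iters   # 1 - 0.8**3 = 0.488
print(final_sparsity)

# One-shot case: a single iteration at amount = 0.5 yields 50% sparsity.
print(1 - (1 - 0.5) ** 1)                    # 0.5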

Others

The baseline methods are also included in tools/pruners.py; you can run them by setting the --pruner flag in the scripts above to the corresponding method (e.g. lamp/glob/unif/unifplus/erk).
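
For reference, 'unif' applies the same ratio to every layer, while 'glob' ranks all weights in a single global pool; the contrast between the two is sketched below in plain PyTorch (illustrative only; see tools/pruners.py for the repository's actual implementations of these and of lamp/unifplus/erk):

import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

# Uniform ('unif'): every conv layer loses the same fraction of its weights.
model_unif = torchvision.models.resnet50(weights=None)
for module in model_unif.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name='weight', amount=0.5)

# Global ('glob'): all conv weights compete in one global magnitude ranking,
# so individual layers end up with different sparsities.
model_glob = torchvision.models.resnet50(weights=None)
params = [(m, 'weight') for m in model_glob.modules() if isinstance(m, nn.Conv2d)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.5)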

Results

| Model | Dataset | Sparsity (%) | FLOPs Remained (%) | Top-1 | Dense Top-1 | Top-1 diff |
|---|---|---|---|---|---|---|
| ResNet-32 | CIFAR-10 | 95.5 | 3.59 | 90.83 ± 0.24 | 93.99 | -3.16 |
| VGG-16 | CIFAR-10 | 98.85 | 3.43 | 92.14 ± 0.18 | 91.71 | +0.43 |
| DenseNet-121 | CIFAR-10 | 98.85 | 2.02 | 87.7 ± 0.24 | 91.14 | -3.44 |
| EfficientNet-B0 | CIFAR-10 | 98.85 | 4.58 | 85.63 ± 0.31 | 87.95 | -2.32 |
| VGG-16-BN | ImageNet | 89.3 | 17.71 | 68.88 | 73.37 | -4.49 |
| ResNet-50 | ImageNet | 41 | 53.5 | 75.90 | 76.14 | -0.24 |

| Model | Dataset | Sparsity (%) | FLOPs Remained (%) | Top-1 | Dense Top-1 | Top-1 diff |
|---|---|---|---|---|---|---|
| ResNet-50 | ImageNet | 58 | 34.5 | 75.59 | 76.14 | -0.55 |

| Model | Dataset | Sparsity (%) | FLOPs Remained (%) | Top-1 | Dense Top-1 | Top-1 diff |
|---|---|---|---|---|---|---|
| ResNet-50 | ImageNet | 50 | 42.48 | 75.13 | 76.14 | -1.01 |

Acknowledgement

This implementation is built on top of the ICLR'21 LAMP repository. We thank the authors for the awesome repo.

Citation

If you find this implementation useful for your work, please consider citing the following paper:

@InProceedings{Xu_2023_ICCV,
    author    = {Xu, Kaixin and Wang, Zhe and Geng, Xue and Wu, Min and Li, Xiaoli and Lin, Weisi},
    title     = {Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {17447-17457}
}