
LAASP: Loss-Aware Automatic Selection of Filter Pruning Criteria for Deep Neural Network Acceleration


Deepak Ghimire, Kilho Lee, and Seong-heum Kim, “Loss-aware automatic selection of structured pruning criteria for deep neural network acceleration,” Image and Vision Computing, vol. 136, p. 104745, 2023.

Abstract: Structured pruning is a well-established technique for compressing neural networks, making them suitable for deployment on resource-limited edge devices. This study presents LAASP, an efficient loss-aware automatic selection of structured pruning criteria for slimming and accelerating deep neural networks. Most pruning methods employ a sequential three-stage process: (1) training, (2) pruning, and (3) fine-tuning, whereas the proposed technique adopts a pruning-while-training approach that eliminates the first stage and merges the second and third into a single cycle. At each pruning iteration, both the filter pruning criterion (magnitude- or similarity-based, drawn from a specified pool) and the layer to prune are selected automatically, guided by the network's overall loss on a small subset of the training data. To mitigate the abrupt accuracy drop caused by pruning, the network is retrained briefly after each reduction of a predefined number of floating-point operations (FLOPs). The pruning rate for each layer is determined automatically, eliminating the need to manually allocate fixed or variable rates per layer. Experiments with the VGGNet, ResNet, and MobileNet models on the CIFAR-10 and ImageNet benchmarks demonstrate the effectiveness of the method. In particular, ResNet56 and ResNet110 on CIFAR-10 significantly improve top-1 accuracy over state-of-the-art methods while reducing network FLOPs by 52%. Furthermore, pruning ResNet50 on ImageNet reduces FLOPs by more than 42% with a negligible 0.33% drop in top-5 accuracy.
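The selection rule can be pictured with a minimal, self-contained PyTorch sketch. The helper names, the three-criterion pool, and pruning simulated by zeroing filters are illustrative assumptions, not this repository's code (the actual implementation physically removes filters): each candidate (layer, criterion) pair is scored by the resulting loss on a small batch, and the lowest-loss pair is pruned next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Candidate criteria: each returns a per-filter importance score.
# Filters with the LOWEST importance are pruned first.
def l1_magnitude(w):
    return w.abs().flatten(1).sum(dim=1)

def l2_magnitude(w):
    return w.flatten(1).norm(p=2, dim=1)

def cosine_redundancy(w):
    f = F.normalize(w.flatten(1), dim=1)
    return -(f @ f.t()).sum(dim=1)  # highly similar (redundant) filters score low

CRITERIA = {"l1": l1_magnitude, "l2": l2_magnitude, "cos": cosine_redundancy}

@torch.no_grad()
def select_pruning_move(model, batch, loss_fn, ratio=0.1):
    """Score every (conv layer, criterion) pair by simulated pruning and
    return the pair whose removal gives the lowest loss on `batch`."""
    x, y = batch
    best = None  # (loss, layer_name, criterion_name, filter_indices)
    for name, m in model.named_modules():
        if not isinstance(m, nn.Conv2d):
            continue
        n_prune = max(1, int(m.out_channels * ratio))
        saved = m.weight.detach().clone()
        for crit_name, crit in CRITERIA.items():
            idxs = crit(m.weight.detach()).argsort()[:n_prune]
            m.weight[idxs] = 0.0            # simulate removing these filters
            loss = loss_fn(model(x), y).item()
            m.weight.copy_(saved)           # restore before the next candidate
            if best is None or loss < best[0]:
                best = (loss, name, crit_name, idxs.tolist())
    return best

# Example: choose the next (layer, criterion) move for a toy CNN on one batch.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
batch = (torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
print(select_pruning_move(model, batch, nn.CrossEntropyLoss()))
```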

@article{GHIMIRE2023104745,
    title = {Loss-aware automatic selection of structured pruning criteria for deep neural network acceleration},
    author = {Deepak Ghimire and Kilho Lee and Seong-heum Kim},
    journal = {Image and Vision Computing},
    volume = {136},
    pages = {104745},
    year = {2023},
    issn = {0262-8856},
    doi = {10.1016/j.imavis.2023.104745},
    url = {https://www.sciencedirect.com/science/article/pii/S0262885623001191},
    keywords = {Deep neural networks, Structured pruning, Pruning criteria},
}

Note: Filter removal is based on VainF/Torch-Pruning.
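For reference, here is a minimal usage sketch of the Torch-Pruning dependency-graph API, roughly following that project's README. The exact API surface varies across Torch-Pruning versions, so treat this as illustrative rather than as this repository's exact calls:

```python
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)

# The dependency graph keeps coupled layers consistent: pruning a conv's
# output filters also prunes its BatchNorm and the next layer's input channels.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Remove output filters 0 and 1 of the first conv, plus all dependent channels.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[0, 1])
if DG.check_pruning_group(group):  # guard against emptying a layer entirely
    group.prune()
```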


Table of Contents

- Requirements
- Models
- VGGNet on CIFAR-10
- ResNet on CIFAR-10
- ResNet on ImageNet

Models

| Model | Dataset | Baseline Top-1 Acc. (%) | Pruned Top-1 Acc. (%) | Top-1 Acc. Drop (%) | Pruned Top-5 Acc. (%) | FLOPs Reduction (%) |
|---|---|---|---|---|---|---|
| VGG16 | CIFAR-10 | 93.79 ± 0.23 | 93.90 ± 0.16 | -0.11 | – | 34.6 |
| VGG16 | CIFAR-10 | 93.79 ± 0.23 | 93.79 ± 0.11 | 0.00 | – | 60.5 |
| ResNet32 | CIFAR-10 | 93.12 ± 0.04 | 92.64 ± 0.09 | 0.48 | – | 53.3 |
| ResNet56 | CIFAR-10 | 93.61 ± 0.11 | 93.49 ± 0.08 | 0.12 | – | 52.6 |
| ResNet110 | CIFAR-10 | 94.41 ± 0.07 | 94.17 ± 0.16 | 0.24 | – | 52.5 |
| ResNet110 | CIFAR-10 | 94.41 ± 0.07 | 93.58 ± 0.21 | 0.83 | – | 58.5 |
| ResNet18 | ImageNet | 70.58 | 68.66 | 1.92 | 88.50 | 42.2 |
| ResNet18 | ImageNet | 70.58 | 68.12 | 2.46 | 88.07 | 45.4 |
| ResNet34 | ImageNet | 73.90 | 72.65 | 1.25 | 90.98 | 41.4 |
| ResNet34 | ImageNet | 73.90 | 72.37 | 1.53 | 90.80 | 45.4 |
| ResNet50 | ImageNet | 76.48 | 75.85 | 0.63 | 92.81 | 42.3 |
| ResNet50 | ImageNet | 76.48 | 75.44 | 1.04 | 92.59 | 53.9 |
| MobileNetV2 | ImageNet | 71.79 | 71.00 | 0.79 | 89.86 | 30.0 |
| MobileNetV2 | ImageNet | 71.79 | 68.45 | 3.34 | 88.40 | 54.5 |

VGGNet on CIFAR-10

Training-Pruning

sh ./scripts/vgg16_cifar10/run_vgg16_pruning.sh

Evaluation

sh ./scripts/vgg16_cifar10/run_vgg16_eval.sh

ResNet on CIFAR-10

Training-Pruning

sh ./scripts/resnet_cifar10/run_resnet_cifar10_pruning.sh

Evaluation

sh ./scripts/resnet_cifar10/run_resnet_cifar10_eval.sh

ResNet on ImageNet

Prepare ImageNet dataset
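The scripts presumably expect the standard torchvision ImageFolder layout, i.e. `train/` and `val/` directories each containing one subdirectory per class (an assumption based on common PyTorch ImageNet pipelines):

```python
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Expected layout, one folder per WordNet synset ID:
#   imagenet/train/n01440764/*.JPEG
#   imagenet/val/n01440764/*.JPEG
train_set = datasets.ImageFolder("imagenet/train", transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
]))
val_set = datasets.ImageFolder("imagenet/val", transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
]))
```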

Training-Pruning

sh ./scripts/resnet_imagenet/run_resnet_imagenet_pruning.sh

Evaluation

sh ./scripts/resnet_imagenet/run_resnet_imagenet_eval.sh
