PGMPF
Prior Gradient Mask Guided Pruning-Aware Fine-Tuning
This repository is the PyTorch implementation of [Prior Gradient Mask Guided Pruning-Aware Fine-Tuning](No Link yet) at AAAI 2022.
ImageNet Experiments
Prune the pre-trained ResNet-34 model.
Arguments
lr
- The learning rate is set to 0.1 by default and is linearly scaled with the batch size (a short illustration follows this list); the commands below set --lr explicitly.
rate
- The compression rate per layer, equivalent to 1 - pruning rate.
b
- The batch size; batchsize=768 = 3 * 256, split among 3 GPUs.
cos
- Whether to use the cosine annealing learning rate strategy. We do not use it in the paper (cos=0).
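As a rough illustration of the linear scaling rule mentioned above (the reference batch size of 256 is an assumption, and the fine-tuning commands below override the learning rate explicitly, so this is only a guideline):

```python
# Illustrative linear learning-rate scaling; the reference batch size of 256 is an assumption.
def scaled_lr(batch_size, base_lr=0.1, reference_batch_size=256):
    return base_lr * batch_size / reference_batch_size

print(scaled_lr(768))  # 0.1 * 768 / 256 = 0.3
```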
python pruning_train_gd_prune_bn.py -a resnet34 \
--save_dir ./logs/resnet34-rate-0.6 --rate 0.6 --layer_begin 0 --layer_end 105 --layer_inter 3 \
--use_pretrain --lr 0.02 --epochs 100 --cos 0 -b 768
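To give a feel for what pruning-aware fine-tuning with a gradient mask looks like, here is a minimal PyTorch sketch. It is not the actual PGMPF implementation: the L2-norm filter criterion and the helper names (compute_filter_mask, apply_gradient_masks) are illustrative assumptions.

```python
# Minimal sketch: fine-tuning with a fixed gradient mask over pruned filters.
import torch
import torch.nn as nn

def compute_filter_mask(conv: nn.Conv2d, rate: float) -> torch.Tensor:
    """Keep the `rate` fraction of filters with the largest L2 norm (assumed criterion)."""
    norms = conv.weight.detach().flatten(1).norm(p=2, dim=1)  # one norm per output filter
    n_keep = int(rate * norms.numel())
    keep = torch.zeros_like(norms)
    keep[norms.topk(n_keep).indices] = 1.0
    return keep.view(-1, 1, 1, 1)                             # broadcast over (out, in, kH, kW)

def apply_gradient_masks(model: nn.Module, masks: dict) -> None:
    """Zero the gradients of pruned filters so fine-tuning never updates them."""
    for name, module in model.named_modules():
        if name in masks and getattr(module, "weight", None) is not None \
                and module.weight.grad is not None:
            module.weight.grad.mul_(masks[name])

# Usage inside a standard training step (sketch):
#   loss.backward()
#   apply_gradient_masks(model, masks)  # masks computed once from the pre-trained weights
#   optimizer.step()
```

The key point of such a mask is that pruned filters receive zero gradient, so fine-tuning never revives them.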
Prune the pre-trained ResNet-50 model. batchsize=192 = 3 * 64, split among 3 GPUs.
python pruning_train_gd_prune_bn.py -a resnet50 \
--save_dir ./logs/resnet50-rate-0.6 --rate 0.6 --layer_begin 0 --layer_end 156 --layer_inter 3 \
--use_pretrain --lr 0.01 --epochs 100 --cos 0 -b 192
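The --layer_begin/--layer_end/--layer_inter flags follow the indexing convention of Soft Filter Pruning's code, where model parameters are enumerated in order. The sketch below is an assumption about that convention rather than an excerpt of this repository; it shows how the ResNet-34 setting above picks out convolution weights:

```python
# Sketch: interpreting --layer_begin 0 --layer_end 105 --layer_inter 3 for resnet34
# (assumed Soft-Filter-Pruning-style indexing over the ordered parameter list).
import torchvision

model = torchvision.models.resnet34()
prune_indices = set(range(0, 105 + 1, 3))           # every 3rd parameter index in [0, 105]

for idx, (name, param) in enumerate(model.named_parameters()):
    if idx in prune_indices and param.dim() == 4:    # 4-D tensors are conv weights
        n_filters = param.size(0)
        n_keep = int(0.6 * n_filters)                # --rate 0.6 keeps 60% of filters per layer
        print(f"{idx:3d} {name}: keep {n_keep}/{n_filters} filters")
```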
How to convert the pruned models into small ones
In accordance with the implementation of Soft Filter Pruning, sh scripts/get_small.sh can be used to convert the pruned ResNet-18/34/50 models into small ones.
The conversion of each model requires case-by-case processing of the batch normalization layers and downsampling layers.
Note that we fixed some errors in the original implementation utils/get_small_model.py of Soft Filter Pruning for ResNet-18/34 that were caused by the downsampling layer.
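For intuition, here is a minimal sketch of what shrinking one pruned Conv+BN pair involves, assuming the indices of the kept output and input channels are already known; the actual utils/get_small_model.py handles the batch normalization and downsampling layers case by case as noted above.

```python
# Sketch: build a smaller Conv+BN pair that keeps only the selected channels
# (keep_idx / in_keep_idx are assumed to be known index tensors).
import torch
import torch.nn as nn

def shrink_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d,
                   keep_idx: torch.Tensor, in_keep_idx: torch.Tensor):
    new_conv = nn.Conv2d(len(in_keep_idx), len(keep_idx), conv.kernel_size,
                         conv.stride, conv.padding, bias=False)
    new_conv.weight.data = conv.weight.data[keep_idx][:, in_keep_idx]  # slice out/in channels
    new_bn = nn.BatchNorm2d(len(keep_idx))
    new_bn.weight.data = bn.weight.data[keep_idx]
    new_bn.bias.data = bn.bias.data[keep_idx]
    new_bn.running_mean = bn.running_mean[keep_idx]
    new_bn.running_var = bn.running_var[keep_idx]
    return new_conv, new_bn
```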
In addition, utils/get_small_model.py provides code for measuring the actual running time of the small models on GPU/CPU.
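A minimal sketch of how such a latency measurement is typically done in PyTorch (the function name, input size, and iteration counts are assumptions; see utils/get_small_model.py for the actual script):

```python
# Sketch: average forward-pass latency; torch.cuda.synchronize is needed because
# CUDA kernels are launched asynchronously.
import time
import torch

@torch.no_grad()
def measure_latency(model, device="cuda", batch_size=1, warmup=10, iters=100):
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    for _ in range(warmup):                        # warm-up runs are not timed
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters   # average seconds per forward pass
```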