☕ CoFiPruning: Structured Pruning Learns Compact and Accurate Models

This repository contains the code and pruned models for our ACL'22 paper Structured Pruning Learns Compact and Accurate Models. Our talk slides can be found here. Numerical results in the paper can be found here.

Overview

We propose CoFiPruning, a task-specific, structured pruning approach (Coarse- and Fine-grained Pruning), and show that structured pruning can yield highly compact subnetworks with large speedups and accuracy competitive with distillation approaches, while requiring much less computation. Our key insight is to jointly prune coarse-grained units (e.g., entire self-attention or feed-forward layers) and fine-grained units (e.g., heads, hidden dimensions). Unlike existing work, our approach controls the pruning decision for every single parameter through multiple masks of different granularity. This is the key to large compression: it allows the greatest flexibility in pruned structures and eases optimization compared with pruning only small units. We also devise a layerwise distillation strategy to transfer knowledge from the unpruned to the pruned model during optimization.
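To make the multi-granularity idea concrete, below is a minimal sketch (not the code in this repository; the names z_layer, z_head, and z_hidden and the hard 0/1 sampling are illustrative assumptions) of how masks at different granularity jointly gate the same self-attention output, so that dropping a whole layer, a head, or individual hidden dimensions all act on the same parameters:

import torch

# Illustrative shapes for one BERT-base layer: 12 heads, hidden size 768.
num_heads, hidden_size = 12, 768
batch, seq_len, head_dim = 2, 16, hidden_size // num_heads

# Masks of different granularity. CoFi learns these with L0 regularization;
# here they are simply sampled as hard 0/1 values for illustration.
z_layer = torch.tensor(1.0)                         # coarse: keep or drop the whole MHA layer
z_head = (torch.rand(num_heads) > 0.3).float()      # fine: keep or drop individual heads
z_hidden = (torch.rand(hidden_size) > 0.1).float()  # fine: keep or drop hidden dimensions

# Per-head self-attention outputs: (batch, heads, seq_len, head_dim).
head_outputs = torch.randn(batch, num_heads, seq_len, head_dim)

# Each unit is gated by the product of every mask that covers it, so a
# parameter survives only if its layer, its head, and its hidden dims survive.
masked = head_outputs * z_layer * z_head.view(1, -1, 1, 1)
concat = masked.transpose(1, 2).reshape(batch, seq_len, hidden_size)
mha_output = concat * z_hidden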

Main Results

We show the main results of CoFiPruning along with results of popular pruning and distillation methods, including Block Pruning, DynaBERT, DistilBERT, and TinyBERT. Please see our paper for more detailed results.

Model List

Our released models are listed as follows; you can download them with the links below. We use a batch size of 128 and V100 32GB GPUs for speedup evaluation. We report F1 score for SQuAD and accuracy score for the GLUE datasets. s60 denotes that the sparsity of the model is roughly 60%. A minimal timing sketch for the speedup measurement is given after the table.

| model name | task | sparsity | speedup | score |
|---|---|---|---|---|
| princeton-nlp/CoFi-MNLI-s60 | MNLI | 60.2% | 2.1× | 85.3 |
| princeton-nlp/CoFi-MNLI-s95 | MNLI | 94.3% | 12.1× | 80.6 |
| princeton-nlp/CoFi-QNLI-s60 | QNLI | 60.3% | 2.1× | 91.8 |
| princeton-nlp/CoFi-QNLI-s95 | QNLI | 94.5% | 12.1× | 86.1 |
| princeton-nlp/CoFi-SST2-s60 | SST-2 | 60.1% | 2.1× | 93.0 |
| princeton-nlp/CoFi-SST2-s95 | SST-2 | 94.5% | 12.2× | 90.4 |
| princeton-nlp/CoFi-SQuAD-s60 | SQuAD | 59.8% | 2.0× | 89.1 |
| princeton-nlp/CoFi-SQuAD-s93 | SQuAD | 92.4% | 8.7× | 82.6 |
| princeton-nlp/CoFi-RTE-s60 | RTE | 60.2% | 2.0× | 72.6 |
| princeton-nlp/CoFi-RTE-s96 | RTE | 96.2% | 12.8× | 66.1 |
| princeton-nlp/CoFi-CoLA-s60 | CoLA | 60.4% | 2.0× | 60.4 |
| princeton-nlp/CoFi-CoLA-s95 | CoLA | 95.1% | 12.3× | 38.9 |
| princeton-nlp/CoFi-MRPC-s60 | MRPC | 61.5% | 2.0× | 86.8 |
| princeton-nlp/CoFi-MRPC-s95 | MRPC | 94.9% | 12.2× | 83.6 |
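The speedups above are measured as wall-clock inference time relative to the unpruned BERT-base model. Here is a minimal timing sketch for the seconds-per-example number, under our own assumptions (PyTorch with a CUDA device and a pre-tokenized batch of 128 examples); seconds_per_example is an illustrative helper, not part of this repository:

import time
import torch

@torch.no_grad()
def seconds_per_example(model, batch, warmup=10, iters=50):
    """Average per-example GPU latency of a forward pass over a fixed batch."""
    model.eval().cuda()
    batch = {k: v.cuda() for k, v in batch.items()}
    for _ in range(warmup):                 # warm up CUDA kernels before timing
        model(**batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(**batch)
    torch.cuda.synchronize()                # wait for all queued kernels to finish
    return (time.time() - start) / (iters * batch["input_ids"].shape[0])

Comparing this number for a pruned checkpoint against the unpruned baseline gives a speedup estimate in the spirit of the column above.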

You can use these models with the Hugging Face interface (note that CoFiBertForSequenceClassification is defined in this repository):

from transformers import AutoTokenizer
from models.modeling_bert import CoFiBertForSequenceClassification

model = CoFiBertForSequenceClassification.from_pretrained("princeton-nlp/CoFi-MNLI-s95")
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/CoFi-MNLI-s95")
inputs = tokenizer("A soccer game with multiple males playing.", "Some men are playing a sport.", return_tensors="pt")
output = model(**inputs)

Train CoFiPruning

This section provides instructions for training CoFiPruning with our code.

Requirements

Try running the following command to install the dependencies.

Please pin an older version of transformers: recent releases no longer provide hf_bucket_url in transformers.file_utils, which this codebase relies on.

pip install -r requirements.txt

Training

Training scripts

We provide example training scripts for CoFiPruning with different combinations of pruning units and distillation objectives in scripts/run_CoFi.sh. The script only supports single-GPU training; the main arguments are illustrated (with comments) in the pruning example below.

After pruning, the same script can be used to further fine-tune the pruned model; the additional arguments are shown in the fine-tuning example below.

An example for training (pruning) is as follows:

TASK=MNLI                                                    # task name (GLUE task or SQuAD)
SUFFIX=sparsity0.95                                          # suffix for the output directory name
EX_CATE=CoFi                                                 # experiment category, used in the output path
PRUNING_TYPE=structured_heads+structured_mlp+hidden+layer    # pruning units: attention heads, MLP intermediate dims, hidden dims, whole layers
SPARSITY=0.95                                                # target sparsity
DISTILL_LAYER_LOSS_ALPHA=0.9                                 # weight of the layerwise distillation loss
DISTILL_CE_LOSS_ALPHA=0.1                                    # weight of the cross-entropy distillation loss
LAYER_DISTILL_VERSION=4                                      # layer distillation strategy version
SPARSITY_EPSILON=0.01                                        # tolerance around the target sparsity

bash scripts/run_CoFi.sh $TASK $SUFFIX $EX_CATE $PRUNING_TYPE $SPARSITY [DISTILLATION_PATH] $DISTILL_LAYER_LOSS_ALPHA $DISTILL_CE_LOSS_ALPHA $LAYER_DISTILL_VERSION $SPARSITY_EPSILON

An example for fine-tuning after pruning is as follows:

PRUNED_MODEL_PATH=$proj_dir/$TASK/$EX_CATE/${TASK}_${SUFFIX}/best
PRUNING_TYPE=None # Setting the pruning type to be None for standard fine-tuning.
LEARNING_RATE=3e-5

bash scripts/run_CoFi.sh $TASK $SUFFIX $EX_CATE $PRUNING_TYPE $SPARSITY [DISTILLATION_PATH] $DISTILL_LAYER_LOSS_ALPHA $DISTILL_CE_LOSS_ALPHA $LAYER_DISTILL_VERSION $SPARSITY_EPSILON [PRUNED_MODEL_PATH] $LEARNING_RATE

The training process saves the model with the best validation accuracy under $PRUNED_MODEL_PATH (the .../best subdirectory of the output directory), and you can use the evaluation.py script for evaluation.

Evaluation

Our pruned models are hosted on Hugging Face's model hub. You can use the script evaluation.py to get the sparsity, inference time, and development set results of a pruned model.

python evaluation.py [TASK] [MODEL_NAME_OR_DIR]

An example of evaluating a sentence classification model is as follows:

python evaluation.py MNLI princeton-nlp/CoFi-MNLI-s95 

The expected output of the script is as follows:

Task: MNLI
Model path: princeton-nlp/CoFi-MNLI-s95
Model size: 4920106
Sparsity: 0.943
mnli/acc: 0.8055
seconds/example: 0.010151
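As a rough sanity check, the reported sparsity is consistent with dividing the remaining model size by the parameter count of a full BERT-base encoder. The ~85M encoder-only denominator (excluding embeddings) is our assumption for illustration, not a number taken from the output:

# Back-of-the-envelope check of the output above (assumption: "Model size" counts
# remaining encoder parameters and sparsity is measured against a full BERT-base
# encoder, excluding embeddings).
hidden, ffn, layers = 768, 3072, 12
per_layer = (4 * hidden * hidden + 4 * hidden    # Q/K/V/output projections + biases
             + 2 * hidden * ffn + hidden + ffn   # feed-forward weights + biases
             + 2 * 2 * hidden)                   # two LayerNorms (weight + bias each)
full_encoder = layers * per_layer                # ~85M parameters
remaining = 4_920_106                            # "Model size" printed by evaluation.py
print(f"sparsity ~ {1 - remaining / full_encoder:.3f}")  # ~0.942, close to the reported 0.943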

Hyperparameters

We use the following hyperparameters for training CoFiPruning:

| | GLUE (small) | GLUE (large) | SQuAD |
|---|---|---|---|
| Batch size | 32 | 32 | 16 |
| Pruning learning rate | 2e-5 | 2e-5 | 3e-5 |
| Fine-tuning learning rate | 1e-5, 2e-5, 3e-5 | 1e-5, 2e-5, 3e-5 | 1e-5, 2e-5, 3e-5 |
| Layer distill. alpha | 0.9, 0.7, 0.5 | 0.9, 0.7, 0.5 | 0.9, 0.7, 0.5 |
| Cross-entropy distill. alpha | 0.1, 0.3, 0.5 | 0.1, 0.3, 0.5 | 0.1, 0.3, 0.5 |
| Pruning epochs | 100 | 20 | 20 |
| Pre-finetuning epochs | 4 | 1 | 1 |
| Sparsity warmup epochs | 20 | 2 | 2 |
| Finetuning epochs | 20 | 20 | 20 |

GLUE (small) denotes the GLUE tasks with relatively small training sets, including CoLA, STS-B, MRPC, and RTE; GLUE (large) denotes the rest of the GLUE tasks, including SST-2, MNLI, QQP, and QNLI. Note that hyperparameter search is essential for the small datasets but is less important for the large ones.
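Because the distillation weights matter most on the small datasets, a simple grid over the two alphas is a reasonable starting point. Below is a minimal sketch that only prints the corresponding pruning commands, following the argument order of the pruning example above; the task, suffix, and sparsity values are placeholders, and [DISTILLATION_PATH] is left as in the example:

from itertools import product

# Candidate weights from the hyperparameter table above.
layer_alphas = ["0.9", "0.7", "0.5"]   # DISTILL_LAYER_LOSS_ALPHA
ce_alphas = ["0.1", "0.3", "0.5"]      # DISTILL_CE_LOSS_ALPHA

for layer_alpha, ce_alpha in product(layer_alphas, ce_alphas):
    # Same positional arguments as the pruning example above.
    print("bash scripts/run_CoFi.sh RTE sparsity0.95 CoFi "
          "structured_heads+structured_mlp+hidden+layer 0.95 [DISTILLATION_PATH] "
          f"{layer_alpha} {ce_alpha} 4 0.01")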

Bugs or Questions?

If you have any questions related to the code or the paper, feel free to email Mengzhou (mengzhou@princeton.edu) and Zexuan (zzhong@princeton.edu). If you encounter any problems when using the code or want to report a bug, you can open an issue. Please describe the problem in detail so we can help you better and more quickly!

Citation

Please cite our paper if you use CoFiPruning in your work:

@inproceedings{xia2022structured,
   title={Structured Pruning Learns Compact and Accurate Models},
   author={Xia, Mengzhou and Zhong, Zexuan and Chen, Danqi},
   booktitle={Association for Computational Linguistics (ACL)},
   year={2022}
}