
Awesome Pruning

A curated collection of resources on deep neural network pruning, inspired by he-y/Awesome-Pruning.

[Note: You are welcome to open pull requests to add more interesting papers.]

| Section | Year of Publication |
| ------- | ------------------- |
| Conference Publications | 2024 2023 2022 2021 2020 2019 2018 2017 |
| Journal Publications | 2024 2023 2022 2021 2020 |
| Survey Articles | 2020~2023 |
| Other Publications | 2022~2023 |
| Pruning Software and Toolbox | 2019~2023 |

| Symbol | Meaning |
| ------ | ------- |
| U | Unstructured or Weight Pruning |
| S | Structured or Filter or Channel Pruning |
| A | Official or Author Implementation |
| O | Unofficial or 3rd-Party Implementation |
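The U and S type flags above correspond to the two pruning granularities supported by PyTorch's built-in `torch.nn.utils.prune` module. A minimal sketch of the distinction (the layer shape and pruning amounts are illustrative, not taken from any listed paper):

```python
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(16, 32, kernel_size=3)

# U -- unstructured/weight pruning: zero out the 30% smallest-magnitude
# individual weights, regardless of where they sit in the tensor.
prune.l1_unstructured(conv, name="weight", amount=0.3)

# S -- structured/filter/channel pruning: zero out half of the output
# filters (dim=0 of the Conv2d weight) ranked by their L2 norm.
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

sparsity = float((conv.weight == 0).sum()) / conv.weight.numel()
print(f"overall sparsity: {sparsity:.2f}")
```

Unstructured pruning needs sparse kernels to translate zeros into speedups, while structured pruning removes whole filters and therefore shrinks dense computation directly — which is why most entries below are type S.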

Conference Publications

<h3 align="center">2024</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| ICLR | Towards Meta-Pruning via Optimal Transport | S | PyTorch[A] |
| ICLR | Towards Energy Efficient Spiking Neural Networks: An Unstructured Pruning Framework | U | PyTorch[A] |
| ICLR | Masks, Signs, And Learning Rate Rewinding | S | PyTorch[A] |
| ICLR | Scaling Laws for Sparsely-Connected Foundation Models | S | PyTorch[A] |
| ICLR | Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging | S | |
| ICLR | Adaptive Sharpness-Aware Pruning for Robust Sparse Networks | S | |
| ICLR | What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity | U | PyTorch[A] |
| ICLR | In defense of parameter sharing for model-compression | S/U | |
| ICLR | ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | U | |
| ICLR | Data-independent Module-aware Pruning for Hierarchical Vision Transformers | S | PyTorch[A] |
| ICLR | SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning | S | |
| ICLR | Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning | U | |
| ICLR | Synergistic Patch Pruning for Vision Transformer: Unifying Intra- & Inter-Layer Patch Importance | S | |
| ICLR | FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity | S | |
| ICLR | The Need for Speed: Pruning Transformers with One Recipe | S | PyTorch[A] |
| ICLR | SAS: Structured Activation Sparsification | S | PyTorch[A] |
| CVPR | OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning | S | PyTorch[A] |
| CVPR | Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers | S | PyTorch[A] |
| CVPR | Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning | S | PyTorch[A] |
| CVPR | BilevelPruning: Unified Dynamic and Static Channel Pruning for Convolutional Neural Networks | S | |
| CVPR | FedMef: Towards Memory-efficient Federated Dynamic Pruning | S | |
| CVPR | Resource-Efficient Transformer Pruning for Finetuning of Large Models | S | |
| CVPR | Device-Wise Federated Network Pruning | S | |
| CVPR | Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch | S | |
| CVPR | Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment | S | |
| CVPR | Diversity-aware Channel Pruning for StyleGAN Compression | S | PyTorch[A] |
| CVPR | MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer | S | PyTorch[A] |
| AAAI | Dynamic Feature Pruning and Consolidation for Occluded Person Re-Identification | S | |
| AAAI | REPrune: Channel Pruning via Kernel Representative Selection | S | |
| AAAI | Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks | S | |
| AAAI | IRPruneDet: Efficient Infrared Small Target Detection via Wavelet Structure-Regularized Soft Channel Pruning | S | |
| AAAI | EPSD: Early Pruning with Self-Distillation for Efficient Model Compression | S | |
| WACV | Pruning from Scratch via Shared Pruning Module and Nuclear norm-based Regularization | S | PyTorch[A] |
| WACV | Towards Better Structured Pruning Saliency by Reorganizing Convolution | S | PyTorch[A] |
| WACV | Torque based Structured Pruning for Deep Neural Network | S | |
| WACV | Revisiting Token Pruning for Object Detection and Instance Segmentation | S | PyTorch[A] |
| WACV | Token Fusion: Bridging the Gap Between Token Pruning and Token Merging | S | |
| WACV | PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks | S | |

<h3 align="center">2023</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| NIPS | Diff-Pruning: Structural Pruning for Diffusion Models | S | PyTorch[A] |
| NIPS | LLM-Pruner: On the Structural Pruning of Large Language Models | S | PyTorch[A] |
| ICCV | Automatic Network Pruning via Hilbert-Schmidt Independence Criterion Lasso under Information Bottleneck Principle | S | PyTorch[A] |
| ICCV | Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning | S | PyTorch[A] |
| ICCV | Structural Alignment for Network Pruning through Partial Regularization | S | PyTorch[A] |
| ICCV | Differentiable Transportation Pruning | S | |
| ICCV | Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation | S | PyTorch[A] |
| ICCV | Towards Fairness-aware Adversarial Network Pruning | S | |
| ICCV | Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks | S | PyTorch[A] |
| CVPR | DepGraph: Towards Any Structural Pruning | S | PyTorch[A] |
| CVPR | X-Pruner: eXplainable Pruning for Vision Transformers | U/S | |
| CVPR | Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | S | PyTorch[A] |
| CVPR | Global Vision Transformer Pruning with Hessian-Aware Saliency | S | |
| CVPR | CP3: Channel Pruning Plug-in for Point-based Networks | S | |
| CVPR | Training Debiased Subnetworks With Contrastive Weight Pruning | U | |
| CVPR | Pruning Parameterization With Bi-Level Optimization for Efficient Semantic Segmentation on the Edge | S | |
| ICLR | JaxPruner: A concise library for sparsity research | U/S | JAX[A] |
| ICLR | OTOv2: Automatic, Generic, User-Friendly | S | PyTorch[A] |
| ICLR | How I Learned to Stop Worrying and Love Retraining | U | PyTorch[A] |
| ICLR | Token Merging: Your ViT But Faster | U/S | PyTorch[A] |
| ICLR | Revisiting Pruning at Initialization Through the Lens of Ramanujan Graphs | U | PyTorch[A] (soon...) |
| ICLR | Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? | U | |
| ICLR | NTK-SAP: Improving neural network pruning by aligning training dynamics | U | |
| ICLR | DFPC: Data flow driven pruning of coupled channels without data | S | PyTorch[A] |
| ICLR | TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning | S | PyTorch[A] |
| ICLR | Pruning Deep Neural Networks from a Sparsity Perspective | U | PyTorch[A] |
| ICLR | A Unified Framework of Soft Threshold Pruning | U | PyTorch[A] |
| WACV | Calibrating Deep Neural Networks Using Explicit Regularisation and Dynamic Data Pruning | S | |
| WACV | Attend Who Is Weak: Pruning-Assisted Medical Image Localization Under Sophisticated and Implicit Imbalances | S | |
| ICASSP | WHC: Weighted Hybrid Criterion for Filter Pruning on Convolutional Neural Networks | S | PyTorch[A] |
<h3 align="center">2022</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| CVPR | Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs | U | |
| CVPR | Revisiting Random Channel Pruning for Neural Network Compression | S | PyTorch[A] (soon...) |
| CVPR | Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction | S | PyTorch[A] |
| CVPR | When to Prune? A Policy towards Early Structural Pruning | S | |
| CVPR | Dreaming to Prune Image Deraining Networks | S | |
| ICLR | SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning | S | |
| ICLR | Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining | U | PyTorch[A] |
| ICLR | Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions | S | PyTorch[A] |
| ICLR | Dual Lottery Ticket Hypothesis | U | PyTorch[A] |
| NIPS | SAViT: Structure-Aware Vision Transformer Pruning via Collaborative Optimization | S | PyTorch[A] (soon...) |
| NIPS | Structural Pruning via Latency-Saliency Knapsack | S | PyTorch[A] |
| ACCV | Filter Pruning via Automatic Pruning Rate Search | S | |
| ACCV | Network Pruning via Feature Shift Minimization | S | PyTorch[A] |
| ACCV | Lightweight Alpha Matting Network Using Distillation-Based Channel Pruning | S | PyTorch[A] |
| ACCV | Adaptive FSP: Adaptive Architecture Search with Filter Shape Pruning | S | |
| ECCV | Soft Masking for Cost-Constrained Channel Pruning | S | PyTorch[A] |
| WACV | Hessian-Aware Pruning and Optimal Neural Implant | S | PyTorch[A] |
| WACV | PPCD-GAN: Progressive Pruning and Class-Aware Distillation for Large-Scale Conditional GANs Compression | S | |
| WACV | Channel Pruning via Lookahead Search Guided Reinforcement Learning | S | |
| WACV | EZCrop: Energy-Zoned Channels for Robust Output Pruning | S | PyTorch[A] |
| ICIP | One-Cycle Pruning: Pruning Convnets With Tight Training Budget | U | |
| ICIP | RAPID: A Single Stage Pruning Framework | U | |
| ICIP | The Rise of the Lottery Heroes: Why Zero-Shot Pruning is Hard | U | |
| ICIP | Truncated Lottery Ticket for Deep Pruning | U | |
| ICIP | Which Metrics For Network Pruning: Final Accuracy? or Accuracy Drop? | S/U | |
| ISMSI | Structured Pruning with Automatic Pruning Rate Derivation for Image Processing Neural Networks | S | |
<h3 align="center">2021</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| ICLR | Neural Pruning via Growing Regularization | S | PyTorch[A] |
| ICLR | Network Pruning That Matters: A Case Study on Retraining Variants | S | PyTorch[A] |
| ICLR | Layer-adaptive Sparsity for the Magnitude-based Pruning | U | PyTorch[A] |
| NIPS | Only Train Once: A One-Shot Neural Network Training And Pruning Framework | S | PyTorch[A] |
| CVPR | NPAS: A Compiler-Aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration | S | |
| CVPR | Network Pruning via Performance Maximization | S | |
| CVPR | Convolutional Neural Network Pruning With Structural Redundancy Reduction | S | |
| CVPR | Manifold Regularized Dynamic Network Pruning | S | PyTorch[A] |
| CVPR | Joint-DetNAS: Upgrade Your Detector With NAS, Pruning and Dynamic Distillation | S | |
| ICCV | ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting | S | |
| ICCV | Achieving On-Mobile Real-Time Super-Resolution With Neural Architecture and Pruning Search | S | |
| ICCV | GDP: Stabilized Neural Network Pruning via Gates With Differentiable Polarization | S | |
| WACV | Holistic Filter Pruning for Efficient Deep Neural Networks | S | |
| ICML | Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework | S | |
| ICML | Group Fisher Pruning for Practical Network Compression | S | PyTorch[A] |
<h3 align="center">2020</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| CVPR | HRank: Filter Pruning using High-Rank Feature Map | S | PyTorch[A] |
| CVPR | Towards efficient model compression via learned global ranking | S | PyTorch[A] |
| CVPR | Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration | S | |
| CVPR | Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression | S | PyTorch[A] |
| CVPR | APQ: Joint Search for Network Architecture, Pruning and Quantization Policy | S | PyTorch[A] |
| ICLR | Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints | U | |
| MLSys | Shrinkbench: What is the State of Neural Network Pruning? | | PyTorch[A] |
| BMBS | Similarity Based Filter Pruning for Efficient Super-Resolution Models | S | |
<h3 align="center">2019</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| CVPR | Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration | S | PyTorch[A] |
| CVPR | Variational Convolutional Neural Network Pruning | S | |
| CVPR | Towards Optimal Structured CNN Pruning via Generative Adversarial Learning | S | PyTorch[A] |
| CVPR | Partial Order Pruning: For Best Speed/Accuracy Trade-Off in Neural Architecture Search | S | PyTorch[A] |
| CVPR | Importance Estimation for Neural Network Pruning | S | PyTorch[A] |
| ICLR | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | U | PyTorch[A] |
| ICLR | SNIP: Single-shot Network Pruning based on Connection Sensitivity | U | Tensorflow[A] |
| ICCV | MetaPruning: Meta-Learning for Automatic Neural Network Channel Pruning | S | PyTorch[A] |
| ICCV | Accelerate CNN via Recursive Bayesian Pruning | S | |
<h3 align="center">2018</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| CVPR | PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning | S | PyTorch[A] |
| CVPR | NISP: Pruning Networks Using Neuron Importance Score Propagation | S | |
| ICIP | Online Filter Clustering and Pruning for Efficient Convnets | S | |
| IJCAI | Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks | S | PyTorch[A] |
<h3 align="center">2017</h3>

| Venue | Title | Type | Code |
| ----- | ----- | ---- | ---- |
| CVPR | Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning | S | |
| ICLR | Pruning Filters for Efficient ConvNets | S | PyTorch[O] |
| ICCV | Channel Pruning for Accelerating Very Deep Neural Networks | S | PyTorch[A] |
| ICCV | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | S | Caffe[A] |
| ICCV | Learning Efficient Convolutional Networks Through Network Slimming | S | PyTorch[A] |

Journal Publications

<h3 align="center">2024</h3>

| Journal | Title | Type | Code |
| ------- | ----- | ---- | ---- |
| Neural Networks | Efficient tensor decomposition-based filter pruning | S | PyTorch[A] |
| IEEE Trans. NNLS | Enhanced Network Compression Through Tensor Decompositions and Pruning | S | PyTorch[A] |
| IEEE Trans. Artif. Intell. | Distilled Gradual Pruning with Pruned Fine-tuning | U | PyTorch[A] |
<h3 align="center">2023</h3>

| Journal | Title | Type | Code |
| ------- | ----- | ---- | ---- |
| IEEE Trans. Circuits Syst. Video Technol. | DCFP: Distribution Calibrated Filter Pruning for Lightweight and Accurate Long-tail Semantic Segmentation | S | |
| IEEE Internet Things J. | SNPF: Sensitiveness Based Network Pruning Framework for Efficient Edge Computing | S | |
| IEEE Trans. NNLS | Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN | S | |
| IEEE Trans. NNLS | Block-Wise Partner Learning for Model Compression | S | PyTorch[A] |
| IEEE Trans. NNLS | Hierarchical Threshold Pruning Based on Uniform Response Criterion | S | |
| IEEE Trans. NNLS | CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization | S | |
| IEEE Trans. NNLS | Adaptive Filter Pruning via Sensitivity Feedback | S | |
| Neurocomputing | Filter pruning with uniqueness mechanism in the frequency domain for efficient neural networks | S | |
| IEEE Trans. PAMI | Compact Neural Network via Stacking Hybrid Units | S | |
| IEEE Trans. PAMI | Performance-aware Approximation of Global Channel Pruning for Multitask CNNs | S | PyTorch[A] |
| IEEE Trans. PAMI | Adaptive Search-and-Training for Robust and Efficient Network Pruning | S | |
| Image Vis. Comput. | Loss-aware automatic selection of structured pruning criteria for deep neural network acceleration | S | PyTorch[A] |
| Comput. Vis. Image Underst. | Feature independent Filter Pruning by Successive Layers analysis | S | |
| IEEE Access | Differentiable Neural Architecture, Mixed Precision and Accelerator Co-Search | S | |
<h3 align="center">2022</h3>

| Journal | Title | Type | Code |
| ------- | ----- | ---- | ---- |
| IEEE Trans. Image Process. | Efficient Layer Compression Without Pruning | S | |
| IEEE Trans. PAMI | Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression | S | |
| IEEE Trans. PAMI | 1xN Pattern for Pruning Convolutional Neural Networks | S | PyTorch[A] |
| IEEE Trans. NNLS | Filter Pruning by Switching to Neighboring CNNs With Good Attribute | S | |
| IEEE Trans. NNLS | Model Pruning Enables Efficient Federated Learning on Edge Devices | S | |
| IEEE Trans. NNLS | DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search | S | |
| IEEE Trans. NNLS | Network Pruning Using Adaptive Exemplar Filters | S | PyTorch[A] |
| IEEE Trans. NNLS | Carrying Out CNN Channel Pruning in a White Box | S | PyTorch[A] |
| IEEE Trans. NNLS | Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters | S | PyTorch[A] |
| IEEE Trans. NNLS | Filter Sketch for Network Pruning | S | PyTorch[A] |
| Neurocomputing | FPFS: Filter-level pruning via distance weight measuring filter similarity | S | |
| Neurocomputing | RUFP: Reinitializing unimportant filters for soft pruning | S | |
| Neural Netw. | HRel: Filter pruning based on High Relevance between activation maps and class labels | S | PyTorch[A] |
| Comput. Intell. Neurosci. | Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks | S | |
| J. Syst. Archit. | Optimizing deep neural networks on intelligent edge accelerators via flexible-rate filter pruning | S | |
| Appl. Sci. | Magnitude and Similarity Based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | S | PyTorch[A] |
| Sensors | Filter Pruning via Measuring Feature Map Information | S | |
| IEEE Access | Automated Filter Pruning Based on High-Dimensional Bayesian Optimization | S | |
| IEEE Signal Process. Lett. | A Low-Complexity Modified ThiNet Algorithm for Pruning Convolutional Neural Networks | S | |
<h3 align="center">2021</h3>

| Journal | Title | Type | Code |
| ------- | ----- | ---- | ---- |
| IEEE Trans. PAMI | Discrimination-Aware Network Pruning for Deep Model Compression | S | PyTorch[A] |
<h3 align="center">2020</h3>

| Journal | Title | Type | Code |
| ------- | ----- | ---- | ---- |
| IEEE Trans. NNLS | EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression | S | |
| IEEE Access | Filter Pruning Without Damaging Networks Capacity | S | |
| Electronics | Pruning Convolutional Neural Networks with an Attention Mechanism for Remote Sensing Image Classification | S | |

Survey Articles

| Year | Venue | Title |
| ---- | ----- | ----- |
| 2023 | Artif. Intell. Rev. | Deep neural network pruning method based on sensitive layers and reinforcement learning |
| 2023 | arXiv | A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations |
| 2023 | arXiv | Structured Pruning for Deep Convolutional Neural Networks: A Survey |
| 2022 | Electronics | A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration |
| 2022 | I-SMAC | A Survey on Filter Pruning Techniques for Optimization of Deep Neural Networks |
| 2021 | JMLR | Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks |
| 2021 | Neurocomputing | Pruning and quantization for deep neural network acceleration: A survey |
| 2020 | IEEE Access | Methods for Pruning Deep Neural Networks |

Other Publications

| Year | Venue | Title | Code |
| ---- | ----- | ----- | ---- |
| 2023 | arXiv | Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning | PyTorch[A] (soon...) |
| 2023 | arXiv | Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers | |
| 2022 | ICML | Tutorial - Sparsity in Deep Learning: Pruning and growth for efficient inference and training | |

Pruning Software and Toolbox

| Year | Title | Type | Code |
| ---- | ----- | ---- | ---- |
| 2023 | UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers | S | PyTorch[A] |
| 2023 | DepGraph: Towards Any Structural Pruning | S | PyTorch[A] |
| 2023 | Torch-Pruning | S | PyTorch[A] |
| 2023 | JaxPruner: A concise library for sparsity research | U/S | JAX[A] |
| 2022 | FasterAI: Prune and Distill your models with FastAI and PyTorch | U | PyTorch[A] |
| 2022 | Simplify: A Python library for optimizing pruned neural networks | | PyTorch[A] |
| 2021 | PyTorchViz: A small package to create visualizations of PyTorch execution graphs | | PyTorch[A] |
| 2020 | What is the State of Neural Network Pruning? | S/U | PyTorch[A] |
| 2019 | Official PyTorch Pruning Tool | S/U | PyTorch[A] |
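The official PyTorch pruning tool listed above is `torch.nn.utils.prune`. A minimal sketch of a typical workflow with it — global magnitude pruning across several layers, then folding the masks into the weights (the model architecture and 20% sparsity target are illustrative):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

# Global magnitude pruning: zero the 20% smallest-magnitude weights,
# pooled across both linear layers rather than per layer.
parameters_to_prune = [
    (model[0], "weight"),
    (model[2], "weight"),
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)

# Make the pruning permanent: remove the masks and re-parametrization,
# leaving plain (sparse-valued) weight tensors behind.
for module, name in parameters_to_prune:
    prune.remove(module, name)

total = sum(m.weight.numel() for m, _ in parameters_to_prune)
zeros = sum(int((m.weight == 0).sum()) for m, _ in parameters_to_prune)
print(f"global sparsity: {zeros / total:.2f}")
```

Note that this zeroes weights but keeps dense tensor shapes; the structured-pruning toolkits above (e.g. Torch-Pruning) physically remove channels to shrink the network.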