Awesome-Efficient-LLM

A curated list for Efficient Large Language Models

Full List

Please check out all the papers by selecting the sub-area you're interested in. On this main page, only papers released in the past 90 days are shown.

🚀 Updates

💮 Contributing

If you'd like to include your paper, or need to update any details such as conference information or code URLs, please feel free to submit a pull request. You can generate the required markdown format for each paper by filling in the information in generate_item.py and executing `python generate_item.py`. We warmly appreciate your contributions to this list. Alternatively, you can email me the links to your paper and code, and I will add your paper to the list at my earliest convenience.
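
If it helps to see the expected output, one row of this list's three-column table can be produced roughly like this. This is a hypothetical sketch of what generate_item.py does; the field names are assumptions, and the script itself is authoritative.

```python
# Hypothetical sketch of what generate_item.py produces; the field names
# are assumptions -- consult the script itself for the real interface.
def format_item(title, authors, paper_url, github_url=None, figure=None):
    """Build one row of this list's markdown table:
    Title & Authors | Introduction | Links."""
    links = f"[Paper]({paper_url})"
    if github_url:
        links = f"[Github]({github_url}) <br> {links}"
    intro = f'<img width="1002" alt="image" src="{figure}">' if figure else ""
    return f"|{title} <br> _{authors}_|{intro}|{links}|"

print(format_item(
    "My Efficient LLM Paper",
    "Ada Lovelace, Alan Turing",
    "https://arxiv.org/abs/0000.00000",
))
```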

:star: Recommended Paper

For each topic, we have curated a list of recommended papers that have garnered many GitHub stars or citations.

Papers from Sep 2, 2024 to now (see the full list, dating back to May 22, 2023, here)

Quick Links

Network Pruning / Sparsity · Knowledge Distillation · Quantization · Inference Acceleration · Efficient MOE · Efficient Architecture of LLM · KV Cache Compression · Text Compression · Low-Rank Decomposition · Hardware/System/Serving · Tuning · Efficient Training · Survey (or Benchmark)

Network Pruning / Sparsity

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star Publish Type <br> :star: SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot <br> Elias Frantar, Dan Alistarh|<img width="522" alt="image" src="figures/sparsegpt.png">|Github <br> Paper|
|Star Publish Type <br> :star: LLM-Pruner: On the Structural Pruning of Large Language Models <br> Xinyin Ma, Gongfan Fang, Xinchao Wang|<img width="561" alt="image" src="figures/llm_pruner.png">|Github <br> Paper|
|Star Publish Type <br> :star: A Simple and Effective Pruning Approach for Large Language Models <br> Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter|<img width="1002" alt="image" src="https://user-images.githubusercontent.com/20168304/245999360-f951de47-269d-491d-826a-8e6d85627849.png">|Github <br> Paper|
|Star Publish Type <br> :star: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning <br> Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen|<img width="1002" alt="image" src="figures/LLM-shearing.png">|Github <br> Paper|
|Efficient LLM Inference using Dynamic Input Pruning and Cache-Aware Masking <br> Marco Federici, Davide Belli, Mart van Baalen, Amir Jalalirad, Andrii Skliar, Bence Major, Markus Nagel, Paul Whatmough|<img width="1002" alt="image" src="https://arxiv.org/html/2412.01380v1/x1.png">|Paper|
|Puzzle: Distillation-Based NAS for Inference-Optimized LLMs <br> Akhiad Bercovich, Tomer Ronen, Talor Abramovich, Nir Ailon, Nave Assaf, Mohammad Dabbah et al|<img width="1002" alt="image" src="https://arxiv.org/html/2411.19146v2/x1.png">|Paper|
|Star<br>Reassessing Layer Pruning in LLMs: New Insights and Methods <br> Yao Lu, Hao Cheng, Yujie Fang, Zeyu Wang, Jiaheng Wei, Dongwei Xu, Qi Xuan, Xiaoniu Yang, Zhaowei Zhu|<img width="1002" alt="image" src="https://github.com/yaolu-zjut/Navigation-LLM-layer-pruning/raw/main/framework.JPG">|Github <br> Paper|
|Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity <br> Zichen Song, Sitan Huang, Yuxin Wu, Zhongfeng Kang|<img width="1002" alt="image" src="https://arxiv.org/html/2411.10069v1/x1.png">|Paper|
|Star Publish<br>AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment <br> Yonggan Fu, Zhongzhi Yu, Junwei Li, Jiayi Qian, Yongan Zhang, Xiangchi Yuan, Dachuan Shi, Roman Yakunin, Yingyan Celine Lin|<img width="1002" alt="image" src="https://arxiv.org/html/2411.10606v1/x2.png">|Github <br> Paper|
|Scaling Law for Post-training after Model Pruning <br> Xiaodong Chen, Yuxuan Hu, Jing Zhang, Xiaokang Zhang, Cuiping Li, Hong Chen||Paper|
|Star<br>DRPruning: Efficient Large Language Model Pruning through Distributionally Robust Optimization <br> Hexuan Deng, Wenxiang Jiao, Xuebo Liu, Min Zhang, Zhaopeng Tu|<img width="1002" alt="image" src="https://github.com/hexuandeng/DRPruning/raw/main/pic/main.png">|Github <br> Paper|
|Star<br>Sparsing Law: Towards Large Language Models with Greater Activation Sparsity <br> Yuqi Luo, Chenyang Song, Xu Han, Yingfa Chen, Chaojun Xiao, Zhiyuan Liu, Maosong Sun|<img width="1002" alt="image" src="https://github.com/thunlp/SparsingLaw/raw/master/figs/sample.jpg">|Github <br> Paper|
|AVSS: Layer Importance Evaluation in Large Language Models via Activation Variance-Sparsity Analysis <br> Zichen Song, Yuxin Wu, Sitan Huang, Zhongfeng Kang|<img width="1002" alt="image" src="https://arxiv.org/html/2411.02117v1/x1.png">|Paper|
|Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts <br> Danyal Aftab, Steven Davy|<img width="1002" alt="image" src="https://arxiv.org/html/2410.19185v1/x1.png">|Paper|
|Star<br>LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment <br> Ge Yang, Changyi He, Jinyang Guo, Jianyu Wu, Yifu Ding, Aishan Liu, Haotong Qin, Pengliang Ji, Xianglong Liu|<img width="1002" alt="image" src="https://github.com/AboveParadise/LLMCBench/raw/main/figs/f1.png">|Github <br> Paper|
|Beyond 2:4: exploring V:N:M sparsity for efficient transformer inference on GPUs <br> Kang Zhao, Tao Yuan, Han Bao, Zhenfeng Su, Chang Gao, Zhaofeng Sun, Zichen Liang, Liping Jing, Jianfei Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.16135v1/x1.png">|Paper|
|Star<br>EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search <br> Oliver Sieberling, Denis Kuznedelev, Eldar Kurtic, Dan Alistarh|<img width="1002" alt="image" src="figures/evopress.png">|Github <br> Paper|
|FedSpaLLM: Federated Pruning of Large Language Models <br> Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, Kibaek Kim|<img width="1002" alt="image" src="https://arxiv.org/html/2410.14852v1/x1.png">|Paper|
|Star<br>Pruning Foundation Models for High Accuracy without Retraining <br> Pu Zhao, Fei Sun, Xuan Shen, Pinrui Yu, Zhenglun Kong, Yanzhi Wang, Xue Lin||Github <br> Paper|
|Self-calibration for Language Model Quantization and Pruning <br> Miles Williams, George Chrysostomou, Nikolaos Aletras|<img width="1002" alt="image" src="https://arxiv.org/html/2410.17170v1/x1.png">|Paper|
|Beware of Calibration Data for Pruning Large Language Models <br> Yixin Ji, Yang Xiang, Juntao Li, Qingrong Xia, Ping Li, Xinyu Duan, Zhefeng Wang, Min Zhang||Paper|
|Star Publish<br>AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models <br> Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W. Mahoney, Yaoqing Yang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.10912v1/x1.png">|Github <br> Paper|
|Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix <br> Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yufa Zhou|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11261v1/x1.png">|Paper|
|Publish<br>DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models <br> Shangqian Gao, Chi-Heng Lin, Ting Hua, Tang Zheng, Yilin Shen, Hongxia Jin, Yen-Chang Hsu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11988v1/x1.png">|Paper|
|Publish<br>Self-Data Distillation for Recovering Quality in Pruned Large Language Models <br> Vithursan Thangarasa, Ganesh Venkatesh, Nish Sinnadurai, Sean Lie|<img width="1002" alt="image" src="https://arxiv.org/html/2410.09982v2/x1.png">|Paper|
|LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models <br> David Hoffmann, Kailash Budhathoki, Matthaeus Kleindessner|<img width="1002" alt="image" src="https://arxiv.org/html/2410.13299v1/extracted/5931028/img/llm_to_mlp.png">|Paper|
|Star Publish<br>Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning <br> Abhinav Bandari, Lu Yin, Cheng-Yu Hsieh, Ajay Kumar Jaiswal, Tianlong Chen, Li Shen, Ranjay Krishna, Shiwei Liu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.07461v1/x1.png">|Github <br> Paper|
|Mitigating Copy Bias in In-Context Learning through Neuron Pruning <br> Ameen Ali, Lior Wolf, Ivan Titov|<img width="1002" alt="image" src="figures/copy_icl.png">|Paper|
|Star Publish <br>MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models <br> Gongfan Fang, Hongxu Yin, Saurav Muralidharan, Greg Heinrich, Jeff Pool, Jan Kautz, Pavlo Molchanov, Xinchao Wang|<img width="302" alt="image" src="https://github.com/NVlabs/MaskLLM/blob/main/assets/animation-LQ.gif">|Github <br> Paper|
|Publish<br>Search for Efficient Large Language Models <br> Xuan Shen, Pu Zhao, Yifan Gong, Zhenglun Kong, Zheng Zhan, Yushu Wu, Ming Lin, Chao Wu, Xue Lin, Yanzhi Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17372v1/x2.png">|Paper|
|Star<br>CFSP: An Efficient Structured Pruning Framework for LLMs with Coarse-to-Fine Activation Information <br> Yuxin Wang, Minghua Ma, Zekun Wang, Jingchang Chen, Huiming Fan, Liping Shan, Qing Yang, Dongliang Xu, Ming Liu, Bing Qin|<img width="1002" alt="image" src="https://arxiv.org/html/2409.13199v1/x1.png">|Github <br> Paper|
|OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition <br> Stephen Zhang, Vardan Papyan||Paper|
|KVPruner: Structural Pruning for Faster and Memory-Efficient Large Language Models <br> Bo Lv, Quan Zhou, Xuanang Ding, Yan Wang, Zeming Ma|<img width="302" alt="image" src="https://arxiv.org/html/2409.11057v1/x2.png">|Paper|
|Evaluating the Impact of Compression Techniques on Task-Specific Performance of Large Language Models <br> Bishwash Khanal, Jeffery M. Capone|<img width="1002" alt="image" src="https://arxiv.org/html/2409.11233v1/extracted/5860861/images/GPT4template.jpg">|Paper|
|STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning <br> Jaeseong Lee, seung-won hwang, Aurick Qiao, Daniel F Campos, Zhewei Yao, Yuxiong He|<img width="1002" alt="image" src="https://arxiv.org/html/2409.06211v1/x1.png">|Paper|
|Star<br>PAT: Pruning-Aware Tuning for Large Language Models <br> Yijiang Liu, Huanrui Yang, Youxin Chen, Rongyu Zhang, Miao Wang, Yuan Du, Li Du|<img width="1002" alt="image" src="figures/PAT.png">|Github <br> Paper|
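
For orientation, the core primitive behind many of the one-shot methods above (e.g., the weight-times-activation importance score of Wanda, "A Simple and Effective Pruning Approach for Large Language Models") fits in a few lines. This is an illustrative sketch only, not any paper's official implementation; `act_norm` is assumed to be gathered from calibration data with a forward hook.

```python
import torch

def wanda_style_mask(weight: torch.Tensor, act_norm: torch.Tensor,
                     sparsity: float = 0.5) -> torch.Tensor:
    """Score each weight by |W_ij| * ||X_j||_2 and zero out the
    lowest-scoring fraction per output row (illustrative sketch).

    weight:   (out_features, in_features) linear-layer weight
    act_norm: (in_features,) L2 norm of each input channel over a
              calibration set, collected beforehand with a forward hook
    """
    score = weight.abs() * act_norm.unsqueeze(0)   # importance of each weight
    k = int(weight.shape[1] * sparsity)            # how many to drop per row
    drop = score.topk(k, dim=1, largest=False).indices
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, drop, False)                  # False = pruned
    return mask                                    # apply: layer.weight.data *= mask
```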

Knowledge Distillation

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|:star: Knowledge Distillation of Large Language Models <br> Yuxian Gu, Li Dong, Furu Wei, Minlie Huang|<img width="1002" alt="image" src="https://github.com/microsoft/LMOps/blob/main/minillm/figures/method.png">|Github <br> Paper|
|Improving Mathematical Reasoning Capabilities of Small Language Models via Feedback-Driven Distillation <br> Xunyu Zhu, Jian Li, Can Ma, Weiping Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2411.14698v1/x1.png">|Paper|
|Star<br>Generative Context Distillation <br> Haebin Shin, Lei Ji, Yeyun Gong, Sungdong Kim, Eunbi Choi, Minjoon Seo|<img width="1002" alt="image" src="figures/GCD.png">|Github <br> Paper|
|SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models <br> Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung|<img width="1002" alt="image" src="figures/switch.png">|Paper|
|Star<br>Beyond Autoregression: Fast LLMs via Self-Distillation Through Time <br> Justin Deschenaux, Caglar Gulcehre|<img width="1002" alt="image" src="https://arxiv.org/html/2410.21035v1/x3.png">|Github <br> Paper|
|Pre-training Distillation for Large Language Models: A Design Space Exploration <br> Hao Peng, Xin Lv, Yushi Bai, Zijun Yao, Jiajie Zhang, Lei Hou, Juanzi Li||Paper|
|Star<br>MiniPLM: Knowledge Distillation for Pre-Training Language Models <br> Yuxian Gu, Hao Zhou, Fandong Meng, Jie Zhou, Minlie Huang|<img width="1002" alt="image" src="https://github.com/thu-coai/MiniPLM/raw/main/figures/method.png">|Github <br> Paper|
|Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling <br> Wenda Xu, Rujun Han, Zifeng Wang, Long T. Le, Dhruv Madeka, Lei Li, William Yang Wang, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11325v1/x2.png">|Paper|
|Evolutionary Contrastive Distillation for Language Model Alignment <br> Julian Katz-Samuels, Zheng Li, Hyokun Yun, Priyanka Nigam, Yi Xu, Vaclav Petricek, Bing Yin, Trishul Chilimbi|<img width="1002" alt="image" src="https://arxiv.org/html/2410.07513v1/extracted/5913898/figures/main_alg_v3.png">|Paper|
|BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data <br> Jean-Loup Tastet, Inar Timiryasov||Paper|
|EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models <br> Hossein Rajabzadeh, Aref Jafari, Aman Sharma, Benyamin Jami, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh|<img width="1002" alt="image" src="https://arxiv.org/html/2409.14595v1/extracted/5869635/Figs/shared_attention_diagram.png">|Paper|
|Star<br>SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models <br> Huanxuan Liao, Shizhu He, Yupu Hao, Xiang Li, Yuanzhe Zhang, Kang Liu, Jun Zhao|<img width="1002" alt="image" src="https://arxiv.org/html/2409.13183v1/x1.png">|Github <br> Paper|
|Star Publish<br>LLMR: Knowledge Distillation with a Large Language Model-Induced Reward <br> Dongheng Li, Yongchang Hao, Lili Mou|<img width="1002" alt="image" src="https://github.com/MANGA-UOFA/Prompt-LLMR/blob/main/LLMR-main/assets/model.png">|Github <br> Paper|
|Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models <br> Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao|<img width="1002" alt="image" src="https://arxiv.org/html/2409.12512v1/x1.png">|Paper|
|Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights <br> Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger|<img width="1002" alt="image" src="https://arxiv.org/html/2409.12586v1/x2.png">|Paper|
|Star<br>The Mamba in the Llama: Distilling and Accelerating Hybrid Models <br> Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao|<img width="1002" alt="image" src="https://arxiv.org/html/2408.15237v1/x1.png">|Github <br> Paper|
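
Most entries above refine the classic logit-distillation objective in some way. As a reference point, the standard temperature-scaled forward-KL loss (not any specific paper's variant) looks like this:

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T: float = 2.0):
    """Temperature-scaled forward KL between teacher and student
    next-token distributions. Shapes: (batch, seq, vocab)."""
    s = F.log_softmax(student_logits / T, dim=-1).flatten(0, 1)
    t = F.softmax(teacher_logits / T, dim=-1).flatten(0, 1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```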

Quantization

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star Publish<br> :star: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers <br> Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh|<img width="202" alt="image" src="figures/GPTQ.png">|Github <br> Paper|
|Star Publish <br> :star: SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models <br> Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han|<img width="1002" alt="image" src="https://github.com/mit-han-lab/smoothquant/blob/main/figures/intuition.png">|Github <br> Paper|
|Star <br> :star: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration <br> Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, Song Han|<img width="1002" alt="image" src="https://github.com/mit-han-lab/llm-awq/blob/main/figures/overview.png">|Github <br> Paper|
|Star Publish<br> :star: OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models <br> Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo|<img width="1002" alt="image" src="figures/omniquant.png">|Github <br> Paper|
|SKIM: Any-bit Quantization Pushing The Limits of Post-Training Quantization <br> Runsheng Bai, Qiang Liu, Bo Liu|<img width="1002" alt="image" src="https://arxiv.org/html/2412.04180v1/x2.png">|Paper|
|CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models <br> Amitash Nanda, Sree Bhargavi Balija, Debashis Sahoo|<img width="1002" alt="image" src="https://arxiv.org/html/2412.03599v1/x3.png">|Paper|
|Publish<br>Anda: Unlocking Efficient LLM Inference with a Variable-Length Grouped Activation Data Format <br> Chao Fang, Man Shi, Robin Geens, Arne Symons, Zhongfeng Wang, Marian Verhelst|<img width="1002" alt="image" src="https://arxiv.org/html/2411.15982v1/x1.png">|Paper|
|MixPE: Quantization and Hardware Co-design for Efficient LLM Inference <br> Yu Zhang, Mingzi Wang, Lancheng Zou, Wulong Liu, Hui-Ling Zhen, Mingxuan Yuan, Bei Yu|<img width="1002" alt="image" src="https://arxiv.org/html/2411.16158v1/x5.png">|Paper|
|Star Publish<br>BitMoD: Bit-serial Mixture-of-Datatype LLM Acceleration <br> Yuzong Chen, Ahmed F. AbouElhamayed, Xilai Dai, Yang Wang, Marta Andronic, George A. Constantinides, Mohamed S. Abdelfattah|<img width="1002" alt="image" src="https://arxiv.org/html/2411.11745v1/x5.png">|Github <br> Paper|
|AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference <br> Janghwan Lee, Jiwoong Park, Jinseok Kim, Yongjik Kim, Jungju Oh, Jinwook Oh, Jungwook Choi|<img width="1002" alt="image" src="figures/AMXFP4.png">|Paper|
|Bi-Mamba: Towards Accurate 1-Bit State Space Models <br> Shengkun Tang, Liqun Ma, Haonan Li, Mingjie Sun, Zhiqiang Shen|<img width="1002" alt="image" src="https://arxiv.org/html/2411.11843v1/x2.png">|Paper|
|"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization <br> Eldar Kurtic, Alexandre Marques, Shubhra Pandit, Mark Kurtz, Dan Alistarh||Paper|
|GWQ: Gradient-Aware Weight Quantization for Large Language Models <br> Yihua Shao, Siyu Liang, Xiaolin Lin, Zijian Ling, Zixian Zhu et al|<img width="1002" alt="image" src="https://arxiv.org/html/2411.00850v1/x2.png">|Paper|
|A Comprehensive Study on Quantization Techniques for Large Language Models <br> Jiedong Lang, Zhehao Guo, Shuyu Huang||Paper|
|BitNet a4.8: 4-bit Activations for 1-bit LLMs <br> Hongyu Wang, Shuming Ma, Furu Wei|<img width="1002" alt="image" src="https://arxiv.org/html/2411.04965v1/x1.png">|Paper|
|Star<br>TesseraQ: Ultra Low-Bit LLM Post-Training Quantization with Block Reconstruction <br> Yuhang Li, Priyadarshini Panda|<img width="1002" alt="image" src="https://github.com/Intelligent-Computing-Lab-Yale/TesseraQ/raw/main/imgs/tesseraq.png">|Github <br> Paper|
|Star<br>BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments <br> Xinghao Wang, Pengyu Wang, Bo Wang, Dong Zhang, Yunhua Zhou, Xipeng Qiu|<img width="1002" alt="image" src="https://github.com/xinghaow99/BitStack/raw/main/assets/bitstack.png">|Github <br> Paper|
|The Impact of Inference Acceleration Strategies on Bias of LLMs <br> Elisabeth Kirsten, Ivan Habernal, Vedant Nanda, Muhammad Bilal Zafar||Paper|
|Understanding the difficulty of low-precision post-training quantization of large language models <br> Zifei Xu, Sayeh Sharify, Wanzin Yazar, Tristan Webb, Xin Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.14570v1/extracted/5935973/figures/fig1.png">|Paper|
|Star<br>1-bit AI Infra: Part 1.1, Fast and Lossless BitNet b1.58 Inference on CPUs <br> Jinheng Wang, Hansong Zhou, Ting Song, Shaoguang Mao, Shuming Ma, Hongyu Wang, Yan Xia, Furu Wei|<img width="1002" alt="image" src="https://arxiv.org/html/2410.16144v2/x1.png">|Github <br> Paper|
|QuAILoRA: Quantization-Aware Initialization for LoRA <br> Neal Lawton, Aishwarya Padmakumar, Judith Gaspers, Jack FitzGerald, Anoop Kumar, Greg Ver Steeg, Aram Galstyan||Paper|
|Evaluating Quantized Large Language Models for Code Generation on Low-Resource Language Benchmarks <br> Enkhbold Nyamsuren||Paper|
|Star <br> :star: SqueezeLLM: Dense-and-Sparse Quantization <br> Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, Kurt Keutzer|<img width="1102" alt="image" src="figures/SqueezeLLM.png">|Github <br> Paper|
|Pyramid Vector Quantization for LLMs <br> Tycho F. A. van der Ouderaa, Maximilian L. Croci, Agrin Hilmkil, James Hensman|<img width="1002" alt="image" src="https://arxiv.org/html/2410.16926v1/x1.png">|Paper|
|SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators <br> Rasoul Shafipour, David Harrison, Maxwell Horton, Jeffrey Marker, Houman Bedayat, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi, Saman Naderiparizi|<img width="1002" alt="image" src="https://arxiv.org/html/2410.10714v2/x1.png">|Paper|
|Star<br>FlatQuant: Flatness Matters for LLM Quantization <br> Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao|<img width="1002" alt="image" src="https://arxiv.org/html/2410.09426v1/x11.png">|Github <br> Paper|
|Star<br>SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs <br> Mohammad Mozaffari, Maryam Mehri Dehnavi|<img width="1002" alt="image" src="https://arxiv.org/html/2410.09615v1/x1.png">|Github <br> Paper|
|Scaling laws for post-training quantized large language models <br> Zifei Xu, Alexander Lan, Wanzin Yazar, Tristan Webb, Sayeh Sharify, Xin Wang|<img width="202" alt="image" src="https://arxiv.org/html/2410.12119v1/extracted/5929616/figures/fig_12.png">|Paper|
|Continuous Approximations for Improving Quantization Aware Training of LLMs <br> He Li, Jianhang Hong, Yuanzhuo Wu, Snehal Adbol, Zonglin Li||Paper|
|Star<br>DAQ: Density-Aware Post-Training Weight-Only Quantization For LLMs <br> Yingsong Luo, Ling Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.12187v2/x1.png">|Github <br> Paper|
|Star<br>Quamba: A Post-Training Quantization Recipe for Selective State Space Models <br> Hung-Yueh Chiang, Chi-Chih Chang, Natalia Frumkin, Kai-Chiang Wu, Diana Marculescu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.13229v1/extracted/5933363/figures/outliers.png">|Github <br> Paper|
|AsymKV: Enabling 1-Bit Quantization of KV Cache with Layer-Wise Asymmetric Quantization Configurations <br> Qian Tao, Wenyuan Yu, Jingren Zhou|<img width="1002" alt="image" src="https://arxiv.org/html/2410.13212v1/extracted/5933292/figures/kvmix.png">|Paper|
|Channel-Wise Mixed-Precision Quantization for Large Language Models <br> Zihan Chen, Bike Xie, Jundong Li, Cong Shen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.13056v1/x1.png">|Paper|
|Progressive Mixed-Precision Decoding for Efficient LLM Inference <br> Hao Mark Chen, Fuwen Tan, Alexandros Kouris, Royson Lee, Hongxiang Fan, Stylianos I. Venieris|<img width="1002" alt="image" src="https://arxiv.org/html/2410.13461v1/x4.png">|Paper|
|Star<br>EXAQ: Exponent Aware Quantization For LLMs Acceleration <br> Moran Shkolnik, Maxim Fishman, Brian Chmiel, Hilla Ben-Yaacov, Ron Banner, Kfir Yehuda Levy|<img width="1002" alt="image" src="figures/EXAQ.png">|Github <br> Paper|
|Star<br>PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs <br> Mengzhao Chen, Yi Liu, Jiahao Wang, Yi Bin, Wenqi Shao, Ping Luo|<img width="1002" alt="image" src="https://arxiv.org/html/2410.05265v1/x1.png">|Github <br> Paper|
|Star<br> :star: Extreme Compression of Large Language Models via Additive Quantization <br> Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh|<img width="1002" alt="image" src="figures/MCQ.png">|Github <br> Paper|
|Scaling Laws for Mixed quantization in Large Language Models <br> Zeyu Cao, Cheng Zhang, Pedro Gimenes, Jianqiao Lu, Jianyi Cheng, Yiren Zhao|<img width="1002" alt="image" src="figures/LLM-MPQ.png">|Paper|
|PalmBench: A Comprehensive Benchmark of Compressed Large Language Models on Mobile Platforms <br> Yilong Li, Jingyu Liu, Hao Zhang, M Badri Narayanan, Utkarsh Sharma, Shuai Zhang, Pan Hu, Yijing Zeng, Jayaram Raghuram, Suman Banerjee|<img width="1002" alt="image" src="figures/PalmBench.png">|Paper|
|CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression <br> Wenyuan Liu, Xindian Ma, Peng Zhang, Yan Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.07505v1/x1.png">|Paper|
|SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration <br> Jintao Zhang, Jia wei, Pengle Zhang, Jun Zhu, Jianfei Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.02367v1/x5.png">|Paper|
|Addition is All You Need for Energy-efficient Language Models <br> Hongyin Luo, Wei Sun|<img width="1002" alt="image" src="https://arxiv.org/html/2410.00907v1/x2.png">|Paper|
|Star<br>VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models <br> Yifei Liu, Jicheng Wen, Yang Wang, Shengyu Ye, Li Lyna Zhang, Ting Cao, Cheng Li, Mao Yang|<img width="1002" alt="image" src="figures/VPTQ.png">|Github <br> Paper|
|Star<br>INT-FlashAttention: Enabling Flash Attention for INT8 Quantization <br> Shimao Chen, Zirui Liu, Zhiying Wu, Ce Zheng, Peizhuang Cong, Zihan Jiang, Yuhan Wu, Lei Su, Tong Yang|<img width="1002" alt="image" src="https://arxiv.org/html/2409.16997v2/x1.png">|Github <br> Paper|
|Accumulator-Aware Post-Training Quantization <br> Ian Colbert, Fabian Grob, Giuseppe Franco, Jinjie Zhang, Rayan Saab|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17092v1/x2.png">|Paper|
|Star Publish<br>DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs <br> Haokun Lin, Haobo Xu, Yichen Wu, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun, Ying Wei|<img width="1002" alt="image" src="https://github.com/Hsu1023/DuQuant/blob/main/imgs/duquant.png">|Github <br> Paper|
|A Comprehensive Evaluation of Quantized Instruction-Tuned Large Language Models: An Experimental Analysis up to 405B <br> Jemin Lee, Sihyeong Park, Jinse Kwon, Jihun Oh, Yongin Kwon|<img width="1002" alt="image" src="https://arxiv.org/html/2409.11055v1/x1.png">|Paper|
|The Uniqueness of LLaMA3-70B with Per-Channel Quantization: An Empirical Study <br> Minghai Qin|<img width="1002" alt="image" src="https://arxiv.org/html/2408.15301v1/extracted/5797059/LaTeX/figures/llama3-70b-series-accuracy.png">|Paper|
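
As a baseline for the post-training methods above, per-channel round-to-nearest (RTN) weight quantization looks roughly like this. It is an illustrative sketch only; methods such as GPTQ or AWQ add error compensation or activation-aware scaling on top of it.

```python
import torch

def quantize_rtn(weight: torch.Tensor, n_bits: int = 8):
    """Symmetric per-output-channel round-to-nearest quantization
    (n_bits <= 8 so the result fits in int8). Returns (q, scale)
    with weight ~= q.float() * scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = weight.abs().amax(dim=1, keepdim=True) / qmax   # one scale per row
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale

w = torch.randn(4096, 4096)
q, scale = quantize_rtn(w, n_bits=4)
print((w - q.float() * scale).abs().max())   # worst-case rounding error
```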

Inference Acceleration

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star Publish<br> :star: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time <br> Zichang Liu, Jue WANG, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen|<img width="202" alt="image" src="figures/DajeVu.png">|Github <br> Paper|
|Star <br> :star: SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification <br> Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia|<img width="600" alt="image" src="https://github.com/flexflow/FlexFlow/blob/inference/img/overview.png">|Github <br> Paper|
|Star<br> :star: Efficient Streaming Language Models with Attention Sinks <br> Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis|<img width="1002" alt="image" src="https://github.com/mit-han-lab/streaming-llm/blob/main/figures/schemes.png">|Github <br> Paper|
|Star<br>:star: EAGLE: Lossless Acceleration of LLM Decoding by Feature Extrapolation <br> Yuhui Li, Chao Zhang, and Hongyang Zhang|<img width="302" alt="image" src="https://github.com/SafeAILab/EAGLE/blob/main/figs/fig1.png">|Github <br> Blog|
|Star<br> :star: Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads <br> Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao|<img width="1002" alt="image" src="https://arxiv.org/html/2401.10774v1/x1.png">|Github <br> Paper|
|Speculative Decoding with CTC-based Draft Model for LLM Inference Acceleration <br> Zhuofan Wen, Shangtong Gui, Yang Feng|<img width="302" alt="image" src="https://arxiv.org/html/2412.00061v1/x1.png">|Paper|
|PLD+: Accelerating LLM inference by leveraging Language Model Artifacts <br> Shwetha Somasundaram, Anirudh Phukan, Apoorv Saxena|<img width="1002" alt="image" src="https://arxiv.org/html/2412.01447v1/x1.png">|Paper|
|Publish<br>FastDraft: How to Train Your Draft <br> Ofir Zafrir, Igor Margulis, Dorin Shteyman, Guy Boudoukh||Paper|
|Star<br>SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents <br> Dawei Li, Zhen Tan, Peijia Qian, Yifan Li, Kumar Satvik Chaudhary, Lijie Hu, Jiayi Shen|<img width="1002" alt="image" src="figures/SMoA.png">|Github <br> Paper|
|The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation <br> Lawrence Stewart, Matthew Trager, Sujan Kumar Gonugondla, Stefano Soatto||Paper|
|Accelerated AI Inference via Dynamic Execution Methods <br> Haim Barad, Jascha Achterberg, Tien Pei Chou, Jean Yu||Paper|
|SuffixDecoding: A Model-Free Approach to Speeding Up Large Language Model Inference <br> Gabriele Oliaro, Zhihao Jia, Daniel Campos, Aurick Qiao|<img width="1002" alt="image" src="https://arxiv.org/html/2411.04975v1/x1.png">|Paper|
|Dynamic Strategy Planning for Efficient Question Answering with Large Language Models <br> Tanmay Parekh, Pradyot Prakash, Alexander Radovic, Akshay Shekher, Denis Savenkov|<img width="1002" alt="image" src="https://arxiv.org/html/2410.23511v1/x1.png">|Paper|
|Star<br>MagicPIG: LSH Sampling for Efficient LLM Generation <br> Zhuoming Chen, Ranajoy Sadhukhan, Zihao Ye, Yang Zhou, Jianyu Zhang, Niklas Nolte, Yuandong Tian, Matthijs Douze, Leon Bottou, Zhihao Jia, Beidi Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.16179v2/x15.png">|Github <br> Paper|
|Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition <br> Artem Basharin, Andrei Chertkov, Ivan Oseledets|<img width="1002" alt="image" src="figures/canonical_tensor_decomposition.png">|Paper|
|Efficient Inference for Augmented Large Language Models <br> Rana Shahout, Cong Liang, Shiji Xin, Qianru Lao, Yong Cui, Minlan Yu, Michael Mitzenmacher|<img width="1002" alt="image" src="https://arxiv.org/html/2410.18248v1/extracted/5949546/figures/illustrations/api_example_png.png">|Paper|
|Star<br>Dynamic Vocabulary Pruning in Early-Exit LLMs <br> Jort Vincenti, Karim Abdel Sadek, Joan Velja, Matteo Nulli, Metod Jazbec|<img width="1002" alt="image" src="https://github.com/MatteoNulli/Vocabulary_pruning/raw/main/src/images/final_nips.svg">|Github <br> Paper|
|Star<br>CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Activation <br> Qinsi Wang, Saeed Vahidian, Hancheng Ye, Jianyang Gu, Jianyi Zhang, Yiran Chen|<img width="1002" alt="image" src="https://wangqinsi1.github.io/coreinfer_page/static/images/overview.png">|Github <br> Paper|
|Star<br>DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads <br> Guangxuan Xiao, Jiaming Tang, Jingwei Zuo, Junxian Guo, Shang Yang, Haotian Tang, Yao Fu, Song Han|<img width="1002" alt="image" src="https://github.com/mit-han-lab/duo-attention/raw/main/figures/method1.jpg">|Github <br> Paper|
|DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure <br> Yunfan Xiong, Ruoyu Zhang, Yanzeng Li, Tianhao Wu, Lei Zou|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11744v1/extracted/5913908/figures/tree_bold.png">|Paper|
|QSpec: Speculative Decoding with Complementary Quantization Schemes <br> Juntao Zhao, Wenhao Lu, Sheng Wang, Lingpeng Kong, Chuan Wu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11305v1/x1.png">|Paper|
|TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention <br> Lijie Yang, Zhihao Zhang, Zhuofu Chen, Zikun Li, Zhihao Jia|<img width="1002" alt="image" src="https://arxiv.org/html/2410.05076v1/x2.png">|Paper|
|ParallelSpec: Parallel Drafter for Efficient Speculative Decoding <br> Zilin Xiao, Hongming Zhang, Tao Ge, Siru Ouyang, Vicente Ordonez, Dong Yu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.05589v1/x1.png">|Paper|
|Star<br>SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration <br> Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, Wenjie Li|<img width="1002" alt="image" src="https://github.com/hemingkx/SWIFT/raw/main/assets/swift.png">|Github <br> Paper|
|Star<br>TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text <br> Songshuo Lu, Hua Wang, Yutian Rong, Zhi Chen, Yaohua Tang|<img width="1002" alt="image" src="https://github.com/MooreThreads/TurboRAG/raw/main/assets/image/TurboRAG.png">|Github <br> Paper|
|A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts <br> Suyu Ge, Xihui Lin, Yunan Zhang, Jiawei Han, Hao Peng|<img width="1002" alt="image" src="https://arxiv.org/html/2410.01485v1/extracted/5895696/figures/model_architecture.png">|Paper|
|Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations <br> Amey Agrawal, Junda Chen, Íñigo Goiri, Ramachandran Ramjee, Chaojie Zhang, Alexey Tumanov, Esha Choukse|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17264v1/x14.png">|Paper|
|Star<br>Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction <br> Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17422v1/x1.png">|Github <br> Paper|
|Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference <br> Zongyue Qin, Zifan He, Neha Prakriya, Jason Cong, Yizhou Sun|<img width="1002" alt="image" src="https://arxiv.org/html/2409.16560v1/x6.png">|Paper|
|Star<br>CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs <br> Junlin Lv, Yuan Feng, Xike Xie, Xin Jia, Qirong Peng, Guiming Xie|<img width="1002" alt="image" src="https://arxiv.org/html/2409.12490v1/x2.png">|Github <br> Paper|
|RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval <br> Di Liu, Meng Chen, Baotong Lu, Huiqiang Jiang, Zhenhua Han, Qianxi Zhang, Qi Chen, Chengruidong Zhang, Bailu Ding, Kai Zhang, Chen Chen, Fan Yang, Yuqing Yang, Lili Qiu|<img width="1002" alt="image" src="https://arxiv.org/html/2409.10516v2/x4.png">|Paper|
|Star<br>Sirius: Contextual Sparsity with Correction for Efficient LLMs <br> Yang Zhou, Zhuoming Chen, Zhaozhuo Xu, Victoria Lin, Beidi Chen|<img width="1002" alt="image" src="https://infini-ai-lab.github.io/Sirius/static/images/methodsillustration.png">|Github <br> Paper|
|Star<br>OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs <br> Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang|<img width="1002" alt="image" src="https://github.com/zjunlp/OneGen/blob/main/assets/train.jpg">|Github <br> Paper|
|Path-Consistency: Prefix Enhancement for Efficient Inference in LLM <br> Jiace Zhu, Yingtao Shen, Jie Zhao, An Zou|<img width="1002" alt="image" src="https://arxiv.org/html/2409.01281v1/x1.png">|Paper|
|Boosting Lossless Speculative Decoding via Feature Sampling and Partial Alignment Distillation <br> Lujun Gui, Bin Xiao, Lei Su, Weipeng Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2408.15562v1/extracted/5818109/structure_0.png">|Paper|
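
Many entries above are speculative-decoding variants. The greedy draft-then-verify loop they all extend can be sketched as follows; this is illustrative only, with `draft` and `target` assumed to be callables mapping a `(1, seq)` id tensor to `(1, seq, vocab)` logits.

```python
import torch

@torch.no_grad()
def speculate_step(draft, target, ids, k: int = 4):
    """One greedy draft-then-verify step: the small draft model proposes
    k tokens; a single target forward pass verifies them all."""
    proposal = ids
    for _ in range(k):                                  # cheap autoregressive draft
        nxt = draft(proposal)[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=1)
    logits = target(proposal)                           # one expensive verify pass
    tgt = logits[:, -k - 1:-1].argmax(-1)               # target's pick at each drafted slot
    n_ok = (tgt == proposal[:, -k:]).int().cumprod(-1).sum().item()
    out = proposal[:, : ids.shape[1] + n_ok]            # longest agreeing prefix
    fix = (tgt[:, n_ok:n_ok + 1] if n_ok < k            # first correction, or a
           else logits[:, -1].argmax(-1, keepdim=True)) # bonus token if all matched
    return torch.cat([out, fix], dim=1)                 # always gains >= 1 token
```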

Efficient MOE

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star<br>:star: Fast Inference of Mixture-of-Experts Language Models with Offloading <br> Artyom Eliseev, Denis Mazur|<img width="1002" alt="image" src="figures/mixtral_offloading.png">|Github <br> Paper|
|Star<br>Condense, Don't Just Prune: Enhancing Efficiency and Performance in MoE Layer Pruning <br> Mingyu Cao, Gen Li, Jie Ji, Jiaqi Zhang, Xiaolong Ma, Shiwei Liu, Lu Yin|<img width="1002" alt="image" src="https://arxiv.org/html/2412.00069v1/x2.png">|Github <br> Paper|
|Mixture of Cache-Conditional Experts for Efficient Mobile Device Inference <br> Andrii Skliar, Ties van Rozendaal, Romain Lepert, Todor Boinovski, Mart van Baalen, Markus Nagel, Paul Whatmough, Babak Ehteshami Bejnordi|<img width="1002" alt="image" src="https://arxiv.org/html/2412.00099v1/x1.png">|Paper|
|Star<br>MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization <br> Jingming Guo, Yan Liu, Yu Meng, Zhiwei Tao, Banglan Liu, Gang Chen, Xiang Li|<img width="1002" alt="image" src="https://arxiv.org/html/2411.00662v1/x1.png">|Github <br> Paper|
|Star<br>MoE-I2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition <br> Cheng Yang, Yang Sui, Jinqi Xiao, Lingyi Huang, Yu Gong, Yuanlin Duan, Wenqi Jia, Miao Yin, Yu Cheng, Bo Yuan|<img width="1002" alt="image" src="https://arxiv.org/html/2411.01016v1/x1.png">|Github <br> Paper|
|HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference <br> Peng Tang, Jiacheng Liu, Xiaofeng Hou, Yifei Pu, Jing Wang, Pheng-Ann Heng, Chao Li, Minyi Guo|<img width="1002" alt="image" src="https://arxiv.org/html/2411.01433v2/extracted/5980843/figures/overview5.png">|Paper|
|ProMoE: Fast MoE-based LLM Serving using Proactive Caching <br> Xiaoniu Song, Zihang Zhong, Rong Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.22134v1/x1.png">|Paper|
|ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference <br> Xin He, Shunkang Zhang, Yuxin Wang, Haiyan Yin, Zihao Zeng, Shaohuai Shi, Zhenheng Tang, Xiaowen Chu, Ivor Tsang, Ong Yew Soon|<img width="202" alt="image" src="https://arxiv.org/html/2410.17954v1/x1.png">|Paper|
|EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference <br> Yulei Qian, Fengcun Li, Xiangyang Ji, Xiaoyu Zhao, Jianchao Tan, Kefeng Zhang, Xunliang Cai||Paper|
|Star<br>MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More <br> Wei Huang, Yue Liao, Jianhui Liu, Ruifei He, Haoru Tan, Shiming Zhang, Hongsheng Li, Si Liu, Xiaojuan Qi|<img width="1002" alt="image" src="https://github.com/Aaronhuang-778/MC-MoE/raw/main/imgs/WX20241009-191322@2x.png">|Github <br> Paper|
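
The systems above mostly optimize around standard top-k expert routing, which is itself tiny. A minimal sketch of that routing step (illustrative, not any specific system's code):

```python
import torch
import torch.nn.functional as F

def topk_route(x: torch.Tensor, gate: torch.Tensor, k: int = 2):
    """Pick each token's k highest-scoring experts and the weights used
    to combine their outputs. x: (tokens, dim); gate: (dim, n_experts)."""
    logits = x @ gate                              # (tokens, n_experts)
    weights, experts = logits.topk(k, dim=-1)      # per-token expert ids
    weights = F.softmax(weights, dim=-1)           # renormalize over the chosen k
    return weights, experts
```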

Efficient Architecture of LLM

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star<br>:star: MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT <br> Omkar Thawakar, Ashmal Vayani, Salman Khan, Hisham Cholakal, Rao M. Anwer, Michael Felsberg, Tim Baldwin, Eric P. Xing, Fahad Shahbaz Khan|<img width="402" alt="image" src="https://github.com/mbzuai-oryx/MobiLlama/raw/main/images/mobillama_generation.gif">|Github <br> Paper <br> Model|
|Star<br>:star: Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length <br> Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou|<img width="1002" alt="image" src="figures/megalodon.png">|Github <br> Paper|
|Taipan: Efficient and Expressive State Space Language Models with Selective Attention <br> Chien Van Nguyen, Huy Huu Nguyen, Thang M. Pham, Ruiyi Zhang, Hanieh Deilamsalehy, Puneet Mathur, Ryan A. Rossi, Trung Bui, Viet Dac Lai, Franck Dernoncourt, Thien Huu Nguyen|<img width="1002" alt="image" src="https://arxiv.org/html/2410.18572v1/x2.png">|Paper|
|Star<br>SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs <br> Yizhao Gao, Zhichen Zeng, Dayou Du, Shijie Cao, Hayden Kwok-Hay So, Ting Cao, Fan Yang, Mao Yang|<img width="202" alt="image" src="https://arxiv.org/html/2410.13276v1/x4.png">|Github <br> Paper|
|Star<br>Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression <br> Jingcun Wang, Yu-Guang Chen, Ing-Chao Lin, Bing Li, Grace Li Zhang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.03765v1/x1.png">|Github <br> Paper|
|Rodimus*: Breaking the Accuracy-Efficiency Trade-Off with Efficient Attentions <br> Zhihao He, Hang Yu, Zi Gong, Shizhan Liu, Jianguo Li, Weiyao Lin|<img width="1002" alt="image" src="https://arxiv.org/html/2410.06577v1/x3.png">|Paper|

KV Cache Compression

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|:star: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs <br> Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao|<img width="1002" alt="image" src="figures/FastGen.png">|Paper|
|ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression <br> Guangda Liu, Chengwei Li, Jieru Zhao, Chenqi Zhang, Minyi Guo|<img width="1002" alt="image" src="https://arxiv.org/html/2412.03213v1/x1.png">|Paper|
|Unifying KV Cache Compression for Large Language Models with LeanKV <br> Yanqi Zhang, Yuwei Hu, Runyuan Zhao, John C.S. Lui, Haibo Chen|<img width="1002" alt="image" src="https://arxiv.org/html/2412.03131v1/x2.png">|Paper|
|Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity <br> Da Ma, Lu Chen, Situo Zhang, Yuxun Miao, Su Zhu, Zhi Chen, Hongshen Xu, Hanqi Li, Shuai Fan, Lei Pan, Kai Yu|<img width="1002" alt="image" src="https://arxiv.org/html/2412.02252v1/extracted/6041612/figs/intro.png">|Paper|
|MiniKV: Pushing the Limits of LLM Inference via 2-Bit Layer-Discriminative KV Cache <br> Akshat Sharma, Hangliang Ding, Jianping Li, Neel Dani, Minjia Zhang|<img width="1002" alt="image" src="https://arxiv.org/html/2411.18077v2/x1.png">|Paper|
|TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection <br> Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun Fu, Zheng Wang, Hui Xiong|<img width="1002" alt="image" src="https://arxiv.org/html/2411.02886v1/x1.png">|Paper|
|Star<br>Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning <br> Yu Fu, Zefan Cai, Abedelkadir Asi, Wayne Xiong, Yue Dong, Wen Xiao|<img width="1002" alt="image" src="https://github.com/FYYFU/HeadKV/raw/main/main.png">|Github <br> Paper|
|Star<br>BUZZ: Beehive-structured Sparse KV Cache with Segmented Heavy Hitters for Efficient LLM Inference <br> Junqi Zhao, Zhijin Fang, Shu Li, Shaohui Yang, Shichao He|<img width="1002" alt="image" src="https://arxiv.org/html/2410.23079v1/x1.png">|Github <br> Paper|
|Star<br>A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference <br> You Wu, Haoyi Wu, Kewei Tu|<img width="202" alt="image" src="figures/cross-layer-kv.png">|Github <br> Paper|
|Lossless KV Cache Compression to 2% <br> Zhen Yang, J.N.Han, Kan Wu, Ruobing Xie, An Wang, Xingwu Sun, Zhanhui Kang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.15252v1/extracted/5937225/images/CLLA_Overview.png">|Paper|
|MatryoshkaKV: Adaptive KV Compression via Trainable Orthogonal Projection <br> Bokai Lin, Zihao Zeng, Zipeng Xiao, Siqi Kou, Tianqi Hou, Xiaofeng Gao, Hao Zhang, Zhijie Deng|<img width="1002" alt="image" src="https://arxiv.org/html/2410.14731v1/x2.png">|Paper|
|Star<br>Residual vector quantization for KV cache compression in large language model <br> Ankur Kumar||Github <br> Paper|
|Star<br>KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing <br> Yifei Yang, Zouying Cao, Qiguang Chen, Libo Qin, Dongjie Yang, Hai Zhao, Zhi Chen|<img width="1002" alt="image" src="https://github.com/yangyifei729/KVSharer/raw/main/img/main_fig.jpg">|Github <br> Paper|
|LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy <br> Rongzhi Zhang, Kuang Wang, Liyuan Liu, Shuohang Wang, Hao Cheng, Chao Zhang, Yelong Shen|<img width="1002" alt="image" src="figures/LoRC.png">|Paper|
|SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation <br> Aurick Qiao, Zhewei Yao, Samyam Rajbhandari, Yuxiong He|<img width="1002" alt="image" src="https://arxiv.org/html/2410.03960v1/x1.png">|Paper|
|Publish<br>Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference <br> Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti|<img width="1002" alt="image" src="figures/DMC.png">|Paper|
|KV-Compress: Paged KV-Cache Compression with Variable Compression Rates per Attention Head <br> Isaac Rehg|<img width="1002" alt="image" src="https://arxiv.org/html/2410.00161v1/x5.png">|Paper|
|Star<br>Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference <br> Yuan Feng, Junlin Lv, Yukun Cao, Xike Xie, S. Kevin Zhou|<img width="1002" alt="image" src="figures/adakv.png">|Github <br> Paper|
|Star<br>AlignedKV: Reducing Memory Access of KV-Cache with Precision-Aligned Quantization <br> Yifan Tan, Haoze Wang, Chao Yan, Yangdong Deng|<img width="1002" alt="image" src="https://arxiv.org/html/2409.16546v1/extracted/5867591/Figure6.png">|Github <br> Paper|
|CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios <br> Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2409.10593v1/x1.png">|Paper|
|A First Look At Efficient And Secure On-Device LLM Inference Against KV Leakage <br> Huan Yang, Deyu Zhang, Yudong Zhao, Yuanchun Li, Yunxin Liu|<img width="1002" alt="image" src="https://arxiv.org/html/2409.04040v1/x3.png">|Paper|
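
A common backbone of the eviction-style methods above is score-based selection of cache entries (H2O-style heavy hitters). A minimal sketch, assuming each cached position's accumulated attention mass is tracked during decoding:

```python
import torch

def evict_kv(keys, values, attn_mass, budget: int):
    """Keep only the `budget` cache positions with the largest accumulated
    attention mass. keys/values: (seq, heads, dim); attn_mass: (seq,)."""
    keep = attn_mass.topk(min(budget, attn_mass.numel())).indices
    keep = keep.sort().values                      # preserve positional order
    return keys[keep], values[keep]
```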

Text Compression

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star Publish<br>:star: LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models <br> Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu|<img width="1002" alt="image" src="https://github.com/microsoft/LLMLingua/blob/main/images/LLMLingua_framework.png">|Github <br> Paper|
|Star<br>:star: LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression <br> Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu|<img width="1002" alt="image" src="figures/longllmlingua.png">|Github <br> Paper|
|JPPO: Joint Power and Prompt Optimization for Accelerated Large Language Model Services <br> Feiran You, Hongyang Du, Kaibin Huang, Abbas Jamalipour|<img width="1002" alt="image" src="https://arxiv.org/html/2411.18010v1/x1.png">|Paper|
|Star<br>Generative Context Distillation <br> Haebin Shin, Lei Ji, Yeyun Gong, Sungdong Kim, Eunbi Choi, Minjoon Seo|<img width="1002" alt="image" src="figures/GCD.png">|Github <br> Paper|
|Star<br>MultiTok: Variable-Length Tokenization for Efficient LLMs Adapted from LZW Compression <br> Noel Elias, Homa Esfahanizadeh, Kaan Kale, Sriram Vishwanath, Muriel Medard|<img width="1002" alt="image" src="https://arxiv.org/html/2410.21548v1/extracted/5960495/Figures/MultiTok.png">|Github <br> Paper|
|Publish<br>Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability <br> Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung|<img width="202" alt="image" src="https://arxiv.org/html/2410.11786v1/x1.png">|Paper|
|Publish<br>From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression <br> Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee|<img width="1002" alt="image" src="https://arxiv.org/html/2410.04139v1/extracted/5902409/Figures/fig_R2C_framework_2col_v4.png">|Paper|
|Perception Compressor: A training-free prompt compression method in long context scenarios <br> Jiwei Tang, Jin Xu, Tingwei Lu, Hai Lin, Yiming Zhao, Hai-Tao Zheng|<img width="1002" alt="image" src="https://arxiv.org/html/2409.19272v1/x1.png">|Paper|
|Star<br>FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression <br> Fazal Mittu, Yihuan Bu, Akshat Gupta, Ashok Devireddy, Alp Eren Ozdarendeli, Anant Singh, Gopala Anumanchipalli|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17141v1/extracted/5879840/finezip_diagram.png">|Github <br> Paper|
|Star<br>Parse Trees Guided LLM Prompt Compression <br> Wenhao Mao, Chengbin Hou, Tianyu Zhang, Xinyu Lin, Ke Tang, Hairong Lv|<img width="1002" alt="image" src="https://arxiv.org/html/2409.15395v1/x1.png">|Github <br> Paper|
|Star<br>AlphaZip: Neural Network-Enhanced Lossless Text Compression <br> Swathi Shree Narashiman, Nitin Chandrachoodan|<img width="1002" alt="image" src="https://arxiv.org/html/2409.15046v1/extracted/5873563/images/architecture_bloack_diagram.png">|Github <br> Paper|
|TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning <br> Shivam Shandilya, Menglin Xia, Supriyo Ghosh, Huiqiang Jiang, Jue Zhang, Qianhui Wu, Victor Rühle|<img width="1002" alt="image" src="https://arxiv.org/html/2409.13035v2/x1.png">|Paper|
|Efficient LLM Context Distillation <br> Rajesh Upadhayayaya, Zachary Smith, Chritopher Kottmyer, Manish Raj Osti||Paper|
|Star<br>Enhancing and Accelerating Large Language Models via Instruction-Aware Contextual Compression <br> Haowen Hou, Fei Ma, Binwen Bai, Xinxin Zhu, Fei Yu|<img width="1002" alt="image" src="https://arxiv.org/html/2408.15491v1/extracted/5817813/arch.png">|Github <br> Paper|
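
Perplexity-based compressors in the LLMLingua family share one core move: drop tokens a small language model finds predictable. A minimal sketch, assuming per-token log-probs have already been obtained from a small scoring model:

```python
import torch

def compress_prompt(tokens, logprobs, keep_ratio: float = 0.5):
    """Drop the most predictable tokens. tokens: list[str];
    logprobs: (len,) tensor of log p(token | prefix) from a small LM."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    surprise = -logprobs                                 # low probability = informative
    keep = surprise.topk(n_keep).indices.sort().values   # keep original order
    return [tokens[i] for i in keep.tolist()]
```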

Low-Rank Decomposition

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Star<br>Natural GaLore: Accelerating GaLore for memory-efficient LLM Training and Fine-tuning <br> Arijit Das||Github <br> Paper|
|CompAct: Compressed Activations for Memory-Efficient LLM Training <br> Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster|<img width="202" alt="image" src="https://arxiv.org/html/2410.15352v1/x1.png">|Paper|
|Publish<br>ESPACE: Dimensionality Reduction of Activations for Model Compression <br> Charbel Sakr, Brucek Khailany|<img width="1002" alt="image" src="figures/ESPACE.png">|Paper|
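
The basic operation underlying this topic is truncated-SVD factorization of a weight or activation matrix; an illustrative sketch:

```python
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Replace W (m x n) with thin factors A (m x r) and B (r x n),
    the best rank-r approximation in the Frobenius norm."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]    # fold singular values into the left factor
    B = Vh[:rank]
    return A, B                   # W ~= A @ B, storing r*(m+n) instead of m*n values
```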

Hardware/System/Serving

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|FastSwitch: Optimizing Context Switching Efficiency in Fairness-aware Large Language Model Serving <br> Ao Shen, Zhiyao Li, Mingyu Gao|<img width="1002" alt="image" src="https://arxiv.org/html/2411.18424v1/x5.png">|Paper|
|CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration <br> Hongpeng Jin, Yanzhao Wu|<img width="1002" alt="image" src="https://arxiv.org/html/2411.02829v1/extracted/5978301/images/method_overview_sm.png">|Paper|
|Ripple: Accelerating LLM Inference on Smartphones with Correlation-Aware Neuron Management <br> Tuowei Wang, Ruwen Fan, Minxing Huang, Zixu Hao, Kun Li, Ting Cao, Youyou Lu, Yaoxue Zhang, Ju Ren|<img width="302" alt="image" src="https://arxiv.org/html/2410.19274v2/x7.png">|Paper|
|Publish<br>ALISE: Accelerating Large Language Model Serving with Speculative Scheduling <br> Youpeng Zhao, Jun Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2410.23537v1/extracted/5967257/imgs/b1.png">|Paper|
|EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models <br> Junhao Hu, Wenrui Huang, Haoyi Wang, Weidong Wang, Tiancheng Hu, Qin Zhang, Hao Feng, Xusheng Chen, Yizhou Shan, Tao Xie|<img width="202" alt="image" src="https://arxiv.org/html/2410.15332v1/x3.png">|Paper|
|Publish<br>SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training <br> Jinda Jia, Cong Xie, Hanlin Lu, Daoce Wang, Hao Feng, Chengming Zhang, Baixi Sun, Haibin Lin, Zhi Zhang, Xin Liu, Dingwen Tao|<img width="1002" alt="image" src="https://arxiv.org/html/2410.15526v1/x2.png">|Paper|
|FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs <br> Haoran Lin, Xianzhi Yu, Kang Zhao, Lu Hou, Zongyuan Zhan et al|<img width="1002" alt="image" src="https://arxiv.org/html/2410.16663v1/x2.png">|Paper|
|POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference <br> Aditya K Kamath, Ramya Prabhu, Jayashree Mohan, Simon Peter, Ramachandran Ramjee, Ashish Panwar|<img width="1002" alt="image" src="https://arxiv.org/html/2410.18038v1/x5.png">|Paper|
|Star<br>TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices <br> Zonghang Li, Wenjiao Feng, Mohsen Guizani, Hongfang Yu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.00531v1/x4.png">|Github <br> Paper|
|Publish<br>Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores <br> Shaobo Ma, Chao Fang, Haikuo Shao, Zhongfeng Wang|<img width="1002" alt="image" src="https://arxiv.org/html/2409.17870v1/extracted/5882022/figures/bipolar_original2.png">|Paper|
|Publish<br>OPAL: Outlier-Preserved Microscaling Quantization Accelerator for Generative Large Language Models <br> Jahyun Koo, Dahoon Park, Sangwoo Jung, Jaeha Kung|<img width="1002" alt="image" src="https://arxiv.org/html/2409.05902v1/x5.png">|Paper|
|Accelerating Large Language Model Training with Hybrid GPU-based Compression <br> Lang Xu, Quentin Anthony, Qinghua Zhou, Nawras Alnaasan, Radha R. Gulhane, Aamir Shafi, Hari Subramoni, Dhabaleswar K. Panda|<img width="1002" alt="image" src="https://arxiv.org/html/2409.02423v1/extracted/5832005/Figures/mzhybrid-3d-rev.png">|Paper|

Tuning

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization <br> Huaqin Zhao, Jiaxi Li, Yi Pan, Shizhe Liang, Xiaofeng Yang, Wei Liu, Xiang Li, Fei Dou, Tianming Liu, Jin Lu|<img width="1002" alt="image" src="https://arxiv.org/html/2411.10696v1/x1.png">|Paper|
|Star<br>Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation <br> Ayan Sengupta, Vaibhav Seth, Arinjay Pathak, Natraj Raman, Sriram Gopalakrishnan, Tanmoy Chakraborty|<img width="1002" alt="image" src="https://arxiv.org/html/2411.04358v2/x3.png">|Github <br> Paper|
|Publish<br>MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning <br> Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.18035v1/extracted/5949512/em_lora_framework.png">|Paper|
|Star<br>RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates <br> Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi|<img width="1002" alt="image" src="https://github.com/Kowsher/RoCoFT/blob/main/figures/rocoft.png">|Github <br> Paper|
|Star Publish<br>Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models <br> Kai Yao, Penlei Gao, Lichun Li, Yuan Zhao, Xiaofeng Wang, Wei Wang, Jianke Zhu|<img width="1002" alt="image" src="https://arxiv.org/html/2410.11772v1/x3.png">|Github <br> Paper|
|Publish<br>Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning <br> Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay|<img width="1002" alt="image" src="https://arxiv.org/html/2410.08598v1/x1.png">|Paper|
|Star Publish<br>QEFT: Quantization for Efficient Fine-Tuning of LLMs <br> Changhun Lee, Jun-gyu Jin, Younghyun Cho, Eunhyeok Park|<img width="1002" alt="image" src="https://arxiv.org/html/2410.08661v1/x2.png">|Github <br> Paper|
|Star Publish<br>BIPEFT: Budget-Guided Iterative Search for Parameter Efficient Fine-Tuning of Large Pretrained Language Models <br> Aofei Chang, Jiaqi Wang, Han Liu, Parminder Bhatia, Cao Xiao, Ting Wang, Fenglong Ma|<img width="1002" alt="image" src="https://arxiv.org/html/2410.09079v1/x1.png">|Github <br> Paper|
|Star<br>SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers <br> Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan Oseledets|<img width="1002" alt="image" src="https://arxiv.org/html/2410.07383v1/x1.png">|Github <br> Paper|
|SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching <br> Tianyi Zhang, Junda Su, Oscar Wu, Zhaozhuo Xu, Anshumali Shrivastava|<img width="1002" alt="image" src="https://arxiv.org/html/2410.06364v1/x1.png">|Paper|
|Star<br>Bone: Block Affine Transformation as Parameter Efficient Fine-tuning Methods for Large Language Models <br> Jiale Kang|<img width="1002" alt="image" src="https://arxiv.org/html/2409.15371v1/extracted/5865415/imgs/bone-free.png">|Github <br> Paper|
|Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines <br> Lei Gao, Amir Ziashahabi, Yue Niu, Salman Avestimehr, Murali Annavaram|<img width="1002" alt="image" src="figures/P-RGE.png">|Paper|
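
Many entries above are variants of LoRA; the vanilla adapter they start from fits in a dozen lines. An illustrative sketch, not any specific paper's code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze a pretrained linear layer and learn a rank-r update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```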

Efficient Training

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|AutoMixQ: Self-Adjusting Quantization for High Performance Memory-Efficient Fine-Tuning <br> Changhai Zhou, Shiyang Zhang, Yuhua Zhou, Zekai Liu, Shichao Weng|<img width="1002" alt="image" src="figures/AutoMixQ.png">|Paper|
|Star Publish<br>Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention <br> Xingtai Lv, Ning Ding, Kaiyan Zhang, Ermo Hua, Ganqu Cui, Bowen Zhou|<img width="1002" alt="image" src="https://arxiv.org/html/2411.02063v1/x1.png">|Github <br> Paper|
|Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs <br> Yifei Zhang, Hao Zhu, Aiwei Liu, Han Yu, Piotr Koniusz, Irwin King|<img width="1002" alt="image" src="https://arxiv.org/html/2410.19694v1/x3.png">|Paper|
|Star<br>COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training <br> Haocheng Xi, Han Cai, Ligeng Zhu, Yao Lu, Kurt Keutzer, Jianfei Chen, Song Han|<img width="1002" alt="image" src="https://github.com/NVlabs/COAT/blob/main/docs/figs/FP8PrecisionFlow.png">|Github <br> Paper|
|Star<br>BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training <br> Houming Wu, Ling Chen, Wenjie Yu|<img width="1002" alt="image" src="https://github.com/wuhouming/BitPipe/raw/main/docs/BitPipe_images/BitPipe-v.svg">|Github <br> Paper|

Survey (or Benchmark)

|Title & Authors|Introduction|Links|
|:-|:-|:-|
|Closer Look at Efficient Inference Methods: A Survey of Speculative Decoding <br> Hyun Ryu, Eric Kim|<img width="1002" alt="image" src="https://arxiv.org/html/2411.13157v1/extracted/6012092/figure2.png">|Paper|
|Star<br>LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators <br> Krishna Teja Chitty-Venkata, Siddhisanket Raskar, Bharat Kale, Farah Ferdaus et al||Github <br> Paper|
|Star<br>Prompt Compression for Large Language Models: A Survey <br> Zongqian Li, Yinhong Liu, Yixuan Su, Nigel Collier|<img width="1002" alt="image" src="https://arxiv.org/html/2410.12388v2/extracted/5933385/Figures/tree_overview.png">|Github <br> Paper|
|Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective <br> Jinhao Li, Jiaming Xu, Shan Huang, Yonghua Chen, Wen Li, Jun Liu, Yaoxiu Lian, Jiayi Pan, Li Ding, Hao Zhou, Guohao Dai|<img width="1002" alt="image" src="https://arxiv.org/html/2410.04466v1/x4.png">|Paper|
|A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms <br> Ruihao Gong, Yifu Ding, Zining Wang, Chengtao Lv, Xingyu Zheng, Jinyang Du, Haotong Qin, Jinyang Guo, Michele Magno, Xianglong Liu|<img width="1002" alt="image" src="https://arxiv.org/html/2409.16694v1/x1.png">|Paper|
|Star<br>Contextual Compression in Retrieval-Augmented Generation for Large Language Models: A Survey <br> Sourav Verma|<img width="1002" alt="image" src="figures/CCRAG_survey.png">|Github <br> Paper|
|Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview <br> Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao||Paper|
|Hardware Acceleration of LLMs: A comprehensive survey and comparison <br> Nikoletta Koilia, Christoforos Kachris||Paper|
|A Survey on Symbolic Knowledge Distillation of Large Language Models <br> Kamal Acharya, Alvaro Velasquez, Houbing Herbert Song|<img width="1002" alt="image" src="https://arxiv.org/html/2408.10210v1/extracted/5727556/Images/DirectDistillation.png">|Paper|