# Awesome-Efficient-LLM
A curated list for Efficient Large Language Models
## Full List
- Network Pruning / Sparsity
- Knowledge Distillation
- Quantization
- Inference Acceleration
- Efficient MOE
- Efficient Architecture of LLM
- KV Cache Compression
- Text Compression
- Low-Rank Decomposition
- Hardware / System / Serving
- Tuning
- Efficient Training
- Survey or Benchmark
Please check out all the papers by selecting the sub-area you're interested in. On this main page, only papers released in the past 90 days are shown.
## 🚀 Updates
- May 29, 2024: We've had this awesome list for a year now :smiling_face_with_three_hearts:!
- Sep 6, 2023: Added a new subdirectory `project/` to organize efficient LLM projects.
- July 11, 2023: Created a new subdirectory `efficient_plm/` to house papers that are applicable to PLMs.
## 💮 Contributing
If you'd like to include your paper, or need to update any details such as conference information or code URLs, please feel free to submit a pull request. You can generate the required markdown format for each paper by filling in the information in `generate_item.py` and executing `python generate_item.py`. We warmly appreciate your contributions to this list. Alternatively, you can email me the links to your paper and code, and I will add your paper to the list at my earliest convenience.
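For reference, here is a minimal sketch of the kind of table row such a script produces. The dictionary fields and helper logic below are illustrative assumptions, not the actual interface of `generate_item.py`:

```python
# Minimal sketch: build one markdown table row in this list's
# "Title & Authors | Introduction | Links" format.
# The field names here are illustrative assumptions; see
# generate_item.py in the repository for the actual interface.

paper = {
    "title": "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot",
    "authors": "Elias Frantar, Dan Alistarh",
    "figure": "figures/sparsegpt.png",          # local figure or arXiv/GitHub image URL
    "github": "https://github.com/IST-DASLab/sparsegpt",
    "paper": "https://arxiv.org/abs/2301.00774",
}

# Links cell: include only the links that were provided.
links = " <br> ".join(
    f"[{name}]({url})"
    for name, url in [("Github", paper["github"]), ("Paper", paper["paper"])]
    if url
)

# Assemble the three cells of the table row.
row = (
    f'{paper["title"]} <br> {paper["authors"]} | '
    f'<img width="1002" alt="image" src="{paper["figure"]}"> | '
    f'{links} |'
)
print(row)
```

Running this prints a single row in the same format used by the tables below, which can then be pasted into the relevant topic section.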
## :star: Recommended Paper

For each topic, we have curated a list of recommended papers that have garnered many GitHub stars or citations.

Papers from August 17, 2024 to now (see the Full List, which covers papers since May 22, 2023, here)
## Quick Link
- Network Pruning / Sparsity
- Knowledge Distillation
- Quantization
- Inference Acceleration
- Efficient MOE
- Efficient Architecture of LLM
- KV Cache Compression
- Text Compression
- Low-Rank Decomposition
- Hardware / System / Serving
- Tuning
- Efficient Training
- Survey or Benchmark
## Network Pruning / Sparsity
Title & Authors | Introduction | Links |
---|---|---|
<br> :star: SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot <br> Elias Frantar, Dan Alistarh | <img width="522" alt="image" src="figures/sparsegpt.png"> | Github <br> Paper |
<br> :star: LLM-Pruner: On the Structural Pruning of Large Language Models <br> Xinyin Ma, Gongfan Fang, Xinchao Wang | <img width="561" alt="image" src="figures/llm_pruner.png"> | Github <br> Paper |
<br> :star: A Simple and Effective Pruning Approach for Large Language Models <br> Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter | <img width="1002" alt="image" src="https://user-images.githubusercontent.com/20168304/245999360-f951de47-269d-491d-826a-8e6d85627849.png"> | Github <br> Paper |
<br> :star: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning <br> Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen | <img width="1002" alt="image" src="figures/LLM-shearing.png"> | Github <br> Paper |
<br>Sparsing Law: Towards Large Language Models with Greater Activation Sparsity <br> Yuqi Luo, Chenyang Song, Xu Han, Yingfa Chen, Chaojun Xiao, Zhiyuan Liu, Maosong Sun | <img width="1002" alt="image" src="https://github.com/thunlp/SparsingLaw/raw/master/figs/sample.jpg"> | Github <br> Paper |
AVSS: Layer Importance Evaluation in Large Language Models via Activation Variance-Sparsity Analysis <br> Zichen Song, Yuxin Wu, Sitan Huang, Zhongfeng Kang | <img width="1002" alt="image" src="https://arxiv.org/html/2411.02117v1/x1.png"> | Paper |
Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts <br> Danyal Aftab, Steven Davy | <img width="1002" alt="image" src="https://arxiv.org/html/2410.19185v1/x1.png"> | Paper |
<br>LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment <br> Ge Yang, Changyi He, Jinyang Guo, Jianyu Wu, Yifu Ding, Aishan Liu, Haotong Qin, Pengliang Ji, Xianglong Liu | <img width="1002" alt="image" src="https://github.com/AboveParadise/LLMCBench/raw/main/figs/f1.png"> | Github <br> Paper |
Beyond 2:4: exploring V:N:M sparsity for efficient transformer inference on GPUs <br> Kang Zhao, Tao Yuan, Han Bao, Zhenfeng Su, Chang Gao, Zhaofeng Sun, Zichen Liang, Liping Jing, Jianfei Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.16135v1/x1.png"> | Paper |
<br>EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search <br> Oliver Sieberling, Denis Kuznedelev, Eldar Kurtic, Dan Alistarh | <img width="1002" alt="image" src="figures/evopress.png"> | Github <br> Paper |
FedSpaLLM: Federated Pruning of Large Language Models <br> Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, Kibaek Kim | <img width="1002" alt="image" src="https://arxiv.org/html/2410.14852v1/x1.png"> | Paper |
<br>Pruning Foundation Models for High Accuracy without Retraining <br> Pu Zhao, Fei Sun, Xuan Shen, Pinrui Yu, Zhenglun Kong, Yanzhi Wang, Xue Lin | | Github <br> Paper |
Self-calibration for Language Model Quantization and Pruning <br> Miles Williams, George Chrysostomou, Nikolaos Aletras | <img width="1002" alt="image" src="https://arxiv.org/html/2410.17170v1/x1.png"> | Paper |
Beware of Calibration Data for Pruning Large Language Models <br> Yixin Ji, Yang Xiang, Juntao Li, Qingrong Xia, Ping Li, Xinyu Duan, Zhefeng Wang, Min Zhang | | Paper |
<br>AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models <br> Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W. Mahoney, Yaoqing Yang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.10912v1/x1.png"> | Github <br> Paper |
Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix <br> Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yufa Zhou | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11261v1/x1.png"> | Paper |
<br>DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models <br> Shangqian Gao, Chi-Heng Lin, Ting Hua, Tang Zheng, Yilin Shen, Hongxia Jin, Yen-Chang Hsu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11988v1/x1.png"> | Paper |
<br>Self-Data Distillation for Recovering Quality in Pruned Large Language Models <br> Vithursan Thangarasa, Ganesh Venkatesh, Nish Sinnadurai, Sean Lie | <img width="1002" alt="image" src="https://arxiv.org/html/2410.09982v2/x1.png"> | Paper |
LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models <br> David Hoffmann, Kailash Budhathoki, Matthaeus Kleindessner | <img width="1002" alt="image" src="https://arxiv.org/html/2410.13299v1/extracted/5931028/img/llm_to_mlp.png"> | Paper |
<br>Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning <br> Abhinav Bandari, Lu Yin, Cheng-Yu Hsieh, Ajay Kumar Jaiswal, Tianlong Chen, Li Shen, Ranjay Krishna, Shiwei Liu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.07461v1/x1.png"> | Github <br> Paper |
Mitigating Copy Bias in In-Context Learning through Neuron Pruning <br> Ameen Ali, Lior Wolf, Ivan Titov | <img width="1002" alt="image" src="figures/copy_icl.png"> | Paper |
<br>MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models <br> Gongfan Fang, Hongxu Yin, Saurav Muralidharan, Greg Heinrich, Jeff Pool, Jan Kautz, Pavlo Molchanov, Xinchao Wang | <img width="302" alt="image" src="https://github.com/NVlabs/MaskLLM/blob/main/assets/animation-LQ.gif"> | Github <br> Paper |
<br>Search for Efficient Large Language Models <br> Xuan Shen, Pu Zhao, Yifan Gong, Zhenglun Kong, Zheng Zhan, Yushu Wu, Ming Lin, Chao Wu, Xue Lin, Yanzhi Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17372v1/x2.png"> | Paper |
<br>CFSP: An Efficient Structured Pruning Framework for LLMs with Coarse-to-Fine Activation Information <br> Yuxin Wang, Minghua Ma, Zekun Wang, Jingchang Chen, Huiming Fan, Liping Shan, Qing Yang, Dongliang Xu, Ming Liu, Bing Qin | <img width="1002" alt="image" src="https://arxiv.org/html/2409.13199v1/x1.png"> | Github <br> Paper |
OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition <br> Stephen Zhang, Vardan Papyan | | Paper |
KVPruner: Structural Pruning for Faster and Memory-Efficient Large Language Models <br> Bo Lv, Quan Zhou, Xuanang Ding, Yan Wang, Zeming Ma | <img width="302" alt="image" src="https://arxiv.org/html/2409.11057v1/x2.png"> | Paper |
Evaluating the Impact of Compression Techniques on Task-Specific Performance of Large Language Models <br> Bishwash Khanal, Jeffery M. Capone | <img width="1002" alt="image" src="https://arxiv.org/html/2409.11233v1/extracted/5860861/images/GPT4template.jpg"> | Paper |
STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning <br> Jaeseong Lee, seung-won hwang, Aurick Qiao, Daniel F Campos, Zhewei Yao, Yuxiong He | <img width="1002" alt="image" src="https://arxiv.org/html/2409.06211v1/x1.png"> | Paper |
<br>PAT: Pruning-Aware Tuning for Large Language Models <br> Yijiang Liu, Huanrui Yang, Youxin Chen, Rongyu Zhang, Miao Wang, Yuan Du, Li Du | <img width="1002" alt="image" src="figures/PAT.png"> | Github <br> Paper |
LLM Pruning and Distillation in Practice: The Minitron Approach <br> Sharath Turuvekere Sreenivas, Saurav Muralidharan, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, Pavlo Molchanov | <img width="1002" alt="image" src="https://arxiv.org/html/2408.11796v2/x1.png"> | Paper |
Language-specific Calibration for Pruning Multilingual Language Models <br> Simon Kurz, Zhixue Zhao, Jian-Jia Chen, Lucie Flek | | Paper |
<br>LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models <br> Yupeng Su, Ziyi Guan, Xiaoqun Liu, Tianlai Jin, Dongkuan Wu, Graziano Chesi, Ngai Wong, Hao Yu | <img width="1002" alt="image" src="https://github.com/YupengSu/LLM-Barber/raw/main/img/figure1a.png"> | Github <br> Paper |
Enhancing One-shot Pruned Pre-trained Language Models through Sparse-Dense-Sparse Mechanism <br> Guanchen Li, Xiandong Zhao, Lian Liu, Zeping Li, Dong Li, Lu Tian, Jie He, Ashish Sirasao, Emad Barsoum | <img width="1002" alt="image" src="https://arxiv.org/html/2408.10473v1/x1.png"> | Paper |
## Knowledge Distillation
Title & Authors | Introduction | Links |
---|---|---|
:star: Knowledge Distillation of Large Language Models <br> Yuxian Gu, Li Dong, Furu Wei, Minlie Huang | <img width="1002" alt="image" src="https://github.com/microsoft/LMOps/blob/main/minillm/figures/method.png"> | Github <br> Paper |
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models <br> Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung | <img width="1002" alt="image" src="figures/switch.png"> | Paper |
<br>Beyond Autoregression: Fast LLMs via Self-Distillation Through Time <br> Justin Deschenaux, Caglar Gulcehre | <img width="1002" alt="image" src="https://arxiv.org/html/2410.21035v1/x3.png"> | Github <br> Paper |
Pre-training Distillation for Large Language Models: A Design Space Exploration <br> Hao Peng, Xin Lv, Yushi Bai, Zijun Yao, Jiajie Zhang, Lei Hou, Juanzi Li | | Paper |
<br>MiniPLM: Knowledge Distillation for Pre-Training Language Models <br> Yuxian Gu, Hao Zhou, Fandong Meng, Jie Zhou, Minlie Huang | <img width="1002" alt="image" src="https://github.com/thu-coai/MiniPLM/raw/main/figures/method.png"> | Github <br> Paper |
Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling <br> Wenda Xu, Rujun Han, Zifeng Wang, Long T. Le, Dhruv Madeka, Lei Li, William Yang Wang, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11325v1/x2.png"> | Paper |
Evolutionary Contrastive Distillation for Language Model Alignment <br> Julian Katz-Samuels, Zheng Li, Hyokun Yun, Priyanka Nigam, Yi Xu, Vaclav Petricek, Bing Yin, Trishul Chilimbi | <img width="1002" alt="image" src="https://arxiv.org/html/2410.07513v1/extracted/5913898/figures/main_alg_v3.png"> | Paper |
BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data <br> Jean-Loup Tastet, Inar Timiryasov | | Paper |
EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models <br> Hossein Rajabzadeh, Aref Jafari, Aman Sharma, Benyamin Jami, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh | <img width="1002" alt="image" src="https://arxiv.org/html/2409.14595v1/extracted/5869635/Figs/shared_attention_diagram.png"> | Paper |
<br>SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models <br> Huanxuan Liao, Shizhu He, Yupu Hao, Xiang Li, Yuanzhe Zhang, Kang Liu, Jun Zhao | <img width="1002" alt="image" src="https://arxiv.org/html/2409.13183v1/x1.png"> | Github <br> Paper |
<br>LLMR: Knowledge Distillation with a Large Language Model-Induced Reward <br> Dongheng Li, Yongchang Hao, Lili Mou | <img width="1002" alt="image" src="https://github.com/MANGA-UOFA/Prompt-LLMR/blob/main/LLMR-main/assets/model.png"> | Github <br> Paper |
Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models <br> Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao | <img width="1002" alt="image" src="https://arxiv.org/html/2409.12512v1/x1.png"> | Paper |
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights <br> Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger | <img width="1002" alt="image" src="https://arxiv.org/html/2409.12586v1/x2.png"> | Paper |
<br>The Mamba in the Llama: Distilling and Accelerating Hybrid Models <br> Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao | <img width="1002" alt="image" src="https://arxiv.org/html/2408.15237v1/x1.png"> | Github <br> Paper |
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation <br> KaShun Shum, Minrui Xu, Jianshu Zhang, Zixin Chen, Shizhe Diao, Hanze Dong, Jipeng Zhang, Muhammad Omer Raza | <img width="1002" alt="image" src="https://arxiv.org/html/2408.12168v1/extracted/5806746/Figures/trustworthy.png"> | Paper |
Interactive DualChecker for Mitigating Hallucinations in Distilling Large Language Models <br> Meiyun Wang, Masahiro Suzuki, Hiroki Sakaji, Kiyoshi Izumi | <img width="1002" alt="image" src="https://arxiv.org/html/2408.12326v1/extracted/5806761/figs/intro.jpg"> | Paper |
Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models <br> Aviv Bick, Kevin Y. Li, Eric P. Xing, J. Zico Kolter, Albert Gu | <img width="1002" alt="image" src="https://arxiv.org/html/2408.10189v1/x1.png"> | Paper |
Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting <br> Emmanuel Aboah Boateng, Cassiano O. Becker, Nabiha Asghar, Kabir Walia, Ashwin Srinivasan, Ehi Nosakhare, Victor Dibia, Soundar Srinivasan | <img width="1002" alt="image" src="https://arxiv.org/html/2408.09365v1/x2.png"> | Paper |
LaDiMo: Layer-wise Distillation Inspired MoEfier <br> Sungyoon Kim, Youngjun Kim, Kihyo Moon, Minsung Jang | <img width="1002" alt="image" src="https://arxiv.org/html/2408.04278v1/extracted/5780689/figures/moefier.png"> | Paper |
## Quantization
Title & Authors | Introduction | Links |
---|---|---|
<br> :star: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers <br> Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh | <img width="202" alt="image" src="figures/GPTQ.png"> | Github <br> Paper |
<br> :star: SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models <br> Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han | <img width="1002" alt="image" src="https://github.com/mit-han-lab/smoothquant/blob/main/figures/intuition.png"> | Github <br> Paper |
<br> :star: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration <br> Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, Song Han | <img width="1002" alt="image" src="https://github.com/mit-han-lab/llm-awq/blob/main/figures/overview.png"> | Github <br> Paper |
<br> :star: OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models <br> Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo | <img width="1002" alt="image" src="figures/omniquant.png"> | Github <br> Paper |
"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization <br> Eldar Kurtic, Alexandre Marques, Shubhra Pandit, Mark Kurtz, Dan Alistarh | Paper | |
GWQ: Gradient-Aware Weight Quantization for Large Language Models <br> Yihua Shao, Siyu Liang, Xiaolin Lin, Zijian Ling, Zixian Zhu et al | <img width="1002" alt="image" src="https://arxiv.org/html/2411.00850v1/x2.png"> | Paper |
A Comprehensive Study on Quantization Techniques for Large Language Models <br> Jiedong Lang, Zhehao Guo, Shuyu Huang | Paper | |
BitNet a4.8: 4-bit Activations for 1-bit LLMs <br> Hongyu Wang, Shuming Ma, Furu Wei | <img width="1002" alt="image" src="https://arxiv.org/html/2411.04965v1/x1.png"> | Paper |
<br>TesseraQ: Ultra Low-Bit LLM Post-Training Quantization with Block Reconstruction <br> Yuhang Li, Priyadarshini Panda | <img width="1002" alt="image" src="https://github.com/Intelligent-Computing-Lab-Yale/TesseraQ/raw/main/imgs/tesseraq.png"> | Github <br> Paper |
<br>BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments <br> Xinghao Wang, Pengyu Wang, Bo Wang, Dong Zhang, Yunhua Zhou, Xipeng Qiu | <img width="1002" alt="image" src="https://github.com/xinghaow99/BitStack/raw/main/assets/bitstack.png"> | Github <br> Paper |
The Impact of Inference Acceleration Strategies on Bias of LLMs <br> Elisabeth Kirsten, Ivan Habernal, Vedant Nanda, Muhammad Bilal Zafar | | Paper |
Understanding the difficulty of low-precision post-training quantization of large language models <br> Zifei Xu, Sayeh Sharify, Wanzin Yazar, Tristan Webb, Xin Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.14570v1/extracted/5935973/figures/fig1.png"> | Paper |
<br>1-bit AI Infra: Part 1.1, Fast and Lossless BitNet b1.58 Inference on CPUs <br> Jinheng Wang, Hansong Zhou, Ting Song, Shaoguang Mao, Shuming Ma, Hongyu Wang, Yan Xia, Furu Wei | <img width="1002" alt="image" src="https://arxiv.org/html/2410.16144v2/x1.png"> | Github <br> Paper |
QuAILoRA: Quantization-Aware Initialization for LoRA <br> Neal Lawton, Aishwarya Padmakumar, Judith Gaspers, Jack FitzGerald, Anoop Kumar, Greg Ver Steeg, Aram Galstyan | | Paper |
Evaluating Quantized Large Language Models for Code Generation on Low-Resource Language Benchmarks <br> Enkhbold Nyamsuren | | Paper |
<br> :star: SqueezeLLM: Dense-and-Sparse Quantization <br>Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, Kurt Keutzer | <img width="1102" alt="image" src="figures/SqueezeLLM.png"> | Github <br> Paper |
Pyramid Vector Quantization for LLMs <br> Tycho F. A. van der Ouderaa, Maximilian L. Croci, Agrin Hilmkil, James Hensman | <img width="1002" alt="image" src="https://arxiv.org/html/2410.16926v1/x1.png"> | Paper |
SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators <br> Rasoul Shafipour, David Harrison, Maxwell Horton, Jeffrey Marker, Houman Bedayat, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi, Saman Naderiparizi | <img width="1002" alt="image" src="https://arxiv.org/html/2410.10714v2/x1.png"> | Paper |
<br>FlatQuant: Flatness Matters for LLM Quantization <br> Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao | <img width="1002" alt="image" src="https://arxiv.org/html/2410.09426v1/x11.png"> | Github <br> Paper |
<br>SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs <br> Mohammad Mozaffari, Maryam Mehri Dehnavi | <img width="1002" alt="image" src="https://arxiv.org/html/2410.09615v1/x1.png"> | Github <br> Paper |
Scaling laws for post-training quantized large language models <br> Zifei Xu, Alexander Lan, Wanzin Yazar, Tristan Webb, Sayeh Sharify, Xin Wang | <img width="202" alt="image" src="https://arxiv.org/html/2410.12119v1/extracted/5929616/figures/fig_12.png"> | Paper |
Continuous Approximations for Improving Quantization Aware Training of LLMs <br> He Li, Jianhang Hong, Yuanzhuo Wu, Snehal Adbol, Zonglin Li | | Paper |
<br>DAQ: Density-Aware Post-Training Weight-Only Quantization For LLMs <br> Yingsong Luo, Ling Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.12187v2/x1.png"> | Github <br> Paper |
<br>Quamba: A Post-Training Quantization Recipe for Selective State Space Models <br> Hung-Yueh Chiang, Chi-Chih Chang, Natalia Frumkin, Kai-Chiang Wu, Diana Marculescu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.13229v1/extracted/5933363/figures/outliers.png"> | Github <br> Paper |
AsymKV: Enabling 1-Bit Quantization of KV Cache with Layer-Wise Asymmetric Quantization Configurations <br> Qian Tao, Wenyuan Yu, Jingren Zhou | <img width="1002" alt="image" src="https://arxiv.org/html/2410.13212v1/extracted/5933292/figures/kvmix.png"> | Paper |
Channel-Wise Mixed-Precision Quantization for Large Language Models <br> Zihan Chen, Bike Xie, Jundong Li, Cong Shen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.13056v1/x1.png"> | Paper |
Progressive Mixed-Precision Decoding for Efficient LLM Inference <br> Hao Mark Chen, Fuwen Tan, Alexandros Kouris, Royson Lee, Hongxiang Fan, Stylianos I. Venieris | <img width="1002" alt="image" src="https://arxiv.org/html/2410.13461v1/x4.png"> | Paper |
<br>EXAQ: Exponent Aware Quantization For LLMs Acceleration <br> Moran Shkolnik, Maxim Fishman, Brian Chmiel, Hilla Ben-Yaacov, Ron Banner, Kfir Yehuda Levy | <img width="1002" alt="image" src="figures/EXAQ.png"> | Github <br> Paper |
<br>PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs <br> Mengzhao Chen, Yi Liu, Jiahao Wang, Yi Bin, Wenqi Shao, Ping Luo | <img width="1002" alt="image" src="https://arxiv.org/html/2410.05265v1/x1.png"> | Github <br> Paper |
<br> :star: Extreme Compression of Large Language Models via Additive Quantization <br> Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh | <img width="1002" alt="image" src="figures/MCQ.png"> | Github <br> Paper |
Scaling Laws for Mixed Quantization in Large Language Models <br> Zeyu Cao, Cheng Zhang, Pedro Gimenes, Jianqiao Lu, Jianyi Cheng, Yiren Zhao | <img width="1002" alt="image" src="figures/LLM-MPQ.png"> | Paper |
PalmBench: A Comprehensive Benchmark of Compressed Large Language Models on Mobile Platforms <br> Yilong Li, Jingyu Liu, Hao Zhang, M Badri Narayanan, Utkarsh Sharma, Shuai Zhang, Pan Hu, Yijing Zeng, Jayaram Raghuram, Suman Banerjee | <img width="1002" alt="image" src="figures/PalmBench.png"> | Paper |
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression <br> Wenyuan Liu, Xindian Ma, Peng Zhang, Yan Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.07505v1/x1.png"> | Paper |
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration <br> Jintao Zhang, Jia Wei, Pengle Zhang, Jun Zhu, Jianfei Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.02367v1/x5.png"> | Paper |
Addition is All You Need for Energy-efficient Language Models <br> Hongyin Luo, Wei Sun | <img width="1002" alt="image" src="https://arxiv.org/html/2410.00907v1/x2.png"> | Paper |
<br>VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models <br> Yifei Liu, Jicheng Wen, Yang Wang, Shengyu Ye, Li Lyna Zhang, Ting Cao, Cheng Li, Mao Yang | <img width="1002" alt="image" src="figures/VPTQ.png"> | Github <br> Paper |
<br>INT-FlashAttention: Enabling Flash Attention for INT8 Quantization <br> Shimao Chen, Zirui Liu, Zhiying Wu, Ce Zheng, Peizhuang Cong, Zihan Jiang, Yuhan Wu, Lei Su, Tong Yang | <img width="1002" alt="image" src="https://arxiv.org/html/2409.16997v2/x1.png"> | Github <br> Paper |
Accumulator-Aware Post-Training Quantization <br> Ian Colbert, Fabian Grob, Giuseppe Franco, Jinjie Zhang, Rayan Saab | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17092v1/x2.png"> | Paper |
<br>DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs <br> Haokun Lin, Haobo Xu, Yichen Wu, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun, Ying Wei | <img width="1002" alt="image" src="https://github.com/Hsu1023/DuQuant/blob/main/imgs/duquant.png"> | Github <br> Paper |
A Comprehensive Evaluation of Quantized Instruction-Tuned Large Language Models: An Experimental Analysis up to 405B <br> Jemin Lee, Sihyeong Park, Jinse Kwon, Jihun Oh, Yongin Kwon | <img width="1002" alt="image" src="https://arxiv.org/html/2409.11055v1/x1.png"> | Paper |
The Uniqueness of LLaMA3-70B with Per-Channel Quantization: An Empirical Study <br> Minghai Qin | <img width="1002" alt="image" src="https://arxiv.org/html/2408.15301v1/extracted/5797059/LaTeX/figures/llama3-70b-series-accuracy.png"> | Paper |
Matmul or No Matmul in the Era of 1-bit LLMs <br> Jinendra Malekar, Mohammed E. Elbtity, Ramtin Zand | <img width="1002" alt="image" src="https://arxiv.org/html/2408.11939v1/extracted/5805924/figures/matmul.png"> | Paper |
<br>MobileQuant: Mobile-friendly Quantization for On-device Language Models <br> Fuwen Tan, Royson Lee, Łukasz Dudziak, Shell Xu Hu, Sourav Bhattacharya, Timothy Hospedales, Georgios Tzimiropoulos, Brais Martinez | <img width="1002" alt="image" src="https://arxiv.org/html/2408.13933v1/x1.png"> | Github <br> Paper |
<br>ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models <br> Chao Zeng, Songwei Liu, Yusheng Xie, Hong Liu, Xiaojian Wang, Miao Wei, Shu Yang, Fangmin Chen, Xing Mei | <img width="1002" alt="image" src="figures/abq-llm.png"> | Github <br> Paper |
STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs <br> Peijie Dong, Lujun Li, Dayou Du, Yuhan Chen, Zhenheng Tang, Qiang Wang, Wei Xue, Wenhan Luo, Qifeng Liu, Yike Guo, Xiaowen Chu | <img width="1002" alt="image" src="https://arxiv.org/html/2408.01803v1/extracted/5772020/pic/basic_block.png"> | Paper |
## Inference Acceleration
Title & Authors | Introduction | Links |
---|---|---|
<br> :star: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time <br> Zichang Liu, Jue WANG, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen | <img width="202" alt="image" src="figures/DajeVu.png"> | Github <br> Paper |
<br> :star: SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification <br> Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia | <img width="600" alt="image" src="https://github.com/flexflow/FlexFlow/blob/inference/img/overview.png"> | Github <br> Paper |
<br> :star: Efficient Streaming Language Models with Attention Sinks <br> Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis | <img width="1002" alt="image" src="https://github.com/mit-han-lab/streaming-llm/blob/main/figures/schemes.png"> | Github <br> Paper |
<br>:star: EAGLE: Lossless Acceleration of LLM Decoding by Feature Extrapolation <br> Yuhui Li, Chao Zhang, and Hongyang Zhang | <img width="302" alt="image" src="https://github.com/SafeAILab/EAGLE/blob/main/figs/fig1.png"> | Github <br> Blog |
<br> :star: Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads <br> Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao | <img width="1002" alt="image" src="https://arxiv.org/html/2401.10774v1/x1.png"> | Github <br> Paper |
<br>SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents <br> Dawei Li, Zhen Tan, Peijia Qian, Yifan Li, Kumar Satvik Chaudhary, Lijie Hu, Jiayi Shen | <img width="1002" alt="image" src="figures/SMoA.png"> | Github <br> Paper |
The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation <br> Lawrence Stewart, Matthew Trager, Sujan Kumar Gonugondla, Stefano Soatto | | Paper |
Accelerated AI Inference via Dynamic Execution Methods <br> Haim Barad, Jascha Achterberg, Tien Pei Chou, Jean Yu | | Paper |
SuffixDecoding: A Model-Free Approach to Speeding Up Large Language Model Inference <br> Gabriele Oliaro, Zhihao Jia, Daniel Campos, Aurick Qiao | <img width="1002" alt="image" src="https://arxiv.org/html/2411.04975v1/x1.png"> | Paper |
Dynamic Strategy Planning for Efficient Question Answering with Large Language Models <br> Tanmay Parekh, Pradyot Prakash, Alexander Radovic, Akshay Shekher, Denis Savenkov | <img width="1002" alt="image" src="https://arxiv.org/html/2410.23511v1/x1.png"> | Paper |
<br>MagicPIG: LSH Sampling for Efficient LLM Generation <br> Zhuoming Chen, Ranajoy Sadhukhan, Zihao Ye, Yang Zhou, Jianyu Zhang, Niklas Nolte, Yuandong Tian, Matthijs Douze, Leon Bottou, Zhihao Jia, Beidi Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.16179v2/x15.png"> | Github <br> Paper |
Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition <br> Artem Basharin, Andrei Chertkov, Ivan Oseledets | <img width="1002" alt="image" src="figures/canonical_tensor_decomposition.png"> | Paper |
Efficient Inference for Augmented Large Language Models <br> Rana Shahout, Cong Liang, Shiji Xin, Qianru Lao, Yong Cui, Minlan Yu, Michael Mitzenmacher | <img width="1002" alt="image" src="https://arxiv.org/html/2410.18248v1/extracted/5949546/figures/illustrations/api_example_png.png"> | Paper |
<br>Dynamic Vocabulary Pruning in Early-Exit LLMs <br> Jort Vincenti, Karim Abdel Sadek, Joan Velja, Matteo Nulli, Metod Jazbec | <img width="1002" alt="image" src="https://github.com/MatteoNulli/Vocabulary_pruning/raw/main/src/images/final_nips.svg"> | Github <br> Paper |
<br>CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Activation <br> Qinsi Wang, Saeed Vahidian, Hancheng Ye, Jianyang Gu, Jianyi Zhang, Yiran Chen | <img width="1002" alt="image" src="https://wangqinsi1.github.io/coreinfer_page/static/images/overview.png"> | Github <br> Paper |
<br>DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads <br> Guangxuan Xiao, Jiaming Tang, Jingwei Zuo, Junxian Guo, Shang Yang, Haotian Tang, Yao Fu, Song Han | <img width="1002" alt="image" src="https://github.com/mit-han-lab/duo-attention/raw/main/figures/method1.jpg"> | Github <br> Paper |
DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure <br> Yunfan Xiong, Ruoyu Zhang, Yanzeng Li, Tianhao Wu, Lei Zou | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11744v1/extracted/5913908/figures/tree_bold.png"> | Paper |
QSpec: Speculative Decoding with Complementary Quantization Schemes <br> Juntao Zhao, Wenhao Lu, Sheng Wang, Lingpeng Kong, Chuan Wu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11305v1/x1.png"> | Paper |
TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention <br> Lijie Yang, Zhihao Zhang, Zhuofu Chen, Zikun Li, Zhihao Jia | <img width="1002" alt="image" src="https://arxiv.org/html/2410.05076v1/x2.png"> | Paper |
ParallelSpec: Parallel Drafter for Efficient Speculative Decoding <br> Zilin Xiao, Hongming Zhang, Tao Ge, Siru Ouyang, Vicente Ordonez, Dong Yu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.05589v1/x1.png"> | Paper |
<br>SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration <br> Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, Wenjie Li | <img width="1002" alt="image" src="https://github.com/hemingkx/SWIFT/raw/main/assets/swift.png"> | Github <br> Paper |
<br>TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text <br> Songshuo Lu, Hua Wang, Yutian Rong, Zhi Chen, Yaohua Tang | <img width="1002" alt="image" src="https://github.com/MooreThreads/TurboRAG/raw/main/assets/image/TurboRAG.png"> | Github <br> Paper |
A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts <br> Suyu Ge, Xihui Lin, Yunan Zhang, Jiawei Han, Hao Peng | <img width="1002" alt="image" src="https://arxiv.org/html/2410.01485v1/extracted/5895696/figures/model_architecture.png"> | Paper |
Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations <br> Amey Agrawal, Junda Chen, Íñigo Goiri, Ramachandran Ramjee, Chaojie Zhang, Alexey Tumanov, Esha Choukse | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17264v1/x14.png"> | Paper |
<br>Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction <br> Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17422v1/x1.png"> | Github <br> Paper |
Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference <br> Zongyue Qin, Zifan He, Neha Prakriya, Jason Cong, Yizhou Sun | <img width="1002" alt="image" src="https://arxiv.org/html/2409.16560v1/x6.png"> | Paper |
<br>CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs <br> Junlin Lv, Yuan Feng, Xike Xie, Xin Jia, Qirong Peng, Guiming Xie | <img width="1002" alt="image" src="https://arxiv.org/html/2409.12490v1/x2.png"> | Github <br> Paper |
RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval <br> Di Liu, Meng Chen, Baotong Lu, Huiqiang Jiang, Zhenhua Han, Qianxi Zhang, Qi Chen, Chengruidong Zhang, Bailu Ding, Kai Zhang, Chen Chen, Fan Yang, Yuqing Yang, Lili Qiu | <img width="1002" alt="image" src="https://arxiv.org/html/2409.10516v2/x4.png"> | Paper |
<br>Sirius: Contextual Sparsity with Correction for Efficient LLMs <br> Yang Zhou, Zhuoming Chen, Zhaozhuo Xu, Victoria Lin, Beidi Chen | <img width="1002" alt="image" src="https://infini-ai-lab.github.io/Sirius/static/images/methodsillustration.png"> | Github <br> Paper |
<br>OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs <br> Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang | <img width="1002" alt="image" src="https://github.com/zjunlp/OneGen/blob/main/assets/train.jpg"> | Github <br> Paper |
Path-Consistency: Prefix Enhancement for Efficient Inference in LLM <br> Jiace Zhu, Yingtao Shen, Jie Zhao, An Zou | <img width="1002" alt="image" src="https://arxiv.org/html/2409.01281v1/x1.png"> | Paper |
Boosting Lossless Speculative Decoding via Feature Sampling and Partial Alignment Distillation <br> Lujun Gui, Bin Xiao, Lei Su, Weipeng Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2408.15562v1/extracted/5818109/structure_0.png"> | Paper |
Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling <br> Xianzhen Luo, Yixuan Wang, Qingfu Zhu, Zhiming Zhang, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che | <img width="202" alt="image" src="https://arxiv.org/html/2408.08696v1/x1.png"> | Paper |
Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion <br> Jacob K Christopher, Brian R Bartoldson, Bhavya Kailkhura, Ferdinando Fioretto | <img width="1002" alt="image" src="https://arxiv.org/html/2408.05636v1/x1.png"> | Paper |
## Efficient MOE
Title & Authors | Introduction | Links |
---|---|---|
<br>:star: Fast Inference of Mixture-of-Experts Language Models with Offloading <br> Artyom Eliseev, Denis Mazur | <img width="1002" alt="image" src="figures/mixtral_offloading.png"> | Github <br> Paper |
<br>MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization <br> Jingming Guo, Yan Liu, Yu Meng, Zhiwei Tao, Banglan Liu, Gang Chen, Xiang Li | <img width="1002" alt="image" src="https://arxiv.org/html/2411.00662v1/x1.png"> | Github <br> Paper |
<br>MoE-I2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition <br> Cheng Yang, Yang Sui, Jinqi Xiao, Lingyi Huang, Yu Gong, Yuanlin Duan, Wenqi Jia, Miao Yin, Yu Cheng, Bo Yuan | <img width="1002" alt="image" src="https://arxiv.org/html/2411.01016v1/x1.png"> | Github <br> Paper |
HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference <br> Peng Tang, Jiacheng Liu, Xiaofeng Hou, Yifei Pu, Jing Wang, Pheng-Ann Heng, Chao Li, Minyi Guo | <img width="1002" alt="image" src="https://arxiv.org/html/2411.01433v2/extracted/5980843/figures/overview5.png"> | Paper |
ProMoE: Fast MoE-based LLM Serving using Proactive Caching <br> Xiaoniu Song, Zihang Zhong, Rong Chen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.22134v1/x1.png"> | Paper |
ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference <br> Xin He, Shunkang Zhang, Yuxin Wang, Haiyan Yin, Zihao Zeng, Shaohuai Shi, Zhenheng Tang, Xiaowen Chu, Ivor Tsang, Ong Yew Soon | <img width="202" alt="image" src="https://arxiv.org/html/2410.17954v1/x1.png"> | Paper |
EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference <br> Yulei Qian, Fengcun Li, Xiangyang Ji, Xiaoyu Zhao, Jianchao Tan, Kefeng Zhang, Xunliang Cai | | Paper |
<br>MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More <br> Wei Huang, Yue Liao, Jianhui Liu, Ruifei He, Haoru Tan, Shiming Zhang, Hongsheng Li, Si Liu, Xiaojuan Qi | <img width="1002" alt="image" src="https://github.com/Aaronhuang-778/MC-MoE/raw/main/imgs/WX20241009-191322@2x.png"> | Github <br> Paper |
## Efficient Architecture of LLM
Title & Authors | Introduction | Links |
---|---|---|
<br>:star: MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT <br> Omkar Thawakar, Ashmal Vayani, Salman Khan, Hisham Cholakal, Rao M. Anwer, Michael Felsberg, Tim Baldwin, Eric P. Xing, Fahad Shahbaz Khan | <img width="402" alt="image" src="https://github.com/mbzuai-oryx/MobiLlama/raw/main/images/mobillama_generation.gif"> | Github <br> Paper <br>Model |
<br>:star: Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length <br> Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou | <img width="1002" alt="image" src="figures/megalodon.png"> | Github <br> Paper |
Taipan: Efficient and Expressive State Space Language Models with Selective Attention <br> Chien Van Nguyen, Huy Huu Nguyen, Thang M. Pham, Ruiyi Zhang, Hanieh Deilamsalehy, Puneet Mathur, Ryan A. Rossi, Trung Bui, Viet Dac Lai, Franck Dernoncourt, Thien Huu Nguyen | <img width="1002" alt="image" src="https://arxiv.org/html/2410.18572v1/x2.png"> | Paper |
<br>SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs <br> Yizhao Gao, Zhichen Zeng, Dayou Du, Shijie Cao, Hayden Kwok-Hay So, Ting Cao, Fan Yang, Mao Yang | <img width="202" alt="image" src="https://arxiv.org/html/2410.13276v1/x4.png"> | Github <br> Paper |
<br>Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression <br> Jingcun Wang, Yu-Guang Chen, Ing-Chao Lin, Bing Li, Grace Li Zhang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.03765v1/x1.png"> | Github <br> Paper |
Rodimus*: Breaking the Accuracy-Efficiency Trade-Off with Efficient Attentions <br> Zhihao He, Hang Yu, Zi Gong, Shizhan Liu, Jianguo Li, Weiyao Lin | <img width="1002" alt="image" src="https://arxiv.org/html/2410.06577v1/x3.png"> | Paper |
## KV Cache Compression
Title & Authors | Introduction | Links |
---|---|---|
:star: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs <br> Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao | <img width="1002" alt="image" src="figures/FastGen.png"> | Paper |
TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection <br> Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun Fu, Zheng Wang, Hui Xiong | <img width="1002" alt="image" src="https://arxiv.org/html/2411.02886v1/x1.png"> | Paper |
<br>Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning <br> Yu Fu, Zefan Cai, Abedelkadir Asi, Wayne Xiong, Yue Dong, Wen Xiao | <img width="1002" alt="image" src="https://github.com/FYYFU/HeadKV/raw/main/main.png"> | Github <br> Paper |
<br>BUZZ: Beehive-structured Sparse KV Cache with Segmented Heavy Hitters for Efficient LLM Inference <br> Junqi Zhao, Zhijin Fang, Shu Li, Shaohui Yang, Shichao He | <img width="1002" alt="image" src="https://arxiv.org/html/2410.23079v1/x1.png"> | Github <br> Paper |
<br>A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference <br> You Wu, Haoyi Wu, Kewei Tu | <img width="202" alt="image" src="figures/cross-layer-kv.png"> | Github <br> Paper |
Lossless KV Cache Compression to 2% <br> Zhen Yang, J.N.Han, Kan Wu, Ruobing Xie, An Wang, Xingwu Sun, Zhanhui Kang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.15252v1/extracted/5937225/images/CLLA_Overview.png"> | Paper |
MatryoshkaKV: Adaptive KV Compression via Trainable Orthogonal Projection <br> Bokai Lin, Zihao Zeng, Zipeng Xiao, Siqi Kou, Tianqi Hou, Xiaofeng Gao, Hao Zhang, Zhijie Deng | <img width="1002" alt="image" src="https://arxiv.org/html/2410.14731v1/x2.png"> | Paper |
<br>Residual vector quantization for KV cache compression in large language model <br> Ankur Kumar | | Github <br> Paper |
<br>KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing <br> Yifei Yang, Zouying Cao, Qiguang Chen, Libo Qin, Dongjie Yang, Hai Zhao, Zhi Chen | <img width="1002" alt="image" src="https://github.com/yangyifei729/KVSharer/raw/main/img/main_fig.jpg"> | Github <br> Paper |
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy <br> Rongzhi Zhang, Kuang Wang, Liyuan Liu, Shuohang Wang, Hao Cheng, Chao Zhang, Yelong Shen | <img width="1002" alt="image" src="figures/LoRC.png"> | Paper |
SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation <br> Aurick Qiao, Zhewei Yao, Samyam Rajbhandari, Yuxiong He | <img width="1002" alt="image" src="https://arxiv.org/html/2410.03960v1/x1.png"> | Paper |
<br>Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference <br> Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti | <img width="1002" alt="image" src="figures/DMC.png"> | Paper |
KV-Compress: Paged KV-Cache Compression with Variable Compression Rates per Attention Head <br> Isaac Rehg | <img width="1002" alt="image" src="https://arxiv.org/html/2410.00161v1/x5.png"> | Paper |
<br>Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference <br> Yuan Feng, Junlin Lv, Yukun Cao, Xike Xie, S. Kevin Zhou | <img width="1002" alt="image" src="figures/adakv.png"> | Github <br> Paper |
<br>AlignedKV: Reducing Memory Access of KV-Cache with Precision-Aligned Quantization <br> Yifan Tan, Haoze Wang, Chao Yan, Yangdong Deng | <img width="1002" alt="image" src="https://arxiv.org/html/2409.16546v1/extracted/5867591/Figure6.png"> | Github <br> Paper |
CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios <br> Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2409.10593v1/x1.png"> | Paper |
A First Look At Efficient And Secure On-Device LLM Inference Against KV Leakage <br> Huan Yang, Deyu Zhang, Yudong Zhao, Yuanchun Li, Yunxin Liu | <img width="1002" alt="image" src="https://arxiv.org/html/2409.04040v1/x3.png"> | Paper |
<br>Post-Training Sparse Attention with Double Sparsity <br> Shuo Yang, Ying Sheng, Joseph E. Gonzalez, Ion Stoica, Lianmin Zheng | <img width="302" alt="image" src="https://github.com/andy-yang-1/DoubleSparse/raw/main/assets/double-sparsity-gif-v2.gif"> | Github <br> Paper |
<br>Eigen Attention: Attention in Low-Rank Space for KV Cache Compression <br> Utkarsh Saxena, Gobinda Saha, Sakshi Choudhary, Kaushik Roy | <img width="1002" alt="image" src="https://arxiv.org/html/2408.05646v1/x1.png"> | Github <br> Paper |
Zero-Delay QKV Compression for Mitigating KV Cache and Network Bottlenecks in LLM Inference <br> Zeyu Zhang, Haiying Shen | <img width="1002" alt="image" src="https://arxiv.org/html/2408.04107v1/x15.png"> | Paper |
Finch: Prompt-guided Key-Value Cache Compression <br> Giulio Corallo, Paolo Papotti | <img width="1002" alt="image" src="https://arxiv.org/html/2408.00167v1/extracted/5763688/assets/diagram_finch.png"> | Paper |
<br>Palu: Compressing KV-Cache with Low-Rank Projection <br> Chi-Chih Chang, Wei-Cheng Lin, Chien-Yu Lin, Chong-Yan Chen, Yu-Fang Hu, Pei-Shuo Wang, Ning-Chi Huang, Luis Ceze, Kai-Chiang Wu | <img width="1002" alt="image" src="https://github.com/shadowpa0327/Palu/blob/master/img/palu_idea.png"> | Github <br> Paper |
ThinK: Thinner Key Cache by Query-Driven Pruning <br> Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo | <img width="1002" alt="image" src="https://arxiv.org/html/2407.21018v1/x1.png"> | Paper |
## Text Compression
Title & Authors | Introduction | Links |
---|---|---|
<br>:star: LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models <br> Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu | <img width="1002" alt="image" src="https://github.com/microsoft/LLMLingua/blob/main/images/LLMLingua_framework.png"> | Github <br> Paper |
<br>:star: LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression <br> Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu | <img width="1002" alt="image" src="figures/longllmlingua.png"> | Github <br> Paper |
<br>MultiTok: Variable-Length Tokenization for Efficient LLMs Adapted from LZW Compression <br> Noel Elias, Homa Esfahanizadeh, Kaan Kale, Sriram Vishwanath, Muriel Medard | <img width="1002" alt="image" src="https://arxiv.org/html/2410.21548v1/extracted/5960495/Figures/MultiTok.png"> | Github <br> Paper |
<br>Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability <br> Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung | <img width="202" alt="image" src="https://arxiv.org/html/2410.11786v1/x1.png"> | Paper |
<br>From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression <br> Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee | <img width="1002" alt="image" src="https://arxiv.org/html/2410.04139v1/extracted/5902409/Figures/fig_R2C_framework_2col_v4.png"> | Paper |
Perception Compressor: A training-free prompt compression method in long context scenarios <br> Jiwei Tang, Jin Xu, Tingwei Lu, Hai Lin, Yiming Zhao, Hai-Tao Zheng | <img width="1002" alt="image" src="https://arxiv.org/html/2409.19272v1/x1.png"> | Paper |
<br>FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression <br> Fazal Mittu, Yihuan Bu, Akshat Gupta, Ashok Devireddy, Alp Eren Ozdarendeli, Anant Singh, Gopala Anumanchipalli | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17141v1/extracted/5879840/finezip_diagram.png"> | Github <br> Paper |
<br>Parse Trees Guided LLM Prompt Compression <br> Wenhao Mao, Chengbin Hou, Tianyu Zhang, Xinyu Lin, Ke Tang, Hairong Lv | <img width="1002" alt="image" src="https://arxiv.org/html/2409.15395v1/x1.png"> | Github <br> Paper |
<br>AlphaZip: Neural Network-Enhanced Lossless Text Compression <br> Swathi Shree Narashiman, Nitin Chandrachoodan | <img width="1002" alt="image" src="https://arxiv.org/html/2409.15046v1/extracted/5873563/images/architecture_bloack_diagram.png"> | Github <br> Paper |
TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning <br> Shivam Shandilya, Menglin Xia, Supriyo Ghosh, Huiqiang Jiang, Jue Zhang, Qianhui Wu, Victor Rühle | <img width="1002" alt="image" src="https://arxiv.org/html/2409.13035v2/x1.png"> | Paper |
Efficient LLM Context Distillation <br> Rajesh Upadhayayaya, Zachary Smith, Christopher Kottmyer, Manish Raj Osti | | Paper |
<br>Enhancing and Accelerating Large Language Models via Instruction-Aware Contextual Compression <br> Haowen Hou, Fei Ma, Binwen Bai, Xinxin Zhu, Fei Yu | <img width="1002" alt="image" src="https://arxiv.org/html/2408.15491v1/extracted/5817813/arch.png"> | Github <br> Paper |
## Low-Rank Decomposition
Title & Authors | Introduction | Links |
---|---|---|
<br>Natural GaLore: Accelerating GaLore for memory-efficient LLM Training and Fine-tuning <br> Arijit Das | | Github <br> Paper |
CompAct: Compressed Activations for Memory-Efficient LLM Training <br> Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster | <img width="202" alt="image" src="https://arxiv.org/html/2410.15352v1/x1.png"> | Paper |
<br>ESPACE: Dimensionality Reduction of Activations for Model Compression <br> Charbel Sakr, Brucek Khailany | <img width="1002" alt="image" src="figures/ESPACE.png"> | Paper |
MoDeGPT: Modular Decomposition for Large Language Model Compression <br> Chi-Heng Lin, Shangqian Gao, James Seale Smith, Abhishek Patel, Shikhar Tuli, Yilin Shen, Hongxia Jin, Yen-Chang Hsu | <img width="1002" alt="image" src="https://arxiv.org/html/2408.09632v1/x2.png"> | Paper |
## Hardware / System / Serving
Title & Authors | Introduction | Links |
---|---|---|
CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration <br> Hongpeng Jin, Yanzhao Wu | <img width="1002" alt="image" src="https://arxiv.org/html/2411.02829v1/extracted/5978301/images/method_overview_sm.png"> | Paper |
Ripple: Accelerating LLM Inference on Smartphones with Correlation-Aware Neuron Management <br> Tuowei Wang, Ruwen Fan, Minxing Huang, Zixu Hao, Kun Li, Ting Cao, Youyou Lu, Yaoxue Zhang, Ju Ren | <img width="302" alt="image" src="https://arxiv.org/html/2410.19274v2/x7.png"> | Paper |
<br>ALISE: Accelerating Large Language Model Serving with Speculative Scheduling <br> Youpeng Zhao, Jun Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2410.23537v1/extracted/5967257/imgs/b1.png"> | Paper |
EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models <br> Junhao Hu, Wenrui Huang, Haoyi Wang, Weidong Wang, Tiancheng Hu, Qin Zhang, Hao Feng, Xusheng Chen, Yizhou Shan, Tao Xie | <img width="202" alt="image" src="https://arxiv.org/html/2410.15332v1/x3.png"> | Paper |
<br>SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training <br> Jinda Jia, Cong Xie, Hanlin Lu, Daoce Wang, Hao Feng, Chengming Zhang, Baixi Sun, Haibin Lin, Zhi Zhang, Xin Liu, Dingwen Tao | <img width="1002" alt="image" src="https://arxiv.org/html/2410.15526v1/x2.png"> | Paper |
FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs <br> Haoran Lin, Xianzhi Yu, Kang Zhao, Lu Hou, Zongyuan Zhan et al. | <img width="1002" alt="image" src="https://arxiv.org/html/2410.16663v1/x2.png"> | Paper |
POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference <br> Aditya K Kamath, Ramya Prabhu, Jayashree Mohan, Simon Peter, Ramachandran Ramjee, Ashish Panwar | <img width="1002" alt="image" src="https://arxiv.org/html/2410.18038v1/x5.png"> | Paper |
<br>TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices <br> Zonghang Li, Wenjiao Feng, Mohsen Guizani, Hongfang Yu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.00531v1/x4.png"> | Github <br> Paper |
<br>Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores <br> Shaobo Ma, Chao Fang, Haikuo Shao, Zhongfeng Wang | <img width="1002" alt="image" src="https://arxiv.org/html/2409.17870v1/extracted/5882022/figures/bipolar_original2.png"> | Paper |
<br>OPAL: Outlier-Preserved Microscaling Quantization Accelerator for Generative Large Language Models <br> Jahyun Koo, Dahoon Park, Sangwoo Jung, Jaeha Kung | <img width="1002" alt="image" src="https://arxiv.org/html/2409.05902v1/x5.png"> | Paper |
Accelerating Large Language Model Training with Hybrid GPU-based Compression <br> Lang Xu, Quentin Anthony, Qinghua Zhou, Nawras Alnaasan, Radha R. Gulhane, Aamir Shafi, Hari Subramoni, Dhabaleswar K. Panda | <img width="1002" alt="image" src="https://arxiv.org/html/2409.02423v1/extracted/5832005/Figures/mzhybrid-3d-rev.png"> | Paper |
LUT Tensor Core: Lookup Table Enables Efficient Low-Bit LLM Inference Acceleration <br> Zhiwen Mo, Lei Wang, Jianyu Wei, Zhichen Zeng, Shijie Cao, Lingxiao Ma, Naifeng Jing, Ting Cao, Jilong Xue, Fan Yang, Mao Yang | <img width="1002" alt="image" src="https://arxiv.org/html/2408.06003v1/x5.png"> | Paper |
Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference <br> Rohan Baskar Prabhakar, Hengrui Zhang, David Wentzlaff | <img width="1002" alt="image" src="https://arxiv.org/html/2408.07802v2/x2.png"> | Paper |
SLO-aware GPU Frequency Scaling for Energy Efficient LLM Inference Serving <br> Andreas Kosmas Kakolyris, Dimosthenis Masouros, Petros Vavaroutsos, Sotirios Xydis, Dimitrios Soudris | <img width="1002" alt="image" src="https://arxiv.org/html/2408.05235v1/x16.png"> | Paper |
Designing Efficient LLM Accelerators for Edge Devices <br> Jude Haris, Rappy Saha, Wenhao Hu, José Cano | <img width="1002" alt="image" src="https://arxiv.org/html/2408.00462v1/extracted/5768368/files/SECDA_meth.png"> | Paper |
## Tuning
Title & Authors | Introduction | Links |
---|---|---|
<br>Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation <br> Ayan Sengupta, Vaibhav Seth, Arinjay Pathak, Natraj Raman, Sriram Gopalakrishnan, Tanmoy Chakraborty | <img width="1002" alt="image" src="https://arxiv.org/html/2411.04358v2/x3.png"> | Github <br> Paper |
<br>MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning <br> Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.18035v1/extracted/5949512/em_lora_framework.png"> | Paper |
<br>RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates <br> Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi | <img width="1002" alt="image" src="https://github.com/Kowsher/RoCoFT/blob/main/figures/rocoft.png"> | Github <br> Paper |
<br>Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models <br> Kai Yao, Penglei Gao, Lichun Li, Yuan Zhao, Xiaofeng Wang, Wei Wang, Jianke Zhu | <img width="1002" alt="image" src="https://arxiv.org/html/2410.11772v1/x3.png"> | Github <br> Paper |
<br>Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning <br> Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay | <img width="1002" alt="image" src="https://arxiv.org/html/2410.08598v1/x1.png"> | Paper |
<br>QEFT: Quantization for Efficient Fine-Tuning of LLMs <br> Changhun Lee, Jun-gyu Jin, Younghyun Cho, Eunhyeok Park | <img width="1002" alt="image" src="https://arxiv.org/html/2410.08661v1/x2.png"> | Github <br> Paper |
<br>BIPEFT: Budget-Guided Iterative Search for Parameter Efficient Fine-Tuning of Large Pretrained Language Models <br> Aofei Chang, Jiaqi Wang, Han Liu, Parminder Bhatia, Cao Xiao, Ting Wang, Fenglong Ma | <img width="1002" alt="image" src="https://arxiv.org/html/2410.09079v1/x1.png"> | Github <br> Paper |
<br>SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers <br> Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan Oseledets | <img width="1002" alt="image" src="https://arxiv.org/html/2410.07383v1/x1.png"> | Github <br> Paper |
SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching <br> Tianyi Zhang, Junda Su, Oscar Wu, Zhaozhuo Xu, Anshumali Shrivastava | <img width="1002" alt="image" src="https://arxiv.org/html/2410.06364v1/x1.png"> | Paper |
<br>Bone: Block Affine Transformation as Parameter Efficient Fine-tuning Methods for Large Language Models <br> Jiale Kang | <img width="1002" alt="image" src="https://arxiv.org/html/2409.15371v1/extracted/5865415/imgs/bone-free.png"> | Github <br> Paper |
Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines <br> Lei Gao, Amir Ziashahabi, Yue Niu, Salman Avestimehr, Murali Annavaram | <img width="1002" alt="image" src="figures/P-RGE.png"> | Paper |
## Efficient Training
Title & Authors | Introduction | Links |
---|---|---|
<br>Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention <br> Xingtai Lv, Ning Ding, Kaiyan Zhang, Ermo Hua, Ganqu Cui, Bowen Zhou | <img width="1002" alt="image" src="https://arxiv.org/html/2411.02063v1/x1.png"> | Github <br> Paper |
Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs <br> Yifei Zhang, Hao Zhu, Aiwei Liu, Han Yu, Piotr Koniusz, Irwin King | <img width="1002" alt="image" src="https://arxiv.org/html/2410.19694v1/x3.png"> | Paper |
<br>COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training <br> Haocheng Xi, Han Cai, Ligeng Zhu, Yao Lu, Kurt Keutzer, Jianfei Chen, Song Han | <img width="1002" alt="image" src="https://github.com/NVlabs/COAT/blob/main/docs/figs/FP8PrecisionFlow.png"> | Github <br> Paper |
<br>BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training <br> Houming Wu, Ling Chen, Wenjie Yu | <img width="1002" alt="image" src="https://github.com/wuhouming/BitPipe/raw/main/docs/BitPipe_images/BitPipe-v.svg"> | Github <br> Paper |
## Survey or Benchmark
Title & Authors | Introduction | Links |
---|---|---|
<br>LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators <br> Krishna Teja Chitty-Venkata, Siddhisanket Raskar, Bharat Kale, Farah Ferdaus et al. | | Github <br> Paper |
<br>Prompt Compression for Large Language Models: A Survey <br> Zongqian Li, Yinhong Liu, Yixuan Su, Nigel Collier | <img width="1002" alt="image" src="https://arxiv.org/html/2410.12388v2/extracted/5933385/Figures/tree_overview.png"> | Github <br> Paper |
Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective <br> Jinhao Li, Jiaming Xu, Shan Huang, Yonghua Chen, Wen Li, Jun Liu, Yaoxiu Lian, Jiayi Pan, Li Ding, Hao Zhou, Guohao Dai | <img width="1002" alt="image" src="https://arxiv.org/html/2410.04466v1/x4.png"> | Paper |
A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms <br> Ruihao Gong, Yifu Ding, Zining Wang, Chengtao Lv, Xingyu Zheng, Jinyang Du, Haotong Qin, Jinyang Guo, Michele Magno, Xianglong Liu | <img width="1002" alt="image" src="https://arxiv.org/html/2409.16694v1/x1.png"> | Paper |
<br>Contextual Compression in Retrieval-Augmented Generation for Large Language Models: A Survey <br> Sourav Verma | <img width="1002" alt="image" src="figures/CCRAG_survey.png"> | Github <br> Paper |
Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview <br> Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao | | Paper |
Hardware Acceleration of LLMs: A comprehensive survey and comparison <br> Nikoletta Koilia, Christoforos Kachris | | Paper |
A Survey on Symbolic Knowledge Distillation of Large Language Models <br> Kamal Acharya, Alvaro Velasquez, Houbing Herbert Song | <img width="1002" alt="image" src="https://arxiv.org/html/2408.10210v1/extracted/5727556/Images/DirectDistillation.png"> | Paper |