<div align="center"> <h2><img src="assets/logo.png" height="28px"/><i>Unlocking Efficiency in Large Language Model Inference:</i><br>A Comprehensive Survey of Speculative Decoding</h2> </div> <div align="center"> <b>Heming Xia</b><sup>1</sup>, <b>Zhe Yang</b><sup>2</sup>, <b>Qingxiu Dong</b><sup>2</sup>, <b>Peiyi Wang</b><sup>2</sup>, <b>Yongqi Li</b><sup>1</sup>, <b>Tao Ge</b><sup>3</sup>, <b>Tianyu Liu</b><sup>4</sup>, <b>Wenjie Li</b><sup>1</sup>, <b>Zhifang Sui</b><sup>2</sup> </div> <div align="center"> <sup>1</sup>Department of Computing, The Hong Kong Polytechnic University </div> <div align="center"> <sup>2</sup>National Key Laboratory for Multimedia Information Processing, Peking University </div> <div align="center"> <sup>3</sup>Microsoft Research Asia <sup>4</sup>Alibaba Group </div>

This repository contains a regularly updated paper list for Speculative Decoding.
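Nearly all of the papers listed below build on the same draft-then-verify loop popularized by Leviathan et al. and Chen et al. As a toy illustration only (not any listed paper's implementation), the loop can be sketched with hypothetical `draft_model` / `target_model` callables that map a token prefix to a token-to-probability dict:

```python
import random

def speculative_step(prefix, draft_model, target_model, k, rng):
    """One draft-then-verify step of speculative sampling (toy sketch).

    draft_model / target_model: callables mapping a token prefix (list) to a
    dict {token: probability}. Returns the tokens produced by this step.
    """
    # 1) Draft: the small model proposes k tokens autoregressively.
    drafted, ctx = [], list(prefix)
    for _ in range(k):
        q = draft_model(ctx)
        tok = rng.choices(list(q), weights=q.values())[0]
        drafted.append(tok)
        ctx.append(tok)

    # 2) Verify: accept drafted token x with probability min(1, p(x)/q(x)),
    #    where p is the target distribution and q the draft distribution.
    accepted, ctx = [], list(prefix)
    for tok in drafted:
        p, q = target_model(ctx), draft_model(ctx)
        if rng.random() < min(1.0, p.get(tok, 0.0) / q[tok]):
            accepted.append(tok)
            ctx.append(tok)
        else:
            # Rejected: resample from the residual max(0, p - q), renormalized,
            # which makes the overall output distribution exactly p.
            resid = {t: max(0.0, p.get(t, 0.0) - q.get(t, 0.0)) for t in p}
            z = sum(resid.values())
            if z > 0:
                accepted.append(rng.choices(list(resid), weights=resid.values())[0])
            return accepted

    # 3) All k drafts accepted: the target's (k+1)-th token comes for free.
    p = target_model(ctx)
    accepted.append(rng.choices(list(p), weights=p.values())[0])
    return accepted
```

When draft and target distributions coincide, every proposal is accepted and one step yields `k` tokens plus the bonus token; the rejection branch is what keeps the method lossless with respect to the target model.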
## Contents

- Keywords Convention
- Abbreviation
- Conference
- Drafting Methods in Speculative Decoding
- Main Features
## Papers
### Survey
- **Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding**
  Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui. [pdf], [code], 2024.01.
- **Beyond the Speculative Game: A Survey of Speculative Execution in Large Language Models**
  Chen Zhang, Zhuorui Liu, Dawei Song. [pdf], 2024.04.
- **Closer Look at Efficient Inference Methods: A Survey of Speculative Decoding**
  Hyun Ryu, Eric Kim. [pdf], 2024.11.
### Speculative Decoding for Seq2Seq
- **Blockwise Parallel Decoding for Deep Autoregressive Models**
  Mitchell Stern, Noam Shazeer, Jakob Uszkoreit. [pdf], 2018.11.
- **Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation**
  Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, Zhifang Sui. [pdf], [code], 2022.03.
- **Speculative Decoding with Big Little Decoder**
  Sehoon Kim, Karttikeya Mangalam, Suhong Moon, John Canny, Jitendra Malik, Michael W. Mahoney, Amir Gholami, Kurt Keutzer. [pdf], [code], 2023.02.
- **Accelerating Transformer Inference for Translation via Parallel Decoding**
  Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, Emanuele Rodolà. [pdf], 2023.05.
- **SPEED: Speculative Pipelined Execution for Efficient Decoding**
  Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Hasan Genc, Kurt Keutzer, Amir Gholami, Sophia Shao. [pdf], 2023.10.
- **Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding**
  Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun. [pdf], [code], 2023.10.
### Speculative Decoding for LLMs
- **Fast Inference from Transformers via Speculative Decoding**
  Yaniv Leviathan, Matan Kalman, Yossi Matias. [pdf], [code], 2022.11.
- **Accelerating Large Language Model Decoding with Speculative Sampling**
  Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper. [pdf], [code], 2023.02.
- **Inference with Reference: Lossless Acceleration of Large Language Models**
  Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, Furu Wei. [pdf], 2023.04.
- **SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification**
  Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, Chunan Shi, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia. [pdf], [code], 2023.05.
- **Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding**
  Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee. [pdf], 2023.08.
- **Accelerating LLM Inference with Staged Speculative Decoding**
  Benjamin Spector, Chris Re. [pdf], 2023.08.
- **SpecTr: Fast Speculative Decoding via Optimal Transport**
  Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, Felix Yu, Michael Riley, Sanjiv Kumar. [pdf], 2023.08.
- **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding**
  Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, Sharad Mehrotra. [pdf], [code], 2023.09.
- **Online Speculative Decoding**
  Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, Hao Zhang. [pdf], 2023.10.
- **DistillSpec: Improving Speculative Decoding via Knowledge Distillation**
  Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, Rishabh Agarwal. [pdf], 2023.10.
- **REST: Retrieval-Based Speculative Decoding**
  Zhenyu He, Zexuan Zhong, Tianle Cai, Jason D Lee, Di He. [pdf], [code], 2023.11.
- **Speculative Contrastive Decoding**
  Hongyi Yuan, Keming Lu, Fei Huang, Zheng Yuan, Chang Zhou. [pdf], 2023.11.
- **PaSS: Parallel Speculative Sampling**
  Giovanni Monea, Armand Joulin, Edouard Grave. [pdf], 2023.11.
- **Cascade Speculative Drafting for Even Faster LLM Inference**
  Ziyi Chen, Xiaocong Yang, Jiacheng Lin, Chenkai Sun, Jie Huang, Kevin Chen-Chuan Chang. [pdf], [code], 2023.12.
- **SLiM: Speculative Decoding with Hypothesis Reduction**
  Hongxia Jin, Chi-Heng Lin, Shikhar Tuli, James Seale Smith, Yen-Chang Hsu, Yilin Shen. [pdf], 2023.12.
- **Multi-Candidate Speculative Decoding**
  Sen Yang, Shujian Huang, Xinyu Dai, Jiajun Chen. [pdf], [code], 2024.01.
- **Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads**
  Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao. [pdf], [code], 2024.01.
- **BiTA: Bi-Directional Tuning for Lossless Acceleration in Large Language Models**
  Feng Lin, Hanling Yi, Hongbin Li, Yifan Yang, Xiaotian Yu, Guangming Lu, Rong Xiao. [pdf], [code], 2024.01.
- **EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty**
  Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. [pdf], [code], 2024.01.
- **GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding**
  Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You. [pdf], [code], 2024.02.
- **Break the Sequential Dependency of LLM Inference Using Lookahead Decoding**
  Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang. [pdf], [code], 2024.02.
- **Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding**
  Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, William Brandon. [pdf], [code], 2024.02.
- **Speculative Streaming: Fast LLM Inference without Auxiliary Models**
  Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, Mahyar Najibi. [pdf], 2024.02.
- **Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding**
  Hanling Yi, Feng Lin, Hongbin Li, Peiyang Ning, Xiaotian Yu, Rong Xiao. [pdf], 2024.02.
- **Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding**
  Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, Beidi Chen. [pdf], [code], 2024.02.
- **ProPD: Dynamic Token Tree Pruning and Generation for LLM Parallel Decoding**
  Shuzhang Zhong, Zebin Yang, Meng Li, Ruihao Gong, Runsheng Wang, Ru Huang. [pdf], 2024.02.
- **Ouroboros: Speculative Decoding with Large Model Enhanced Drafting**
  Weilin Zhao, Yuxiang Huang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Maosong Sun. [pdf], [code], 2024.02.
- **Recursive Speculative Decoding: Accelerating LLM Inference via Sampling Without Replacement**
  Wonseok Jeon, Mukul Gagrani, Raghavv Goel, Junyoung Park, Mingu Lee, Christopher Lott. [pdf], 2024.02.
- **Chimera: A Lossless Decoding Method for Accelerating Large Language Models Inference by Fusing all Tokens**
  Ziqian Zeng, Jiahong Yu, Qianshi Pang, Zihao Wang, Huiping Zhuang, Cen Chen. [pdf], 2024.02.
- **Speculative Decoding via Early-exiting for Faster LLM Inference with Thompson Sampling Control Mechanism**
  Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai. [pdf], 2024.02.
- **Specuna: A Speculative Vicuna with Shallow Layer Reuse**
  Anonymous ACL submission. [pdf], 2024.02.
- **Minions: Accelerating Large Language Model Inference with Adaptive and Collective Speculative Decoding**
  Siqi Wang, Hailong Yang, Xuezhu Wang, Tongxuan Liu, Pengbo Wang, Xuning Liang, Kejie Ma, Tianyu Feng, Xin You, Yongjun Bao, Yi Liu, Zhongzhi Luan, Depei Qian. [pdf], 2024.02.
- **CLLMs: Consistency Large Language Models**
  Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang. [pdf], [code], [blog], 2024.03.
- **Recurrent Drafter for Fast Speculative Decoding in Large Language Models**
  Aonan Zhang, Chong Wang, Yi Wang, Xuanyu Zhang, Yunfei Cheng. [pdf], 2024.03.
- **Block Verification Accelerates Speculative Decoding**
  Ziteng Sun, Uri Mendlovic, Yaniv Leviathan, Asaf Aharoni, Ahmad Beirami, Jae Hun Ro, Ananda Theertha Suresh. [pdf], 2024.03.
- **SDSAT: Accelerating LLM Inference through Speculative Decoding with Semantic Adaptive Tokens**
  Chengbo Liu, Yong Zhu. [pdf], 2024.03.
- **Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding**
  Jie Ou, Yueming Chen, Wenhong Tian. [pdf], 2024.04.
- **Exploring and Improving Drafts in Blockwise Parallel Decoding**
  Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, Adrian Benton. [pdf], 2024.04.
- **Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration**
  Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao. [pdf], 2024.04.
- **BASS: Batched Attention-optimized Speculative Sampling**
  Haifeng Qian, Sujan Kumar Gonugondla, Sungsoo Ha, Mingyue Shang, Sanjay Krishna Gouda, Ramesh Nallapati, Sudipta Sengupta, Xiaofei Ma, Anoop Deoras. [pdf], 2024.04.
- **LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding**
  Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu. [pdf], 2024.04.
- **Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting**
  Fangcheng Liu, Yehui Tang, Zhenhua Liu, Yunsheng Ni, Kai Han, Yunhe Wang. [pdf], [code], 2024.04.
- **Accelerating Production LLMs with Combined Token/Embedding Speculators**
  Davis Wertheimer, Joshua Rosenkranz, Thomas Parnell, Sahil Suneja, Pavithra Ranganathan, Raghu Ganti, Mudhakar Srivatsa. [pdf], 2024.04.
- **Better & Faster Large Language Models via Multi-token Prediction**
  Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, Gabriel Synnaeve. [pdf], 2024.04.
- **Clover: Regressive Lightweight Speculative Decoding with Sequential Knowledge**
  Bin Xiao, Chunan Shi, Xiaonan Nie, Fan Yang, Xiangwei Deng, Lei Su, Weipeng Chen, Bin Cui. [pdf], 2024.05.
- **Accelerating Speculative Decoding using Dynamic Speculation Length**
  Jonathan Mamou, Oren Pereg, Daniel Korat, Moshe Berchansky, Nadav Timor, Moshe Wasserblat, Roy Schwartz. [pdf], 2024.05.
- **EMS-SD: Efficient Multi-sample Speculative Decoding for Accelerating Large Language Models**
  Yunsheng Ni, Chuanjian Liu, Yehui Tang, Kai Han, Yunhe Wang. [pdf], [code], 2024.05.
- **Nearest Neighbor Speculative Decoding for LLM Generation and Attribution**
  Minghan Li, Xilun Chen, Ari Holtzman, Beidi Chen, Jimmy Lin, Wen-tau Yih, Xi Victoria Lin. [pdf], 2024.05.
- **Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference**
  Hao (Mark) Chen, Wayne Luk, Ka Fai Cedric Yiu, Rui Li, Konstantin Mishchenko, Stylianos I. Venieris, Hongxiang Fan. [pdf], [code], 2024.05.
- **Faster Cascades via Speculative Decoding**
  Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat, Seungyeon Kim, Neha Gupta, Aditya Krishna Menon, Sanjiv Kumar. [pdf], 2024.05.
- **S3D: A Simple and Cost-Effective Self-Speculative Decoding Scheme for Low-Memory GPUs**
  Wei Zhong, Manasa Bharadwaj. [pdf], 2024.05.
- **SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths**
  Kaixuan Huang, Xudong Guo, Mengdi Wang. [pdf], 2024.05.
- **Distributed Speculative Inference of Large Language Models**
  Nadav Timor, Jonathan Mamou, Daniel Korat, Moshe Berchansky, Oren Pereg, Moshe Wasserblat, Tomer Galanti, Michal Gordon. [pdf], 2024.05.
- **Accelerated Speculative Sampling Based on Tree Monte Carlo**
  Zhengmian Hu, Heng Huang. [pdf], 2024.05.
- **SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices**
  Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, Max Ryabinin. [pdf], 2024.06.
- **Amphista: Accelerate LLM Inference with Bi-directional Multiple Drafting Heads in a Non-autoregressive Style**
  Zeping Li, Xinlong Yang, Ziheng Gao, Ji Liu, Zhuang Liu, Dong Li, Jinzhang Peng, Lu Tian, Emad Barsoum. [pdf], 2024.06.
- **Optimizing Speculative Decoding for Serving Large Language Models Using Goodput**
  Xiaoxuan Liu, Cade Daniel, Langxiang Hu, Woosuk Kwon, Zhuohan Li, Xiangxi Mo, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang. [pdf], 2024.06.
- **EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees**
  Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. [pdf], 2024.06.
- **Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training**
  Yixuan Wang, Xianzhen Luo, Fuxuan Wei, Yijun Liu, Qingfu Zhu, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che. [pdf], 2024.06.
- **OPT-Tree: Speculative Decoding with Adaptive Draft Tree Structure**
  Jikai Wang, Yi Su, Juntao Li, Qinrong Xia, Zi Ye, Xinyu Duan, Zhefeng Wang, Min Zhang. [pdf], 2024.06.
- **Cerberus: Efficient Inference with Adaptive Parallel Decoding and Sequential Knowledge Enhancement**
  Yuxuan Liu, Wenyuan Li, Laizhong Cui, Hailiang Yang. [pdf], 2024.06.
- **SpecHub: Provable Acceleration to Multi-Draft Speculative Decoding**
  Ryan Sun, Tianyi Zhou, Xun Chen, Lichao Sun. [pdf], 2024.06.
- **S2D: Sorted Speculative Decoding For More Efficient Deployment of Nested Large Language Models**
  Parsa Kavehzadeh, Mohammadreza Pourreza, Mojtaba Valipour, Tinashu Zhu, Haoli Bai, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh. [pdf], 2024.07.
- **Multi-Token Joint Speculative Decoding for Accelerating Large Language Model Inference**
  Zongyue Qin, Ziniu Hu, Zifan He, Neha Prakriya, Jason Cong, Yizhou Sun. [pdf], 2024.07.
- **PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation**
  Branden Butler, Sixing Yu, Arya Mazaheri, Ali Jannesari. [pdf], 2024.07.
- **Adaptive Draft-Verification for Efficient Large Language Model Decoding**
  Xukun Liu, Bowen Lei, Ruqi Zhang, Dongkuan Xu. [pdf], 2024.07.
- **Graph-Structured Speculative Decoding**
  Zhuocheng Gong, Jiahao Liu, Ziyue Wang, Pengfei Wu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan. [pdf], 2024.07.
- **Inference acceleration for large language models using "stairs" assisted greedy generation**
  Domas Grigaliūnas, Mantas Lukoševičius. [pdf], 2024.07.
- **Clover-2: Accurate Inference for Regressive Lightweight Speculative Decoding**
  Bin Xiao, Lujun Gui, Lei Su, Weipeng Chen. [pdf], [code], 2024.08.
- **CREST: Effectively Compacting a Datastore For Retrieval-Based Speculative Decoding**
  Sophia Ho, Jinsol Park, Patrick Wang. [pdf], 2024.08.
- **Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion**
  Jacob K Christopher, Brian R Bartoldson, Bhavya Kailkhura, Ferdinando Fioretto. [pdf], 2024.08.
- **KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning**
  Kaiqi Zhang, Jing Zhao, Rui Chen. [pdf], 2024.08.
- **Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling**
  Xianzhen Luo, Yixuan Wang, Qingfu Zhu, Zhiming Zhang, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che. [pdf], 2024.08.
- **Parallel Speculative Decoding with Adaptive Draft Length**
  Tianyu Liu, Yun Li, Qitan Lv, Kai Liu, Jianchen Zhu, Winston Hu. [pdf], [code], [blog], 2024.08.
- **Boosting Lossless Speculative Decoding via Feature Sampling and Partial Alignment Distillation**
  Lujun Gui, Bin Xiao, Lei Su, Weipeng Chen. [pdf], 2024.08.
- **Harmonized Speculative Sampling**
  Lefan Zhang, Xiaodan Wang, Yanhua Huang, Ruiwen Xu. [pdf], 2024.08.
- **Dynamic Depth Decoding: Faster Speculative Decoding for LLMs**
  Oscar Brown, Zhengjie Wang, Andrea Do, Nikhil Mathew, Cheng Yu. [pdf], 2024.08.
- **The Mamba in the Llama: Distilling and Accelerating Hybrid Models**
  Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao. [pdf], [code], 2024.08.
- **Improving Multi-candidate Speculative Decoding**
  Xiaofan Lu, Yixiao Zeng, Feiyang Ma, Zixu Yu, Marco Levorato. [pdf], 2024.10.
- **Draft on the Fly: Adaptive Self-Speculative Decoding using Cosine Similarity**
  Michael R. Metel, Peng Lu, Boxing Chen, Mehdi Rezagholizadeh, Ivan Kobyzev. [pdf], 2024.10.
- **Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference**
  Zongyue Qin, Zifan He, Neha Prakriya, Jason Cong, Yizhou Sun. [pdf], 2024.10.
- **Mixture of Attentions For Speculative Decoding**
  Matthieu Zimmer, Milan Gritta, Gerasimos Lampouras, Haitham Bou Ammar, Jun Wang. [pdf], 2024.10.
- **SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration**
  Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, Wenjie Li. [pdf], [code], 2024.10.
- **ParallelSpec: Parallel Drafter for Efficient Speculative Decoding**
  Zilin Xiao, Hongming Zhang, Tao Ge, Siru Ouyang, Vicente Ordonez, Dong Yu. [pdf], 2024.10.
- **Polybasic Speculative Decoding Under a Theoretical Perspective**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Towards Optimal Multi-draft Speculative Decoding**
  Anonymous ICLR submission. [pdf], 2024.10.
- **A Unified Framework for Speculative Decoding with Multiple Drafters as a Bandit**
  Anonymous ICLR submission. [pdf], 2024.10.
- **DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure**
  Yunfan Xiong, Ruoyu Zhang, Yanzeng Li, Tianhao Wu, Lei Zou. [pdf], 2024.10.
- **Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment**
  Anonymous ICLR submission. [pdf], 2024.10.
- **QSpec: Speculative Decoding with Complementary Quantization Schemes**
  Juntao Zhao, Wenhao Lu, Sheng Wang, Lingpeng Kong, Chuan Wu. [pdf], 2024.10.
- **Multi-Draft Speculative Sampling: Canonical Architectures and Theoretical Limits**
  Ashish Khisti, M. Reza Ebrahimi, Hassan Dbouk, Arash Behboodi, Roland Memisevic, Christos Louizos. [pdf], 2024.10.
- **CASD: Enhancing Generation Accuracy via Context-Aware Speculative Decoding**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Optimized Multi-Token Joint Decoding With Auxiliary Model for LLM Inference**
  Anonymous ICLR submission. [pdf], 2024.10.
- **A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Semi-autoregressive Decoding for Efficient LLM Inference**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Fast and Accurate Language Model Decoding via Parallel Token Processing**
  Anonymous ICLR submission. [pdf], 2024.10.
- **AMUSD: Asynchronous Multi-Device Speculative Decoding for LLM Acceleration**
  Bradley McDanel. [pdf], [code], 2024.10.
- **AdaEDL: Early Draft Stopping for Speculative Decoding of Large Language Models via an Entropy-based Lower Bound on Token Acceptance Probability**
  Sudhanshu Agrawal, Wonseok Jeon, Mingu Lee. [pdf], 2024.10.
- **FIRP: Faster LLM inference via future intermediate representation prediction**
  Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao. [pdf], 2024.10.
- **The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation**
  Lawrence Stewart, Matthew Trager, Sujan Kumar Gonugondla, Stefano Soatto. [pdf], 2024.11.
- **SuffixDecoding: A Model-Free Approach to Speeding Up Large Language Model Inference**
  Gabriele Oliaro, Zhihao Jia, Daniel Campos, Aurick Qiao. [pdf], 2024.11.
- **SSSD: Simply-Scalable Speculative Decoding**
  Michele Marzollo, Jiawei Zhuang, Niklas Roemer, Lorenz K. Müller, Lukas Cavigelli. [pdf], 2024.11.
- **FastDraft: How to Train Your Draft**
  Ofir Zafrir, Igor Margulis, Dorin Shteyman, Guy Boudoukh. [pdf], 2024.11.
- **SAM Decoding: Speculative Decoding via Suffix Automaton**
  Yuxuan Hu, Ke Wang, Jing Zhang, Cuiping Li, Hong Chen. [pdf], 2024.11.
- **Draft Model Knows When to Stop: A Self-Verification Length Policy for Speculative Decoding**
  Ziyin Zhang, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Rui Wang, Zhaopeng Tu. [pdf], 2024.11.
- **PLD+: Accelerating LLM inference by leveraging Language Model Artifacts**
  Shwetha Somasundaram, Anirudh Phukan, Apoorv Saxena. [pdf], 2024.12.
- **Speculative Decoding with CTC-based Draft Model for LLM Inference Acceleration**
  Zhuofan Wen, Shangtong Gui, Yang Feng. [pdf], 2024.12.
### Multimodal Speculative Decoding
- **On Speculative Decoding for Multimodal Large Language Models**
  Mukul Gagrani, Raghavv Goel, Wonseok Jeon, Junyoung Park, Mingu Lee, Christopher Lott. [pdf], 2024.04.
- **LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding**
  Doohyuk Jang, Sihwan Park, June Yong Yang, Yeonsung Jung, Jihun Yun, Souvik Kundu, Sung-Yub Kim, Eunho Yang. [pdf], 2024.10.
- **Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding**
  Yao Teng, Han Shi, Xian Liu, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, Xihui Liu. [pdf], 2024.10.
- **In-batch Ensemble Drafting: Toward Fast and Robust Speculative Decoding for Multimodal Language Models**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Continuous Speculative Decoding for Autoregressive Image Generation**
  Zili Wang, Robert Zhang, Kun Ding, Qi Yang, Fei Li, Shiming Xiang. [pdf], 2024.11.
### Long-Context Speculative Decoding
- **TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding**
  Hanshi Sun, Zhuoming Chen, Xinyu Yang, Yuandong Tian, Beidi Chen. [pdf], [code], 2024.04.
- **MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding**
  Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, Beidi Chen. [pdf], [code], 2024.08.
### Alignment
- **Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs**
  Raghavv Goel, Mukul Gagrani, Wonseok Jeon, Junyoung Park, Mingu Lee, Christopher Lott. [pdf], 2024.02.
### Benchmarks
- **Spec-Bench: A Comprehensive Benchmark for Speculative Decoding**
  Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui. [pdf], [code], [blog], 2024.02.
### Applications
- **Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding**
  Xin Sun, Tao Ge, Furu Wei, Houfeng Wang. [pdf], [code], 2021.07.
- **LLMCad: Fast and Scalable On-device Large Language Model Inference**
  Daliang Xu, Wangsong Yin, Xin Jin, Ying Zhang, Shiyun Wei, Mengwei Xu, Xuanzhe Liu. [pdf], 2023.09.
- **Accelerating Retrieval-Augmented Language Model Serving with Speculation**
  Zhihao Zhang, Alan Zhu, Lijie Yang, Yihua Xu, Lanting Li, Phitchaya Mangpo Phothilimthana, Zhihao Jia. [pdf], 2023.10.
- **Lookahead: An Inference Acceleration Framework for Large Language Model with Lossless Generation Accuracy**
  Yao Zhao, Zhitian Xie, Chenyi Zhuang, Jinjie Gu. [pdf], [code], 2023.12.
- **A Simple Framework to Accelerate Multilingual Language Model for Monolingual Text Generation**
  Jimin Hong, Gibbeum Lee, Jaewoong Cho. [pdf], 2024.01.
- **Accelerating Greedy Coordinate Gradient via Probe Sampling**
  Yiran Zhao, Wenyue Zheng, Tianle Cai, Xuan Long Do, Kenji Kawaguchi, Anirudh Goyal, Michael Shieh. [pdf], 2024.03.
- **Optimized Speculative Sampling for GPU Hardware Accelerators**
  Dominik Wagner, Seanie Lee, Ilja Baumann, Philipp Seeberger, Korbinian Riedhammer, Tobias Bocklet. [pdf], 2024.06.
- **Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters**
  Euiin Yi, Taehyeon Kim, Hongseok Jeung, Du-Seong Chang, Se-Young Yun. [pdf], 2024.06.
- **SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding**
  Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, Deyu Zhou. [pdf], 2024.06.
- **Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting**
  Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, Chen-Yu Lee, Tomas Pfister. [pdf], 2024.07.
- **A Decoding Acceleration Framework for Industrial Deployable LLM-based Recommender Systems**
  Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, Weiwen Liu, Ruiming Tang, Weinan Zhang, Yong Yu. [pdf], 2024.08.
- **Faster Speech-LLaMA Inference with Multi-token Prediction**
  Desh Raj, Gil Keren, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli. [pdf], 2024.09.
- **Interactive Speculative Planning: Enhance Agent Efficiency through Co-design of System and User Interface**
  Wenyue Hua, Mengting Wan, Shashank Vadrevu, Ryan Nadel, Yongfeng Zhang, Chi Wang. [pdf], 2024.10.
- **Speculative Coreset Selection for Task-Specific Fine-tuning**
  Xiaoyu Zhang, Juan Zhai, Shiqing Ma, Chao Shen, Tianlin Li, Weipeng Jiang, Yang Liu. [pdf], 2024.10.
- **Efficient Inference for Large Language Model-based Generative Recommendation**
  Xinyu Lin, Chaoqun Yang, Wenjie Wang, Yongqi Li, Cunxiao Du, Fuli Feng, See-Kiong Ng, Tat-Seng Chua. [pdf], 2024.10.
- **Watermarking using Semantic-aware Speculative Sampling: from Theory to Practice**
  Anonymous ICLR submission. [pdf], 2024.10.
- **Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling**
  Wenda Xu, Rujun Han, Zifeng Wang, Long T. Le, Dhruv Madeka, Lei Li, William Yang Wang, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister. [pdf], 2024.10.
- **Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level**
  Xinyi Zeng, Yuying Shang, Yutao Zhu, Jiawei Chen, Yu Tian. [pdf], 2024.10.
- **TreeBoN: Enhancing Inference-Time Alignment with Speculative Tree-Search and Best-of-N Sampling**
  Jiahao Qiu, Yifu Lu, Yifan Zeng, Jiacheng Guo, Jiayi Geng, Huazheng Wang, Kaixuan Huang, Yue Wu, Mengdi Wang. [pdf], 2024.10.
- **Fast and High-Quality Auto-Regressive Speech Synthesis via Speculative Decoding**
  Bohan Li, Hankun Wang, Situo Zhang, Yiwei Guo, Kai Yu. [pdf], 2024.10.
- **Constrained Decoding with Speculative Lookaheads**
  Nishanth Nakshatri, Shamik Roy, Rajarshi Das, Suthee Chaidaroon, Leonid Boytsov, Rashmi Gangadharaiah. [pdf], 2024.12.
### Analysis
- **The Synergy of Speculative Decoding and Batching in Serving Large Language Models**
  Qidong Su, Christina Giannoula, Gennady Pekhimenko. [pdf], 2023.10.
- **Decoding Speculative Decoding**
  Minghao Yan, Saurabh Agarwal, Shivaram Venkataraman. [pdf], [code], 2024.02.
- **How Speculative Can Speculative Decoding Be?**
  Zhuorui Liu, Chen Zhang, Dawei Song. [pdf], [code], 2024.05.
- **Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding**
  Kaiyan Zhang, Jianyu Wang, Ning Ding, Biqing Qi, Ermo Hua, Xingtai Lv, Bowen Zhou. [pdf], [code], 2024.06.
- **Temperature-Centric Investigation of Speculative Decoding with Knowledge Distillation**
  Siru Ouyang, Shuohang Wang, Minhao Jiang, Ming Zhong, Donghan Yu, Jiawei Han, Yelong Shen. [pdf], [code], 2024.06.
- **A Theoretical Perspective for Speculative Decoding Algorithm**
  Ming Yin, Minshuo Chen, Kaixuan Huang, Mengdi Wang. [pdf], 2024.11.
- **Privacy Risks of Speculative Decoding in Large Language Models**
  Jiankun Wei, Abdulrahman Abdulrazzag, Tianchen Zhang, Adel Muursepp, Gururaj Saileshwar. [pdf], 2024.11.
### Other Techniques
- **DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference**
  Jinwei Yao, Kaiqi Chen, Kexun Zhang, Jiaxuan You, Binhang Yuan, Zeke Wang, Tao Lin. [pdf], 2024.10.
## Blog & Project
- **Assisted Generation: a new direction toward low-latency text generation.** Hugging Face. 2023.05. [Blog] [Code]
- **Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads.** Princeton, UIUC. 2023.09. [Blog] [Code]
- **An Optimal Lossy Variant of Speculative Decoding.** Unsupervised Thoughts (blog). 2023.09. [Blog] [Code]
- **Break the Sequential Dependency of LLM Inference Using Lookahead Decoding.** LMSys. 2023.11. [Blog] [Code]
- **Accelerating Generative AI with PyTorch II: GPT, Fast.** PyTorch. 2023.11. [Blog] [Code]
- **Prompt Lookup Decoding.** Apoorv Saxena. 2023.11. [Code] [Colab]
- **REST: Retrieval-Based Speculative Decoding.** Peking University, Princeton University. 2023.11. [Blog] [Code]
- **EAGLE: Lossless Acceleration of LLM Decoding by Feature Extrapolation.** Vector Institute, University of Waterloo, Peking University. 2023.12. [Blog] [Code]
- **SEQUOIA: Serving exact Llama2-70B on an RTX4090 with half-second per token latency.** Carnegie Mellon University, Together AI, Yandex, Meta AI. 2024.02. [Blog] [Code]
- **The Mamba in the Llama: Distilling and Accelerating Hybrid Models.** Together AI. 2024.09. [Blog] [Code]
- **How Speculative Decoding Boosts vLLM Performance by up to 2.8x.** vLLM Team. 2024.10. [Blog]
## Contributors
<a href="https://github.com/hemingkx/SpeculativeDecodingPapers/graphs/contributors"> <img src="https://contrib.rocks/image?repo=hemingkx/SpeculativeDecodingPapers" /> </a>

## Contributing to this paper list
- We may have missed important works in this field. Please feel free to contribute and promote your awesome work, or other related works, here! Thanks for your efforts in advance.
## Citation
If you find the resources in this repository useful, please cite our paper:
```bibtex
@inproceedings{xia-etal-2024-unlocking,
    title = "Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding",
    author = "Xia, Heming and Yang, Zhe and Dong, Qingxiu and Wang, Peiyi and Li, Yongqi and Ge, Tao and Liu, Tianyu and Li, Wenjie and Sui, Zhifang",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and virtual meeting",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-acl.456",
    doi = "10.18653/v1/2024.findings-acl.456",
    pages = "7655--7671",
}
```