Efficient Large Language Models: A Survey

Efficient Large Language Models: A Survey [arXiv] (Version 1: 12/06/2023; Version 2: 12/23/2023; Version 3: 01/31/2024; Version 4: 05/23/2024, camera-ready version for Transactions on Machine Learning Research)

Zhongwei Wan<sup>1</sup>, Xin Wang<sup>1</sup>, Che Liu<sup>2</sup>, Samiul Alam<sup>1</sup>, Yu Zheng<sup>3</sup>, Jiachen Liu<sup>4</sup>, Zhongnan Qu<sup>5</sup>, Shen Yan<sup>6</sup>, Yi Zhu<sup>7</sup>, Quanlu Zhang<sup>8</sup>, Mosharaf Chowdhury<sup>4</sup>, Mi Zhang<sup>1</sup>

<sup>1</sup>The Ohio State University, <sup>2</sup>Imperial College London, <sup>3</sup>Michigan State University, <sup>4</sup>University of Michigan, <sup>5</sup>Amazon AWS AI, <sup>6</sup>Google Research, <sup>7</sup>Boson AI, <sup>8</sup>Microsoft Research Asia

⚡ News: Our survey has been officially accepted by Transactions on Machine Learning Research (TMLR), May 2024. The camera-ready version is available at [OpenReview].

```bibtex
@article{wan2023efficient,
  title={Efficient large language models: A survey},
  author={Wan, Zhongwei and Wang, Xin and Liu, Che and Alam, Samiul and Zheng, Yu and others},
  journal={arXiv preprint arXiv:2312.03863},
  year={2023}
}
```

❤️ Community Support

This repository is maintained by <ins>tuidan</ins> (wang.15980@osu.edu), <ins>SUSTechBruce</ins> (wan.512@osu.edu), <ins>samiul272</ins> (alam.140@osu.edu), and <ins>mi-zhang</ins> (mizhang.1@osu.edu). We welcome feedback, suggestions, and contributions that help improve this survey and repository and make them valuable resources for the entire community.

We will actively maintain this repository by incorporating new research as it emerges. If you have any suggestions regarding our taxonomy, notice a paper we have missed, or would like to update the venue of an arXiv preprint that has since been accepted, feel free to send us an email or submit a pull request using the following markdown format.

Paper Title, <ins>Conference/Journal/Preprint, Year</ins>  [[pdf](link)] [[other resources](link)].

📌 What is This Survey About?

Large Language Models (LLMs) have demonstrated remarkable capabilities in many important tasks and have the potential to make a substantial impact on our society. Such capabilities, however, come with considerable resource demands, highlighting the strong need to develop effective techniques for addressing the efficiency challenges posed by LLMs. In this survey, we provide a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient-LLMs topics from the <b>model-centric</b>, <b>data-centric</b>, and <b>framework-centric</b> perspectives, respectively. We hope our survey and this GitHub repository can serve as valuable resources that help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field.

🤔 Why Are Efficient LLMs Needed?

![Figure 1. Model performance versus training GPU hours (left) and inference throughput (right) for representative LLMs](img/image.jpg)

Although LLMs are leading the next wave of the AI revolution, their remarkable capabilities come at the cost of substantial resource demands. Figure 1 (left) illustrates the relationship between model performance and training cost in GPU hours for the LLaMA series, where the size of each circle is proportional to the number of model parameters. As shown, although larger models achieve better performance, the GPU hours required to train them grow exponentially as model size scales up. In addition to training, inference also contributes significantly to the operational cost of LLMs. Figure 1 (right) depicts the relationship between model performance and inference throughput. Similarly, scaling up the model size enables better performance but comes at the cost of lower inference throughput (higher inference latency), making it challenging to extend these models to a broader customer base and diverse applications in a cost-effective way. The high resource demands of LLMs highlight the strong need to develop techniques for enhancing their efficiency. As shown in Figure 1 (right), compared to LLaMA-1-33B, Mistral-7B, which uses grouped-query attention and sliding-window attention to speed up inference, achieves comparable performance with much higher throughput. This comparison highlights the feasibility and significance of designing efficiency techniques for LLMs.
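
To make the inference-efficiency point concrete, the sketch below illustrates grouped-query attention (GQA), one of the techniques mentioned above. This is a minimal, illustrative PyTorch snippet written for this repository (not Mistral's actual implementation): by letting several query heads share one key/value head, GQA shrinks the KV cache that must be kept in GPU memory during decoding, which is a major driver of inference throughput.

```python
# Minimal GQA sketch (illustrative only, not Mistral's code): each key/value
# head is shared by a group of query heads, so the KV cache is smaller by a
# factor of num_q_heads / num_kv_heads compared to standard multi-head attention.
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    # q: (batch, num_q_heads, seq_len, head_dim)
    # k, v: (batch, num_kv_heads, seq_len, head_dim); num_kv_heads divides num_q_heads
    group_size = q.shape[1] // k.shape[1]
    # Expand each KV head so that its whole group of query heads can attend to it.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

# Example: 32 query heads sharing 8 KV heads -> the KV cache is 4x smaller.
q = torch.randn(1, 32, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 32, 16, 64])
```

In a real serving stack, only the smaller set of KV heads is cached per layer, which is where the memory and memory-bandwidth savings during decoding come from.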

📖 Table of Contents

🤖 Model-Centric Methods

Model Compression

Quantization

Post-Training Quantization
Weight-Only Quantization
Weight-Activation Co-Quantization
Evaluation of Post-Training Quantization
Quantization-Aware Training

Parameter Pruning

Structured Pruning
Unstructured Pruning

Low-Rank Approximation

Knowledge Distillation

White-Box KD
Black-Box KD
Parameter-Sharing

Efficient Pre-Training

Mixed Precision Training

Scaling Models

Initialization Techniques

Training Optimizers

Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning

Adapter-based Tuning
Low-Rank Adaptation
Prefix Tuning
Prompt Tuning

Memory-Efficient Fine-Tuning

MoE-Efficient-Supervised-Fine-Tuning

Efficient Inference

Parallel Decoding

Speculative Decoding

KV-Cache Optimization

Efficient Architecture

Efficient Attention

Sharing-based Attention
Feature Information Reduction
Kernelization or Low-Rank
Fixed Pattern Strategies
Learnable Pattern Strategies

Mixture of Experts

MoE-based LLMs
Algorithm-Level MoE Optimization

Long Context LLMs

Extrapolation and Interpolation
Recurrent Structure
Segmentation and Sliding Window
Memory-Retrieval Augmentation

Transformer Alternative Architecture

State Space Models
Other Sequential Models

🔢 Data-Centric Methods

Data Selection

Data Selection for Efficient Pre-Training

Data Selection for Efficient Fine-Tuning

Prompt Engineering

Few-Shot Prompting

Demonstration Organization
Demonstration Selection
Demonstration Ordering
Template Formatting
Instruction Generation
Multi-Step Reasoning
Parallel Generation

Prompt Compression

Prompt Generation

πŸ§‘β€πŸ’» System-Level Efficiency Optimization and LLM Frameworks

System-Level Efficiency Optimization

System-Level Pre-Training Efficiency Optimization

System-Level Serving Efficiency Optimization

Serving System Design
Serving Performance Optimization

Algorithm-Hardware Co-Design

LLM Frameworks

| Framework | Efficient Training | Efficient Inference | Efficient Fine-Tuning |
| :--- | :---: | :---: | :---: |
| DeepSpeed [Code] | ✅ | ✅ | ✅ |
| Megatron [Code] | ✅ | ✅ | ✅ |
| ColossalAI [Code] | ✅ | ✅ | ✅ |
| Nanotron [Code] | ✅ | ✅ | ✅ |
| MegaBlocks [Code] | ✅ | ✅ | ✅ |
| FairScale [Code] | ✅ | ✅ | ✅ |
| Pax [Code] | ✅ | ✅ | ✅ |
| Composer [Code] | ✅ | ✅ | ✅ |
| OpenLLM [Code] | ❌ | ✅ | ✅ |
| LLM-Foundry [Code] | ❌ | ✅ | ✅ |
| vLLM [Code] | ❌ | ✅ | ❌ |
| TensorRT-LLM [Code] | ❌ | ✅ | ❌ |
| TGI [Code] | ❌ | ✅ | ❌ |
| RayLLM [Code] | ❌ | ✅ | ❌ |
| MLC LLM [Code] | ❌ | ✅ | ❌ |
| Sax [Code] | ❌ | ✅ | ❌ |
| Mosec [Code] | ❌ | ✅ | ❌ |

<!-- This table was last updated in December 2023. New frameworks are released frequently and existing ones continue to mature, so please feel free to suggest important distinguishing features or popular new frameworks. -->