Continual Learning of Large Language Models: A Comprehensive Survey

This is a continually updated survey on Continual Learning of Large Language Models (CL-LLMs), serving as an extended and regularly maintained companion to the manuscript "Continual Learning of Large Language Models: A Comprehensive Survey".

Contributions are welcome! Feel free to submit a pull request or open an issue.

<p align="center"> <img src="fig/overview.png" alt="" data-canonical-src="fig/overview.png" width="100%"/> </p>

Update History

Table of Contents

- Relevant Survey Papers
- Continual Pre-Training of LLMs (CPT)
- Domain-Adaptive Pre-Training of LLMs (DAP)
  - For General Domains
  - Legal Domain
  - Medical Domain
  - Financial Domain
  - Scientific Domain
  - Code Domain
  - Language Domain
  - Other Domains
- Continual Fine-Tuning of LLMs (CFT)
  - General Continual Fine-Tuning
  - Continual Instruction Tuning (CIT)
  - Continual Model Refinement (CMR)
  - Continual Model Alignment (CMA)
- Continual Multimodal LLMs (CMLLMs)
- Continual LLMs Miscs

Reference

If you find our survey or this collection of papers useful, please consider citing our work:

```bibtex
@article{shi2024continual,
  title={Continual Learning of Large Language Models: A Comprehensive Survey},
  author={Shi, Haizhou and
          Xu, Zihao and
          Wang, Hengyi and
          Qin, Weiyi and
          Wang, Wenyuan and
          Wang, Yibin and
          Wang, Zifeng and
          Ebrahimi, Sayna and
          Wang, Hao},
  journal={arXiv preprint arXiv:2404.16789},
  year={2024}
}
```