<h1 align="center"> <span><i>LLMCL</i></span> </h1> <h3 align="center"> Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning </h3>

## Overview

LLMCL is a repository built on the Hugging Face Transformers library, designed to assess the continual learning capability of large language models. With this repository, users can easily customize datasets, specify models, and experiment with classical continual learning methods.

## Key Features

## Quick Start

1. Install dependencies

```bash
conda create -n llmcl python=3.10
conda activate llmcl
pip install -r requirements.txt
```

2. Start Training

```bash
./scripts/train_seq.sh
```
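
`train_seq.sh` fine-tunes the model on the tasks one after another, which is the sequential setting where catastrophic forgetting appears. Conceptually it boils down to a loop like the sketch below (the `train.py` entry point, flag names, and task names are illustrative assumptions, not the repository's actual interface):

```bash
# Illustrative sketch only; the real invocation lives in ./scripts/train_seq.sh.
# Entry point, flag names, and task names are hypothetical.
TASKS=("task_1" "task_2" "task_3")
PREV=""
for task in "${TASKS[@]}"; do
  python train.py --task "$task" --resume_from "$PREV"   # keep training the same adapter
  PREV="outputs/$task"
done
```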

3. Inference

```bash
./scripts/infer_seq.sh
```

4. Customize

You can easily customize the scripts for your own use.
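
For example, here is a minimal sketch of a customized run, assuming you copy and edit the provided script (the commented variable names are assumptions, not the script's documented interface):

```bash
# Copy the provided script and point it at your own model, data, and method.
cp scripts/train_seq.sh scripts/train_custom.sh
# Then edit scripts/train_custom.sh, e.g. (variable names are hypothetical):
#   MODEL_PATH=/path/to/llama-2-7b
#   DATA_DIR=./data_files
#   CL_METHOD=lora
bash scripts/train_custom.sh
```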

## Reproduce

To reproduce our results, you need to:

1. Request access to the Llama-2 model and download the TRACE benchmark, MedMCQA, and JEC-QA into the `./data_files` folder (see the layout sketch below).
2. Customize your training script and run it.
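
For step 1, one possible layout of the data folder is sketched below; the sub-folder names are assumptions, so match them to whatever the data-loading code expects:

```bash
# Hypothetical layout of ./data_files; folder names are assumptions.
mkdir -p ./data_files/TRACE ./data_files/MedMCQA ./data_files/JEC-QA
# Place each downloaded benchmark under its corresponding folder.
```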

## Citation

If you find this repository helpful, please consider citing our work.

```bibtex
@misc{ren2024analyzing,
      title={Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning},
      author={Weijieying Ren and Xinlong Li and Lei Wang and Tianxiang Zhao and Wei Qin},
      year={2024},
      eprint={2402.18865},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```