LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

Implementation of the proposed Self-Extend in LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning.

Updates

Possible issues unrelated to Self-Extend:

Third-party Implementations

Llama.cpp https://github.com/ggerganov/llama.cpp

Llama.cpp has a great implementation and integration for self-extend! Have a try! 😄

1. Overview

This work elicits LLMs' inherent ability to handle long contexts without fine-tuning. The limited length of training sequences may restrict the application of Large Language Models (LLMs) to long input sequences at inference time. In this work, we argue that existing LLMs themselves have inherent capabilities for handling long contexts. Based on this argument, we suggest extending LLMs' context window by the LLMs themselves to fully utilize this inherent ability. We propose Self-Extend to stimulate LLMs' long-context handling potential. The basic idea is to construct bi-level attention information: the group level and the neighbor level. The two levels are computed by the original model's self-attention, which means the proposed method does not require any training.

<p align="center"> <img width="600" src="./img/self_ext.jpg"> </p>
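
To make the bi-level idea concrete, below is a minimal sketch (not the exact code in this repository) of how a relative distance could be mapped onto the two levels: distances inside the neighbor window keep their precise value, while larger distances are reused at a coarser, grouped granularity. The function name and the example values are illustrative only.

```python
def bi_level_relative_position(distance: int, group_size: int, neighbor_window: int) -> int:
    """Simplified illustration of Self-Extend's two attention levels.

    Small distances (neighbor level) keep their exact relative position;
    large distances (group level) are mapped onto coarser, reused positions,
    so the largest position the model sees stays within its trained range.
    """
    if distance < neighbor_window:
        return distance  # neighbor level: precise positions
    # group level: coarse positions beyond the neighbor window
    return neighbor_window + (distance - neighbor_window) // group_size


# Example: with a neighbor window of 1024 and group size of 16, a token
# 8000 positions away is attended to at an effective position of
# 1024 + (8000 - 1024) // 16 = 1460, well inside a typical pretraining window.
print(bi_level_relative_position(8000, group_size=16, neighbor_window=1024))
```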

2. How to Use SelfExtend

2.1 Setup

For the current Llama implementation, the Python packages used are:

transformers==4.38.2
flash_attn==2.5.6 

We recommend using this Docker image: hoytjin/selfextend_docker:v0.1

We previously provided patches for several models. See legacy_patch_before_4_38, which contains the legacy patches (Llama, Mistral, Phi, etc.) and a README.

Installation

Clone the repository to your machine and copy your modeling files into the cloned repo directory.

2.2 Run

import SelfExtend

# Load your model, e.g., loaded_model = AutoModelForCausalLM.from_pretrained(model_path) 

# Apply Self-Extend with your chosen group size and neighbor window.

SelfExtend.apply(loaded_model, group_size, window_size, enable_flash_attention=False)

# Inference, e.g., loaded_model.generate(...)

enable_flash_attention defaults to False; set enable_flash_attention=True if the model is loaded with FlashAttention enabled.
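
Putting the steps above together, here is a minimal end-to-end sketch. The model name, prompt, and hyperparameter values are placeholders for illustration, not recommendations; the SelfExtend.apply call follows the signature shown above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

import SelfExtend

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model name
tokenizer = AutoTokenizer.from_pretrained(model_path)
loaded_model = AutoModelForCausalLM.from_pretrained(model_path)

# Apply Self-Extend: group size and neighbor window are example values
# (see Section 3 for how to choose them).
SelfExtend.apply(loaded_model, 16, 1024, enable_flash_attention=False)

inputs = tokenizer("A very long prompt goes here ...", return_tensors="pt")
outputs = loaded_model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```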

We use passkey retrieval as an example to show how to use Self-Extend. You may check example.py:

python example.py
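
For readers unfamiliar with the task, the sketch below shows roughly what a passkey-retrieval prompt looks like: a secret number hidden in a long stretch of filler text, followed by a question asking the model to recall it. It illustrates the task format only and is not the exact prompt used by example.py.

```python
import random

def build_passkey_prompt(n_filler: int = 300) -> tuple[str, str]:
    """Hide a random 5-digit passkey inside repeated filler text."""
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    insert_at = random.randint(0, n_filler)
    haystack = (filler * insert_at
                + f"The pass key is {passkey}. Remember it. "
                + filler * (n_filler - insert_at))
    return haystack + "\nWhat is the pass key?", passkey

prompt, answer = build_passkey_prompt()
# Feed `prompt` to the Self-Extended model and check whether `answer`
# appears in its output.
```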

3. How to Choose the group_size and neighbor_window

The following thoughts are based on our experience:

SelfExtend on 'Needle in a Haystack'

<p align="center"> <img width="600" src="./img/2d.jpg"> </p> <p align="center"> <img width="600" src="./img/3d.jpg"> </p>

Empirical Rule:

Denoting the pretraining context window as $L$, the target extension length as $N$, the neighbor window as $W$, and the group size as $G$, the empirical rule for selecting hyperparameters is to ensure that the following inequality holds: $(\frac{1}{2} \sim \frac{2}{3}) \times L > W + \frac{N-W}{G}$

This rule is empirical. We believe it stems from the fact that large relative positions are not well trained: empirically, only a portion ($\frac{1}{2} \sim \frac{2}{3}$) of the positions are well trained, and SelfExtend should only leverage these well-trained relative positions for extension. This explains two observations. Excessively small group sizes can degrade performance: they provide precise position information but force SelfExtend to utilize less well-trained relative positions for extension. Excessively large neighbor window sizes can also degrade performance: they provide more neighbor information but likewise necessitate the use of less well-trained relative positions.

The experimental results indicate that SelfExtend is not overly sensitive to hyperparameter selection. Predefined, heuristic values for the group size and neighbor window size are often sufficient to achieve satisfactory performance.

[TL;DR] SelfExtend is not overly sensitive to hyperparameter selection. One could use a representative task to find proper hyperparameters, or directly follow our empirical inequality: $(\frac{1}{2} \sim \frac{2}{3}) \times L > W + \frac{N-W}{G}$
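
As a concrete illustration of the inequality, the sketch below plugs in example numbers ($L = 4096$ as a typical pretraining window; the other values are example choices, not recommendations):

```python
# Worked check of the empirical rule above with illustrative numbers.
L = 4096    # pretraining context window (e.g., a typical Llama-2-style model)
N = 16384   # target extended context length
W = 1024    # neighbor window
G = 16      # group size

effective = W + (N - W) / G    # longest relative position SelfExtend will use
budget = (L / 2, 2 * L / 3)    # range of "well-trained" positions

print(f"effective position range used: {effective:.0f}")           # 1984
print(f"well-trained budget: {budget[0]:.0f} to {budget[1]:.0f}")   # 2048 to 2731
print("rule satisfied:", effective < budget[0])                     # True (conservative end)
```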


If you find our method useful, please kindly cite our paper.

@misc{jin2024llm,
      title={LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning}, 
      author={Hongye Jin and Xiaotian Han and Jingfeng Yang and Zhimeng Jiang and Zirui Liu and Chia-Yuan Chang and Huiyuan Chen and Xia Hu},
      year={2024},
      eprint={2401.01325},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

4. Contributing

We welcome contributions from the research community to improve the efficiency of SelfExtend. If you have an idea or would like to report a bug, please open an issue or submit a pull request.

5. License

The code is released under the MIT License.