<div align="center" > <img src="assets/banner.png" height=120 alt=""/>

Simple linear attention language models balance the recall-throughput tradeoff.

</div>

Based is an efficient architecture designed to recover attention-like capabilities (i.e., recall). It combines two simple ideas:

  1. Short sliding window attention (e.g., window size 64), to model fine-grained local dependencies
  2. "Dense" and global linear attention, to model long-range dependencies

In this way, we aim to capture the same dependencies as Transformers in a fully subquadratic model: exact softmax attention over a local window and a softmax-approximating linear attention over all other tokens. We find this helps close many of the performance gaps between Transformers and other subquadratic architecture proposals (matching perplexity is not all you need?).
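
To make the combination concrete, below is a minimal, unoptimized PyTorch sketch of the two components: a 2nd-order Taylor feature map for the global linear attention and a short causal sliding window for the local softmax attention. The helper names, the scaling choice, and the simple additive mix of the two outputs are illustrative assumptions, not the repository's actual layers or kernels.

import torch
import torch.nn.functional as F

def taylor_feature_map(x: torch.Tensor) -> torch.Tensor:
    """2nd-order Taylor approximation of exp(<q, k>): phi(q) @ phi(k) ~= 1 + qk + (qk)^2 / 2."""
    d = x.shape[-1]
    x = x / (d ** 0.25)  # scale so the dot products stay well-behaved (illustrative choice)
    x2 = (x.unsqueeze(-1) * x.unsqueeze(-2)).flatten(-2) / (2 ** 0.5)
    return torch.cat([torch.ones_like(x[..., :1]), x, x2], dim=-1)

def causal_linear_attention(q, k, v):
    """Global linear attention via cumulative sums of key-value outer products (O(N) in seqlen)."""
    q, k = taylor_feature_map(q), taylor_feature_map(k)
    kv = torch.cumsum(k.unsqueeze(-1) * v.unsqueeze(-2), dim=1)   # (B, N, D_phi, d_v)
    k_sum = torch.cumsum(k, dim=1)                                # (B, N, D_phi)
    num = torch.einsum("bnd,bndv->bnv", q, kv)
    den = torch.einsum("bnd,bnd->bn", q, k_sum).unsqueeze(-1)
    return num / (den + 1e-6)

def sliding_window_attention(q, k, v, window: int = 64):
    """Exact softmax attention restricted to a short causal window of recent tokens."""
    n = q.shape[1]
    scores = torch.einsum("bid,bjd->bij", q, k) / (q.shape[-1] ** 0.5)
    i, j = torch.arange(n)[:, None], torch.arange(n)[None, :]
    mask = (j > i) | (j <= i - window)  # mask future tokens and tokens outside the window
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.einsum("bij,bjd->bid", F.softmax(scores, dim=-1), v)

# toy single-head example: (batch, seqlen, head_dim)
q, k, v = torch.randn(3, 1, 128, 16).unbind(0)
hybrid = sliding_window_attention(q, k, v) + causal_linear_attention(q, k, v)
print(hybrid.shape)  # torch.Size([1, 128, 16])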

Releases

Installation

Note. The code in this repository is tested on python=3.8.18 and torch=2.1.2. We recommend using these versions in a clean environment.

# clone the repository
git clone git@github.com:HazyResearch/based.git
cd based

# install torch
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118 # due to observed causal-conv1d dependency

# install based package
pip install -e .

# Note: the causal-conv1d interface occasionally changes (see https://github.com/state-spaces/mamba/pull/168); check this if you hit an installation error.
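
If the editable install succeeded, the imports used throughout this README should resolve. A quick sanity check (illustrative, run from a Python session) is:

# sanity check: these are the same imports used in the examples below
import torch
from based.models.gpt import GPTLMHeadModel
print(torch.__version__, torch.cuda.is_available())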

Pretrained Checkpoints

We are releasing the following checkpoints for research, trained at the 360M and 1.3B parameter scales. Each checkpoint is trained on the same 10B to 50B tokens of the Pile corpus (token counts specified below), using the same data order, code, and infrastructure. A quick-start notebook is provided at notebooks/03-24-quick-start.ipynb; further details are below.

Use the code below to load the Based checkpoints:

import torch
from transformers import AutoTokenizer
from based.models.gpt import GPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-360m").to("cuda")

| Architecture | Size | Tokens | WandB | HuggingFace | Config |
|---|---|---|---|---|---|
| Based | 360m | 10b | 02-20-based-360m | hazyresearch/based-360m | reference/based-360m.yaml |
| Based | 1.4b | 10b | 02-21-based-1b | hazyresearch/based-1b | reference/based-1b.yaml |
| Based | 1.4b | 50b | 03-31-based-1b-50b | hazyresearch/based-1b-50b | reference/based_1.3b_50b_tok.yaml |
| Attention | 360m | 10b | 02-21-attn-360m | hazyresearch/attn-360m | reference/attn-360m.yaml |
| Attention | 1.4b | 10b | 02-25-attn-1b | hazyresearch/attn-1b | reference/attn-360m.yaml |
| Mamba | 360m | 10b | 02-21-mamba-360m | hazyresearch/mamba-360m | reference/mamba-360m.yaml |
| Mamba | 1.4b | 10b | 02-22-mamba-1b | hazyresearch/mamba-1b | reference/mamba-1b.yaml |
| Mamba | 1.4b | 50b | 03-31-mamba-1b-50b | hazyresearch/mamba-1b-50b | reference/mamba-1.3b_50b_tok.yaml |

Warning. We are releasing these models for the purpose of efficient architecture research. Because they have not been instruction fine-tuned or audited, they are not intended for use in any downstream applications.

The following code will run text generation for a prompt and print out the response.

input = tokenizer.encode("If I take one more step, it will be", return_tensors="pt").to("cuda")
output = model.generate(input, max_length=20)
print(tokenizer.decode(output[0]))

Note. The Attention and Mamba checkpoints require additional dependencies and slightly different loading code, shown below.

To load the Attention models, use the following code:

import torch
from transformers import AutoTokenizer
from based.models.transformer.gpt import GPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/attn-360m").to("cuda")

To use the Mamba checkpoints, first run pip install mamba-ssm and then use the following code:

import torch
from transformers import AutoTokenizer
from based.models.mamba import MambaLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = MambaLMHeadModel.from_pretrained_hf("hazyresearch/mamba-360m").to("cuda")

Train

Follow the README.md instructions at based/train/ to train your own Based models!

Evaluate

In our paper, we evaluate pretrained language models on a standard suite of benchmarks from the LM Evaluation Harness, as well as a suite of three recall-intensive tasks: SWDE, FDA, and SQuAD (completion format).

Under evaluate, we have a clone of EleutherAI's lm-evaluation-harness that includes these new tasks and provides scripts for running all the evaluations from the paper. The following instructions can be used to reproduce our results on the LM-Eval harness using the pretrained checkpoints.

Setup.

cd evaluate 

# init the submodule and install
git submodule init
git submodule update
pip install -e . 

Running Evaluations.

We provide a script, evaluate/launch.py, that launches evaluations on the checkpoints we've released.

For example, running the following from the evaluate folder will evaluate the 360M and 1.3B Based, Mamba, and Attention checkpoints on the SWDE, FDA, and SQuAD-Completion tasks.

Before launching, you may want to point your Hugging Face cache to a location with sufficient space (e.g., export TRANSFORMERS_CACHE and export HF_HOME).

python launch.py \
    --task swde  --task fda --task squad_completion \
    --model "hazyresearch/based-360m" \
    --model "hazyresearch/mamba-360m" \
    --model "hazyresearch/attn-360m" \
    --model "hazyresearch/based-1b" \
    --model "hazyresearch/mamba-1b" \
    --model "hazyresearch/attn-1b"

Optionally, if you have access to multiple GPUs, you can pass the -p flag to run each evaluation on a different GPU. To run a limited number of samples for each task (e.g. 100), use the --limit=100 option.

Below are the results produced by running the command above. Note: these numbers come from the new models trained and evaluated with the cleaned-up code in this repository, so they differ slightly from those reported in the paper; the trends and conclusions remain the same.

| Architecture | Size | HuggingFace | SWDE | FDA | SQuAD |
|---|---|---|---|---|---|
| Based | 360m | hazyresearch/based-360m | 25.65 | 14.34 | 24.23 |
| Mamba | 360m | hazyresearch/mamba-360m | 17.28 | 5.90 | 24.83 |
| Attention | 360m | hazyresearch/attn-360m | 56.26 | 57.89 | 27.85 |
| Based | 1.4b | hazyresearch/based-1b | 37.71 | 19.06 | 29.49 |
| Mamba | 1.4b | hazyresearch/mamba-1b | 28.35 | 11.07 | 29.42 |
| Attention | 1.4b | hazyresearch/attn-1b | 69.04 | 68.87 | 35.89 |

Note that the results shown may differ slightly if the Flash-Attention kernels are not used during inference.

Experiments on Synthetic Data

In our paper, we demonstrate the recall-throughput tradeoff using a synthetic associative recall task (see Figure 2, below, and Figure 3 in the paper).

<div align="center" > <img src="assets/tradeoff.png" height=200 alt=""/> </div>

The code for reproducing these figures is provided in a separate repository: HazyResearch/zoology. Follow the setup instructions in the Zoology README. The configs for reproducing the figures are provided in zoology/experiments. For example, you can create the figure above with:

python -m zoology.launch zoology/experiments/arxiv24_based_figure2/configs.py -p
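
For intuition about the synthetic task itself, here is a toy sketch of multi-query associative recall data: key-value pairs appear in context, then the model is queried on a subset of the keys and scored on whether it recalls the paired values. This is an illustrative simplification, not Zoology's actual data generator or configuration.

# Toy multi-query associative recall data (illustrative only; not Zoology's generator).
import torch

def make_recall_example(vocab_size=64, num_pairs=8, num_queries=4, seed=0):
    g = torch.Generator().manual_seed(seed)
    keys = torch.randperm(vocab_size // 2, generator=g)[:num_pairs]                      # keys from first half of vocab
    values = torch.randperm(vocab_size // 2, generator=g)[:num_pairs] + vocab_size // 2  # values from second half
    context = torch.stack([keys, values], dim=1).flatten()                               # k1 v1 k2 v2 ...
    query_idx = torch.randperm(num_pairs, generator=g)[:num_queries]
    inputs = torch.cat([context, keys[query_idx]])                                       # context followed by queried keys
    targets = values[query_idx]                                                          # values the model must recall
    return inputs, targets

inputs, targets = make_recall_example()
print(inputs.tolist(), targets.tolist())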

Benchmarking and Efficiency

Try out Based models with speedy ThunderKittens kernels!

git submodule init
git submodule update 
cd ThunderKittens/demos/based_demos

Enjoy!

Citation and Acknowledgements

This repo builds on the following papers. Please consider citing them if you find the work or code useful:

@article{arora2024simple,
  title={Simple linear attention language models balance the recall-throughput tradeoff},
  author={Arora, Simran and Eyuboglu, Sabri and Zhang, Michael and Timalsina, Aman and Alberti, Silas and Zinsley, Dylan and Zou, James and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2402.18668},
  year={2024}
}

@article{zhang2024hedgehog,
  title={The Hedgehog \& the Porcupine: Expressive Linear Attentions with Softmax Mimicry},
  author={Zhang, Michael and Bhatia, Kush and Kumbong, Hermann and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2402.04347},
  year={2024}
}

@article{arora2023zoology,
  title={Zoology: Measuring and Improving Recall in Efficient Language Models},
  author={Arora, Simran and Eyuboglu, Sabri and Timalsina, Aman and Johnson, Isys and Poli, Michael and Zou, James and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2312.04927},
  year={2023}
}

This project was made possible by a number of other open source projects; please cite if you use their work! Notably:

Models in this project were trained using compute provided by:

Please reach out with feedback and questions!