
šŸ¤— Models & Datasets: https://huggingface.co/collections/alignment-handbook/handbook-v01-models-and-datasets-654e424d22e6880da5ebc015 | šŸ“ƒ Technical Report: https://arxiv.org/abs/2310.16944

The Alignment Handbook

Robust recipes to continue pretraining and to align language models with human and AI preferences.

What is this?

Just one year ago, chatbots were out of fashion and most people hadn't heard about techniques like Reinforcement Learning from Human Feedback (RLHF) to align language models with human preferences. Then, OpenAI broke the internet with ChatGPT and Meta followed suit by releasing the Llama series of language models which enabled the ML community to build their very own capable chatbots. This has led to a rich ecosystem of datasets and models that have mostly focused on teaching language models to follow instructions through supervised fine-tuning (SFT).

However, we know from the InstructGPT and Llama2 papers that significant gains in helpfulness and safety can be had by augmenting SFT with human (or AI) preferences. At the same time, aligning language models to a set of preferences is a fairly novel idea and there are few public resources available on how to train these models, what data to collect, and what metrics to measure for best downstream performance.

The Alignment Handbook aims to fill that gap by providing the community with a series of robust training recipes that span the whole pipeline.

News šŸ—žļø

Links šŸ”—

How to navigate this project šŸ§­

This project is simple by design and mostly consists of:

  * scripts to train and evaluate chat models
  * recipes to reproduce models like Zephyr-7b-β, along with the accelerate configs and Slurm scripts needed to run them

We are also working on a series of guides to explain how methods like direct preference optimization (DPO) work, along with lessons learned from gathering human preferences in practice. To get started, we recommend the following:

  1. Follow the installation instructions to set up your environment etc.
  2. Replicate Zephyr-7b-Ī² by following the recipe instructions.
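
For reference, recipes are launched with šŸ¤— Accelerate. As a rough sketch (the script name and config paths shown here are assumptions about the repository layout; the recipe's own instructions are authoritative), the full-finetuning SFT step of Zephyr-7b-β might be launched like this:

ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_full.yaml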

If you would like to train chat models on your own datasets, we recommend following the dataset formatting instructions here.
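
As a rough illustration of what that looks like (an assumption about the expected layout; the formatting instructions linked above are authoritative), chat datasets typically store each example as a messages list of role/content turns, e.g. one JSON object per line:

cat <<'EOF' > my_chat_dataset.jsonl
{"messages": [{"role": "user", "content": "What does SFT stand for?"}, {"role": "assistant", "content": "Supervised fine-tuning."}]}
EOF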

Contents

The initial release of the handbook will focus on the following techniques:

  * Continued pretraining of language models
  * Supervised fine-tuning (SFT) to teach language models to follow instructions
  * Preference alignment, e.g. direct preference optimization (DPO), to align models with human and AI preferences

Installation instructions

To run the code in this project, first create a Python virtual environment, e.g. with Conda:

conda create -n handbook python=3.10 && conda activate handbook
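
If Conda is not available, a plain virtual environment works too (a minimal sketch, assuming a local Python 3.10 interpreter):

python3.10 -m venv handbook && source handbook/bin/activate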

Next, install PyTorch v2.1.2 - the precise version is important for reproducibility! Since this is hardware-dependent, we direct you to the PyTorch Installation Page.
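
For example, on a Linux machine with a recent CUDA toolkit this might be as simple as the command below, but take the exact command (CUDA version, index URL) from the PyTorch Installation Page:

python -m pip install torch==2.1.2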

You can then install the remaining package dependencies as follows:

git clone https://github.com/huggingface/alignment-handbook.git
cd ./alignment-handbook/
python -m pip install .
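
If you plan to modify the source, you can instead do an editable install (as noted for setup.py in the project structure below):

python -m pip install -e .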

You will also need Flash Attention 2 installed, which can be done by running:

python -m pip install flash-attn --no-build-isolation

Note: If your machine has less than 96GB of RAM and many CPU cores, reduce the MAX_JOBS argument, e.g. MAX_JOBS=4 pip install flash-attn --no-build-isolation

Next, log into your Hugging Face account as follows:

huggingface-cli login
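
If you prefer a non-interactive login, e.g. on a remote cluster, the CLI also accepts the token directly (here $HF_TOKEN is a placeholder for your own access token):

huggingface-cli login --token $HF_TOKEN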

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

sudo apt-get install git-lfs
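
Depending on your setup, you may also need to initialize Git LFS once after installing it:

git lfs install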

You can now check out the scripts and recipes directories for instructions on how to train some models šŸŖ!

Project structure

ā”œā”€ā”€ LICENSE
ā”œā”€ā”€ Makefile                    <- Makefile with commands like `make style`
ā”œā”€ā”€ README.md                   <- The top-level README for developers using this project
ā”œā”€ā”€ chapters                    <- Educational content to render on hf.co/learn
ā”œā”€ā”€ recipes                     <- Recipe configs, accelerate configs, slurm scripts
ā”œā”€ā”€ scripts                     <- Scripts to train and evaluate chat models
ā”œā”€ā”€ setup.cfg                   <- Installation config (mostly used for configuring code quality & tests)
ā”œā”€ā”€ setup.py                    <- Makes project pip installable (pip install -e .) so `alignment` can be imported
ā”œā”€ā”€ src                         <- Source code for use in this project
ā””ā”€ā”€ tests                       <- Unit tests

Citation

If you find the content of this repo useful in your work, please cite it as follows (the @software entry type requires BibLaTeX, i.e. \usepackage{biblatex}):

@software{Tunstall_The_Alignment_Handbook,
  author = {Tunstall, Lewis and Beeching, Edward and Lambert, Nathan and Rajani, Nazneen and Huang, Shengyi and Rasul, Kashif and Bartolome, Alvaro and Rush, Alexander M. and Wolf, Thomas},
  license = {Apache-2.0},
  title = {{The Alignment Handbook}},
  url = {https://github.com/huggingface/alignment-handbook},
  version = {0.3.0.dev0}
}