
ReMax: A Simple, Effective, and Efficient Method for Aligning Large Language Models

Overview

ReMax is a reinforcement learning method tailored for reward maximization in RLHF (reinforcement learning from human feedback).

<img src='./images/framework.png' width='600'>

Simple Implementation

ReMax is easy to implement: the core algorithm takes about 6 lines of code. We provide an implementation based on the DeepSpeed framework in this repository.

<img src='./images/algorithm.png' width='600'>
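
At its core, ReMax is a REINFORCE-style policy gradient that uses the reward of the policy's own greedy decoding as a baseline, so no value model is needed. Below is a minimal sketch of the loss, not the repository's actual code: `sample_fn`, `greedy_fn`, `reward_fn`, and `logp_fn` are hypothetical placeholders for sampling from the policy, greedy decoding, reward scoring, and computing the summed log-probability of a response, and practical details (e.g., KL regularization toward the reference model, batching) are omitted.

# Minimal sketch of the ReMax loss under the assumptions stated above.
# reward_fn and logp_fn are assumed to return torch tensors; logp_fn is
# differentiable with respect to the policy parameters.
def remax_loss(prompt, sample_fn, greedy_fn, reward_fn, logp_fn):
    response = sample_fn(prompt)   # stochastic rollout: y ~ pi_theta(. | x)
    baseline = greedy_fn(prompt)   # greedy rollout, used only as a reward baseline
    advantage = reward_fn(prompt, response) - reward_fn(prompt, baseline)
    # REINFORCE-style objective: weight the response log-probability by the
    # baseline-subtracted reward; no value network is trained (unlike PPO).
    return -(advantage.detach() * logp_fn(prompt, response)).mean()

Minimizing this loss with a standard optimizer recovers the ReMax update described in the paper.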

Memory Efficient

ReMax is memory-efficient. Compared with PPO, it saves about 50% of GPU memory, which can instead be spent on a roughly 1.3x larger batch size.

<details>
<summary>Results of tuning Llama2-7B with A100-80GB GPUs</summary>

| GPUs | Offload | Method | Maximum Batch Size |
|------|---------|--------|--------------------|
| 4    | False   | PPO    | ❌ (OOM)            |
| 4    | False   | ReMax  | 4x26=104           |
| 4    | True    | PPO    | 4x30=120           |
| 4    | True    | ReMax  | 4x40=160           |
| 1    | True    | PPO    | 1x32=32            |
| 1    | True    | ReMax  | 1x42=42            |

*: Gradient checkpointing and ZeRO-2 are used for the LLM.

*: ZeRO-3 and offload are used for the reward model and the reference model.

</details>

Fast Training

ReMax trains fast. It does not need to train a value model and requires fewer computations, typically yielding about a 2x training speed-up.

<details>
<summary>Results of tuning Llama2-7B with A100-80GB GPUs</summary>

| GPUs | Offload | Method | Total Training Time |
|------|---------|--------|---------------------|
| 4    | False   | PPO    | ❌ (OOM)             |
| 4    | False   | ReMax  | 2.4h                |
| 4    | True    | PPO    | 6.0h                |
| 4    | True    | ReMax  | 2.8h                |
| 1    | True    | PPO    | 22.0h               |
| 1    | True    | ReMax  | 10.2h               |

*: Gradient checkpointing and ZeRO-2 are used for the LLM.

*: ZeRO-3 and offload are used for the reward model and the reference model.

*: Measurement is based on 45k training samples (1 epoch) from the full-hh-rlhf dataset.

</details>

Easy to Tune

ReMax is easy to tune for good performance. On the AlpacaEval benchmark, when judged by GPT-4, ReMax achieves win rates of 84.22%, 75.28%, and 63.60% over SFT, DPO, and PPO, respectively.

<img src='./images/alpacaeval_result.png' width='600'>

Change Log

How to use

Prepare

The Python environment can be set up using Anaconda with the provided environment.yml file.

conda env create -f environment.yml
conda activate llm

Step 1 SFT

cd step1_supervised_finetuning

# OPT(1.3B)
bash training_scripts/opt/run_opt_1.3b.sh

# Llama2(7B)
bash training_scripts/llama2/run_llama2_7b.sh

Step 2 Reward Learning

cd step2_reward_model_finetuning

# OPT(1.3B)
bash training_scripts/opt/run_opt_1.3b.sh

# Llama2(7B)
bash training_scripts/llama2/run_llama2_7b.sh

Step 3 RLHF

cd step3_rlhf_finetuning

# OPT(1.3B)
bash training_scripts/opt/run_opt_1.3b.sh

# Llama2(7B)
bash training_scripts/llama2/run_llama2_7b.sh

Acknowledgements

Our code is heavily based on DeepSpeed-Chat. Please refer to DeepSpeed-Chat for detailed instructions.

Bibtex

If you find this code helpful, please cite our paper:

@article{li2023remax,
  title   = {ReMax: A Simple, Effective, and Efficient Method for Aligning Large Language Models},
  author  = {Li, Ziniu and Xu, Tian and Zhang, Yushun and Yu, Yang and Sun, Ruoyu and Luo, Zhi-Quan},
  journal = {arXiv preprint arXiv:2310.10505},
  year    = {2023},
}