<h1 align="center"> <img src="./docs/static/images/rho_logo.png" width="100" alt="rho-logo" /> <br> Rho-1: Not All Tokens Are What You Need </h1> <div align="center">

</div> <p align="center"> <a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> • <a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> • <a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> • <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a> <!-- <a href="https://twitter.com/zebgou/status/1778676535404396697"><b>[🐦 Twitter]</b></a> • --> <!-- <a href="https://huggingface.co/spaces/zubingou/rho-1"><b>[🤖 Gradio Demo]</b></a> --> </p> <p align="center"> <img src="./docs/static/images/acc_vs_tokens_1b_7b.png" width="1000"> <br> <em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em> </p>

🔥 News

<!-- - [2024/04/14] 🚀🚀🚀 We release [Gradio demo of Rho-1 Code Interpreter](https://huggingface.co/spaces/zubingou/rho-1), try it out! -->

💡 Introduction

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.

Selective Language Modeling (SLM)

<p align="center"> <img src="./docs/static/images/example.png" width="1000"> <br> <em>Figure 2: <b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise. <b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens. <b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em> </p> <p align="center"> <img src="./docs/static/images/pipeline.png" width="1000"> <br> <em>Figure 3: <b>The pipeline of Selective Language Modeling.</b> SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training. It involves three steps: (Step 1) Initially, train a reference model on high-quality data. (Step 2) Then, score each token's loss in a corpus using the reference model. (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em> </p> <!-- results: -->
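To make the pipeline concrete, below is a minimal PyTorch-style sketch of the selective loss in Step 3. The function name `selective_lm_loss` and arguments such as `select_ratio` are illustrative assumptions rather than this repository's actual API; the idea is simply to score each token's excess loss (training-model loss minus frozen-reference-model loss) and back-propagate only through the top-scoring fraction of tokens.

```python
import torch
import torch.nn.functional as F

def selective_lm_loss(logits, ref_logits, labels, select_ratio=0.6):
    """Sketch of SLM: apply the LM loss only to tokens with the highest excess loss.

    logits / ref_logits: (batch, seq_len, vocab) from the trained model and the
    frozen reference model (computed under torch.no_grad());
    labels: (batch, seq_len) token ids.
    """
    # Shift so position t predicts token t+1, as in standard causal LM training.
    logits = logits[:, :-1]
    ref_logits = ref_logits[:, :-1]
    targets = labels[:, 1:]

    # Per-token cross-entropy for the trained model and the reference model.
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), targets, reduction="none")

    # Excess loss: how much worse the trained model is than the reference on each token.
    excess = ce - ref_ce  # (batch, seq_len - 1)

    # Keep only the top `select_ratio` fraction of tokens ranked by excess loss.
    flat = excess.flatten()
    k = max(1, int(select_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    mask = (excess >= threshold).float()

    # Average the ordinary LM loss over the selected tokens only.
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```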

Evaluation Results

Base models (Few-shot CoT):

| Model | Size | Data | Uniq. Tokens | Train Tokens | GSM8K | MATH | MMLU STEM | SAT |
|---|---|---|---|---|---|---|---|---|
| **1-2B Base Models** | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | 34.4 | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | 56.3 |
| Rho-Math-1B-v0.1 | 1.1B | OWM | 14B | 30B | 36.2 | 15.6 | 23.3 | 28.1 |
| **>= 7B Base Models** | | | | | | | | |
| Mistral | 7B | - | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | 63.9 | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | 34.2 | 56.4 | 84.4 |
| Rho-Math-7B-v0.1 | 7B | OWM | 14B | 10.5B | 66.9 | 31.0 | 54.6 | 84.4 |

Tool-integrated reasoning (Code Interpreter):

| Model | Size | SFT Data | GSM8K | MATH | SVAMP | ASDiv | MAWPS | TabMWP | GSM-Hard | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| **Open-Source Small Models** | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | 82.7 | 86.8 | 93.8 | 74.0 | 67.2 | 76.9 |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | 52.0 | 80.1 | 87.1 | 93.8 | 85.8 | 63.1 | 77.4 |
| Rho-Math-1B-Interpreter-v0.1 | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| Rho-Math-7B-Interpreter-v0.1 | 7B | ToRA-69k | 81.3 | 51.8 | 80.8 | 85.5 | 94.5 | 70.1 | 63.1 | 75.3 |

🚀 Quick Start

Evaluation

```sh
cd rho-1/math-evaluation-harness
```

Base model few-shot evaluation:

```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```

SFT model (code-interpreter) evaluation:

```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```

Our reproduced outputs are provided in `rho-1/outputs.zip`.
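Outside the harness, the released checkpoints load with the standard Hugging Face `transformers` API. The snippet below is a minimal sketch for a quick sanity check; the prompt and generation settings are illustrative and do not reproduce the few-shot CoT configuration behind the reported numbers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/rho-math-1b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # move to GPU with .to("cuda") if available

# Simple math prompt; the evaluation harness uses its own few-shot CoT prompts.
prompt = "Question: What is 15% of 200?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```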

🍀 Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

☕️ Citation

If you find this repository helpful, please consider citing our paper:

```bibtex
@misc{lin2024rho1,
      title={Rho-1: Not All Tokens Are What You Need},
      author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
      year={2024},
      eprint={2404.07965},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

🌟 Star History

Star History Chart