ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline

Introduction

Large language models (LLMs) have shown an excellent mastery of human language, but still struggle in real-world applications that require mathematical problem-solving. While many strategies and datasets for enhancing LLMs' mathematical ability have been developed, it remains a challenge to simultaneously maintain and improve both language and mathematical capabilities in deployed LLM systems. In this work, we tailor the Self-Critique pipeline, which addresses the challenge in the feedback-learning stage of LLM alignment. We first train a general Math-Critique model from the LLM itself to provide feedback signals. Then, we sequentially employ rejective fine-tuning and direct preference optimization over the LLM's own generations for data collection. Based on ChatGLM3-32B, we conduct a series of experiments on both academic datasets and our newly created challenging dataset, MathUserEval. Results show that our pipeline significantly enhances the LLM's mathematical problem-solving ability while still improving its language ability, outperforming LLMs that could be two times larger.
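
Purely as an illustration of the rejective-fine-tuning idea mentioned above (sample several answers from the model itself, keep only those the Math-Critique model rates highly, then fine-tune on the kept answers), here is a minimal hedged sketch. It is not the authors' implementation; `generate`, `critique_score`, `k`, and `threshold` are hypothetical placeholders.

    # Minimal sketch of rejective-fine-tuning (RFT) data selection -- NOT the authors' code.
    # `generate(question) -> str` and `critique_score(question, answer) -> float` are
    # placeholder callables standing in for the LLM and the Math-Critique model.
    def collect_rft_data(questions, generate, critique_score, k=8, threshold=8.0):
        """Sample k answers per question from the model itself and keep only those the
        critique model scores at or above `threshold`; the real selection criteria are
        described in the paper."""
        kept = []
        for q in questions:
            answers = [generate(q) for _ in range(k)]              # the model's own generations
            scored = [(a, critique_score(q, a)) for a in answers]  # Math-Critique feedback signal
            kept.extend({"question": q, "answer": a} for a, s in scored if s >= threshold)
        return kept  # fine-tune on `kept`; high/low-scored answer pairs can also seed DPO data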

Paper: https://arxiv.org/abs/2404.02893


MathUserEval Test Set

MathUserEval is a test set designed for real-world usage scenarios, focusing on questions that users care about and on more challenging mathematical problems. Some of the data comes from university exam questions, while the rest is derived from simulated conversations: for the latter, we assigned a group of annotators who, drawing on their experience and observations from using large models in daily applications, posed math-related questions to our system.

Based on the distribution of the collected data, we divided the test set into two main categories, Elementary Math Problems and Advanced Math Problems, with eight subcategories in total. The number of questions in each category is shown in the table below. All questions are presented in an open-ended format; possible answers include a single number, multiple numbers, or a mathematical expression. All Overall scores are macro-averages over the subcategories.

| Category   | Sub-Category   | Size |
|------------|----------------|------|
| Elementary | Calculate      | 75   |
| Elementary | Algebra        | 113  |
| Elementary | Geometry       | 81   |
| Elementary | Trigonometry   | 73   |
| Advanced   | Discrete Math  | 45   |
| Advanced   | Probability    | 46   |
| Advanced   | Linear Algebra | 58   |
| Advanced   | Calculus       | 54   |

MathUserEval contains 545 high-quality mathematics questions in total, along with 22 supplementary interdisciplinary mathematics questions. Each sample includes a high-quality reference answer carefully written by annotators, as well as its category within our classification system. The data is stored in data/math-user-eval.jsonl, with each line containing one sample in JSON format.
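
Since the file is in JSON Lines format, each line can be parsed independently; a minimal loading sketch (the field names match the example below):

    import json

    # Load MathUserEval: one JSON object per line in data/math-user-eval.jsonl
    samples = []
    with open("data/math-user-eval.jsonl", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                samples.append(json.loads(line))

    print(len(samples))             # total number of questions
    print(samples[0]["question"])   # other fields: question_id, reference, category, subcategory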

Here is an example:

{
    "question_id": 163,
    "question": "求函数f(x)=x+x^2+x^3的原函数",
    "reference": "函数f(x)=x+x^2+x^3的原函数为F(x)=1\\/2*x^2+1\\/3*x^3+1\\/4*x^4+C(C为常数)。\n",
    "category": "高等数学",
    "subcategory": "calculus"
}

Metric

To effectively evaluate the quality of responses, MathUserEval currently uses GPT-4-1106-Preview to analyze and then score the responses. The evaluation method is consistent with the one used for the logical-reasoning questions in AlignBench.
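
The exact judging prompt follows AlignBench and ships with this repository's judge.py; purely as an illustration of what such an LLM-as-judge call looks like, here is a minimal sketch using the OpenAI Python client. The prompt text and score parsing below are simplified placeholders, not the actual AlignBench template.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def judge(question, reference, answer):
        """Ask gpt-4-1106-preview to analyze a model answer against the reference and score it.
        The prompt is a simplified placeholder, not the template used by judge.py."""
        prompt = (
            f"Question: {question}\n"
            f"Reference answer: {reference}\n"
            f"Model answer: {answer}\n"
            "Analyze the model answer against the reference step by step, "
            "then give an overall score from 1 to 10."
        )
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return response.choices[0].message.content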


How to Use MathUserEval

The entire evaluation process consists of three steps: obtaining the generation results of the model being evaluated, calling the evaluation model for analysis and scoring, and finally calculating the results. The corresponding scripts are saved in scripts and can be run after modifying their parameters.

  1. Step One: Obtaining the Generation Results of the Model Being Evaluated

    First, you need an API for the model being evaluated in order to generate results. If it is an open-source model, you need to deploy it yourself as an API that can be called to obtain replies. (This part is not included in this repository.)

    Second, implement your own API calling class in inference/api_models; the do_nothing class can serve as an example. (This class is mainly used for calling the API; note that the class name should match the file name.) A hedged sketch of what such a class might look like is given at the end of this step.

    Third, modify the parameters and run the following script to obtain the generation results of the model being evaluated.

    MODEL=do_nothing # TODO: Modify the model name (same as your API calling class)
    
    python get_answers.py \
        --model $MODEL \
        --workers 1 \
        --question-file data/math-user-eval.jsonl \
        --save-dir data/model_answer
    

    The replies from the model being evaluated will be saved in data/model_answer for the next step of the evaluation.
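
    As noted above, here is a hedged sketch of what an API calling class might look like. The actual interface expected by inference/api_models is defined by the do_nothing example in this repository; the method name, constructor arguments, and response schema below are assumptions for illustration only.

    # inference/api_models/my_model.py -- hypothetical example only; mirror the real
    # interface of the do_nothing class in this repository rather than this sketch.
    import requests

    class my_model:  # the class name should match the file name (my_model.py)
        def __init__(self, endpoint="http://localhost:8000/generate"):  # assumed self-hosted endpoint
            self.endpoint = endpoint

        def get_response(self, question):  # assumed method name; follow do_nothing's actual signature
            """Send the question to the deployed model API and return the reply text."""
            resp = requests.post(self.endpoint, json={"prompt": question}, timeout=120)
            resp.raise_for_status()
            return resp.json()["response"]  # assumed response schema of the self-hosted API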

  2. Step Two: Calling the Evaluation Model for Analysis and Scoring

    Currently, we use gpt-4-1106-preview as the evaluation model. Later, for the convenience of the Chinese community, we plan to offer Math-Critique in the form of an API as an alternative to gpt-4-1106-preview for researchers to use.

    First, fill in your OpenAI API key in config/mathusereval.json.

    Then, modify and run the following script to obtain the judgments from the evaluation model.

    MODEL=do_nothing # TODO: Modify the model name (same as your API calling class)
    
    python judge.py \
        --config-path config/mathusereval.json \
        --model-name $MODEL \
        --parallel 1
    

    The evaluation results will be saved in data/judgment.

  3. Step Three: Calculating the Final Results

    Run the following script to obtain the final results for all models saved in data/judgment.

    python show_result.py \
        --input-dir data/judgment \
        --ques-file data/data_release.jsonl \
        --save-file data/results/results.xlsx
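
    show_result.py produces the official numbers. As a rough illustration of the macro-averaging mentioned in the test-set description (the Overall score is the unweighted mean of per-subcategory averages rather than the mean over all questions), here is a minimal sketch; the record keys "subcategory" and "score" are assumptions about the judgment format, not the repository's actual schema.

    from collections import defaultdict

    def macro_average(judgments):
        """Overall = unweighted mean of per-subcategory mean scores (macro-average).
        Assumes each judgment dict carries a 'subcategory' label and a numeric 'score'."""
        by_cat = defaultdict(list)
        for j in judgments:
            by_cat[j["subcategory"]].append(j["score"])
        per_cat = {cat: sum(scores) / len(scores) for cat, scores in by_cat.items()}
        overall = sum(per_cat.values()) / len(per_cat)
        return overall, per_cat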
    

Leaderboard

| Model | Overall | Elementary Avg | Algebra | Calculate | Geometry | Trigonometry | Advanced Avg | Calculus | Discrete Math | Linear Algebra | Probability |
|-------|---------|----------------|---------|-----------|----------|--------------|--------------|----------|---------------|----------------|-------------|
| GPT-4-0125-Preview | 5.79 | 5.26 | 5.04 | 7.63 | 3.98 | 4.59 | 6.71 | 7.26 | 6.62 | 5.48 | 7.72 |
| GPT-4-1106-Preview | 5.73 | 5.07 | 4.96 | 7.00 | 3.78 | 4.71 | 6.81 | 7.39 | 6.96 | 5.29 | 7.91 |
| GLM-4 | 5.11 | 4.86 | 4.47 | 6.56 | 3.95 | 4.74 | 5.43 | 6.00 | 5.67 | 4.26 | 6.02 |
| ChatGLM3-32B-SFT-2312 + RFT&DPO | 4.23 | 4.01 | 3.88 | 5.41 | 2.90 | 3.99 | 4.59 | 5.22 | 4.76 | 3.38 | 5.20 |
| GPT-4-0613 | 4.14 | 3.34 | 2.88 | 4.76 | 3.17 | 2.78 | 5.33 | 5.57 | 5.49 | 4.26 | 6.22 |
| ChatGLM3-32B-SFT-2312 + RFT | 4.01 | 3.86 | 3.84 | 5.37 | 2.57 | 3.77 | 4.26 | 4.72 | 4.69 | 2.98 | 4.89 |
| Qwen-72B-Chat | 3.87 | 3.99 | 3.96 | 4.81 | 3.83 | 3.34 | 3.67 | 4.54 | 3.71 | 2.84 | 3.65 |
| GPT-3.5-Turbo-0613 | 3.42 | 3.04 | 2.81 | 4.07 | 2.23 | 3.26 | 4.07 | 4.83 | 4.38 | 3.26 | 3.91 |
| ChatGLM3-32B-SFT-2312 | 3.39 | 3.35 | 3.35 | 4.51 | 2.51 | 3.11 | 3.44 | 4.04 | 4.38 | 2.41 | 3.13 |
| Claude-2 | 3.29 | 2.63 | 2.35 | 3.63 | 2.20 | 2.53 | 4.35 | 4.56 | 4.53 | 3.29 | 5.28 |
| DeepSeek-Chat-67B | 3.24 | 2.76 | 2.21 | 4.73 | 2.12 | 2.30 | 3.84 | 4.41 | 4.82 | 2.79 | 3.52 |
| Yi-34B-Chat | 2.64 | 2.49 | 2.04 | 3.61 | 2.25 | 2.27 | 2.87 | 2.80 | 3.47 | 2.03 | 3.41 |

Citation

If you find our work helpful, please consider citing the following paper.

@misc{xu2024chatglmmath,
      title={ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline}, 
      author={Yifan Xu and Xiao Liu and Xinghan Liu and Zhenyu Hou and Yueyan Li and Xiaohan Zhang and Zihan Wang and Aohan Zeng and Zhengxiao Du and Wenyi Zhao and Jie Tang and Yuxiao Dong},
      year={2024},
      eprint={2404.02893},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}