
CodeApex is a bilingual programming evaluation benchmark for Large Language Models. It consists of two basic programming tasks: programming comprehension and code generation. The programming comprehension task consists of 250 multiple-choice questions spanning three categories: conceptual understanding, commonsense reasoning, and multi-hop reasoning. The code generation task consists of 476 C++-based algorithm problems covering common algorithmic knowledge points such as binary search and depth-first search. In the future, CodeApex will publish other code-related functional tests, such as code correction. This is the evaluation repository for the paper "CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models".

<img src="figures/intro.png" alt="Overview diagram of the CodeApex benchmark." style="zoom:40%;" />

Leaderboard

The public leaderboard is presented in Leaderboard.

Programming Comprehension

(AO = answer-only prompting, CoT = chain-of-thought prompting.)

| Model | AO-EN | CoT-EN | AO-ZH | CoT-ZH |
| --- | --- | --- | --- | --- |
| ChatGLM-6B | 0.2960 | 0.2640 | 0.2960 | 0.2880 |
| ChatGLM-6B* | 0.3200 | 0.2760 | 0.2960 | 0.2480 |
| ChatGLM2-6B | 0.2987 | 0.2960 | 0.2987 | 0.3240 |
| ChatGLM2-6B* | 0.3040 | 0.2960 | 0.3040 | 0.3360 |
| MOSS-16B* | 0.3120 | 0.2360 | 0.3120 | 0.2240 |
| Chinese-Alpaca-7B | 0.2947 | 0.2640 | 0.2880 | 0.2680 |
| Chinese-Alpaca-7B* | 0.2840 | 0.2800 | 0.2840 | 0.2640 |
| Chinese-Alpaca-plus-7B | 0.2987 | 0.2880 | 0.2653 | 0.3240 |
| Chinese-Alpaca-plus-7B* | 0.2573 | 0.2480 | 0.2853 | 0.2400 |
| Chinese-Alpaca-13B | 0.2733 | 0.2520 | 0.2733 | 0.2640 |
| Chinese-Alpaca-13B* | 0.2547 | 0.2160 | 0.2627 | 0.2640 |
| Chinese-Alpaca-plus-13B | 0.2827 | 0.2560 | 0.2827 | 0.2600 |
| Chinese-Alpaca-plus-13B* | 0.2973 | 0.2560 | 0.2573 | 0.2920 |
| BELLE-7B-1M | 0.3080 | 0.2120 | 0.2947 | 0.2720 |
| BELLE-7B-1M* | 0.3040 | 0.1840 | 0.3013 | 0.2600 |
| BELLE-7B-2M | 0.2613 | 0.2040 | 0.2760 | 0.2240 |
| BELLE-7B-2M* | 0.2400 | 0.1880 | 0.2413 | 0.2400 |
| BELLE-LLaMA-7B-0.6M | 0.2880 | 0.2320 | 0.3053 | 0.2760 |
| BELLE-LLaMA-7B-0.6M* | 0.3000 | 0.2600 | 0.3000 | 0.3200 |
| BELLE-LLaMA-7B-2M | 0.2680 | 0.1880 | 0.2387 | 0.2640 |
| BELLE-LLaMA-7B-2M* | 0.2840 | 0.1880 | 0.2840 | 0.2800 |
| BELLE-LLaMA-13B-2M | 0.2840 | 0.2120 | 0.2840 | 0.2560 |
| BELLE-LLaMA-13B-2M* | 0.2693 | 0.2120 | 0.2827 | 0.2600 |
| InternLM-Chat-7B | 0.3733 | 0.3160 | 0.3720 | 0.2880 |
| Baichuan-7B | 0.3147 | 0.1000 | 0.3147 | 0.0720 |
| EduChat-base-002-7B* | 0.3147 | 0.2360 | 0.2480 | 0.2480 |
| EduChat-base-002-13B* | 0.3267 | 0.2680 | 0.3013 | 0.2800 |
| EduChat-sft-002-7B* | 0.2920 | 0.2560 | 0.2560 | 0.2520 |
| CodeT5-plus-16B | 0.2640 | - | 0.2640 | - |
| CodeT5-plus-16B* | 0.2467 | - | 0.3160 | - |
| CodeT5-plus-6B | 0.3173 | - | 0.2693 | - |
| CodeT5-plus-6B* | 0.3040 | - | 0.2573 | - |
| GPT-3.5-turbo | 0.4893 | 0.4740 | 0.4893 | 0.5260 |
| GPT-3.5-turbo* | 0.4413 | 0.4853 | 0.5053 | 0.5187 |

Code Generation

| Model | Compilable (ZH) | AC@1 (ZH) | AC@all (ZH) | AC Rate (ZH) | Compilable (EN) | AC@1 (EN) | AC@all (EN) | AC Rate (EN) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5-turbo | 0.9118 | 0.6660 | 0.4853 | 0.5644 | 0.8929 | 0.6597 | 0.4832 | 0.5606 |
| MOSS-16B | 0.5231 | 0.2458 | 0.1492 | 0.1879 | 0.6092 | 0.2626 | 0.1513 | 0.2002 |
| vicuna-13B | 0.7983 | 0.3046 | 0.1492 | 0.2045 | 0.7815 | 0.2983 | 0.1218 | 0.1861 |
| ChatGLM-6B | 0.5693 | 0.2143 | 0.0924 | 0.1371 | 0.6429 | 0.2080 | 0.0693 | 0.1203 |
| ChatGLM2-6B | 0.5399 | 0.2143 | 0.1197 | 0.1560 | 0.5399 | 0.1891 | 0.0819 | 0.1243 |
| Chinese-alpaca-plus-13B | 0.7164 | 0.2773 | 0.1387 | 0.1886 | 0.7017 | 0.2878 | 0.1345 | 0.1963 |
| BELLE-7B-1M | 0.4244 | 0.1639 | 0.0651 | 0.0954 | 0.5273 | 0.2038 | 0.0651 | 0.1161 |
| BELLE-LLaMA-13B-2M | 0.5105 | 0.1996 | 0.0903 | 0.1283 | 0.5357 | 0.2227 | 0.0861 | 0.1434 |
| WizardCoder-15B | 0.8634 | 0.4496 | 0.2773 | 0.3468 | 0.8361 | 0.4391 | 0.2752 | 0.3444 |
| Starcoder-self-instruct | 0.4853 | 0.2227 | 0.1366 | 0.1679 | 0.6765 | 0.3382 | 0.1891 | 0.2494 |
| Baichuan-Chat-13B | 0.6218 | 0.3130 | 0.1786 | 0.2303 | 0.7605 | 0.3319 | 0.1681 | 0.2310 |
| InternLM-chat-7B | 0.4265 | 0.1513 | 0.0924 | 0.1128 | 0.7626 | 0.3025 | 0.1597 | 0.2126 |

Data

Test data are published in this repo.

First, clone this repo:

git clone https://github.com/SJTU-LIT/ceval.git

The data is in ProgrammingComprehension/testcases and CodeGeneration/data, in JSON format.

The format of programming comprehension is:

[
    {
        "question": "If there is a definition: char str[] = {'h','1','2','0','a','b'}; const char *p = str; Which of the following statements is correct:____\n",
        "A": "p[2] = 's'",
        "B": "strcpy(str,\"123456\")",
        "C": "strcpy(p,\"abc\")",
        "D": "strcpy(str,\"abc\")",
        "category": 1,
        "id": 0
    },
    ...
]
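To show how these fields fit together, here is a minimal Python sketch (not part of the repo; the entry is copied from the example above) that turns one programming-comprehension entry into a multiple-choice prompt:

```python
# Minimal sketch (not part of the repo): build a multiple-choice prompt
# from one entry in the programming-comprehension format.
questions = [
    {
        "question": "If there is a definition: char str[] = {'h','1','2','0','a','b'}; "
                    "const char *p = str; Which of the following statements is correct:____\n",
        "A": "p[2] = 's'",
        "B": "strcpy(str,\"123456\")",
        "C": "strcpy(p,\"abc\")",
        "D": "strcpy(str,\"abc\")",
        "category": 1,
        "id": 0,
    }
]

def build_prompt(q):
    # Question text, the four options on separate lines, then an answer cue.
    options = "\n".join(f"{opt}. {q[opt]}" for opt in "ABCD")
    return q["question"] + options + "\nAnswer:"

prompt = build_prompt(questions[0])
```

In practice the full test file would be read with `json.load` and a prompt built per entry.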

The format of code generation is:

[
   {
        "id": 1964,
        "problem_description": "Given 4 positive integers a, b, c, d, calculate the value of the expression (a*b*c)%d, where % represents the modulo operation.",
        "function_declaration": "int calculate_remainder(int a, int b, int c, int d)",
        "code_context": "#include<cstdio>\n\n// function start\n\n// function end\n\nint main(){\n    int a,b,c,d;\n    scanf(\"%d%d%d%d\",&a,&b,&c,&d);\n    // calling start\n    int result = calculate_remainder(a, b, c, d);\n    // calling end\n    printf(\"%d\", result);\n}",
        "example": "[{\"input\": \"2 3 4 5\", \"output\": \"4\"}]",
        "time_limit": 1000,
        "memory_limit": 256
    },
    ...
]
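The `// function start` / `// function end` markers in `code_context` mark where a model's generated function is spliced in before compilation. A minimal sketch of that splicing (illustrative only; the repo's own evaluation scripts are authoritative), using the example problem above:

```python
# Illustrative sketch: splice a generated function into code_context
# between the "// function start" and "// function end" markers.
code_context = (
    "#include<cstdio>\n\n// function start\n\n// function end\n\n"
    "int main(){\n    int a,b,c,d;\n    scanf(\"%d%d%d%d\",&a,&b,&c,&d);\n"
    "    // calling start\n    int result = calculate_remainder(a, b, c, d);\n"
    "    // calling end\n    printf(\"%d\", result);\n}"
)

# A hypothetical model completion for the declared function.
generated = (
    "int calculate_remainder(int a, int b, int c, int d){\n"
    "    long long r = (long long)a % d;\n"
    "    r = r * b % d;\n"
    "    r = r * c % d;\n"
    "    return (int)r;\n}"
)

def assemble_source(context, function_code):
    # Replace the empty region between the markers with the generated code.
    head, rest = context.split("// function start")
    _, tail = rest.split("// function end")
    return head + "// function start\n" + function_code + "\n// function end" + tail

source = assemble_source(code_context, generated)
```

The assembled `source` is a complete C++ translation unit that can be compiled and run against the `example` input/output pairs under the given time and memory limits.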

How to Evaluate on CodeApex

Programming Comprehension

You can evaluate your model's responses on our website. Users are responsible for the correctness and compliance of their inputs. The answer file generated from the LLM's output is JSON: it is divided into three dictionaries in order, representing the answers for CU, CR, and MCR, with each dictionary's answers sorted by ID. We provide an example in example.json. Your input should be an .npy file containing your answers to the test cases; run deal_answer.py to generate the JSON file for evaluation.
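As an illustration only (the authoritative format is defined by example.json and the provided deal_answer.py), grouping a flat array of predicted option letters into three ID-keyed dictionaries might look like the following; the category split sizes and string keys here are assumptions, not the official spec:

```python
# Hypothetical illustration only: the real submission format is defined by
# example.json and deal_answer.py in the repo. Assumes answers are option
# letters ordered by question ID, with CU questions first, then CR, then MCR.
answers = ["A", "C", "B", "D", "A", "B"]  # e.g. numpy.load("answers.npy").tolist()
n_cu, n_cr = 2, 2  # made-up category sizes for the example

cu = {str(i): a for i, a in enumerate(answers[:n_cu])}
cr = {str(i): a for i, a in enumerate(answers[n_cu:n_cu + n_cr], start=n_cu)}
mcr = {str(i): a for i, a in enumerate(answers[n_cu + n_cr:], start=n_cu + n_cr)}
submission = [cu, cr, mcr]  # three dictionaries in order: CU, CR, MCR
```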

How to submit?

Code Generation

Online evaluation of code generation tasks is being set up. Coming soon.

Citation

Please cite using the following BibTeX entry:

@misc{fu2023codeapex,
      title={CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models}, 
      author = {Fu, Lingyue and Chai, Huacan and Luo, Shuang and Du, Kounianhua and Zhang, Weiming and Fan, Longteng and Lei, Jiayi and Rui, Renting and Lin, Jianghao and Fang, Yuchen and Liu, Yifan and Wang, Jingkuan and Qi, Siyuan and Zhang, Kangning and Zhang, Weinan and Yu, Yong}, 
      year={2023},
      eprint={2309.01940},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}