CodeFuseEval: Multi-tasking Evaluation Benchmark for Code Large Language Model

<p align="center"> <img src="https://github.com/codefuse-ai/MFTCoder/blob/main/assets/github-codefuse-logo-update.jpg" width="50%" /> </p> <div align="center"> <p> <a href="README_CN.md" target="_blank">简体中文</a>| <a href="https://modelscope.cn/datasets/codefuse-ai/CodeFuseEval/summary" target="_blank">CodeFuseEval on ModelScope</a>| <a href="https://huggingface.co/datasets/codefuse-ai/CodeFuseEval" target="_blank">CodeFuseEval on Hugging Face</a> </p> <p> <img src="https://img.shields.io/github/stars/codefuse-ai/codefuse-evaluation?style=social" alt="GitHub stars"/> <img src="https://img.shields.io/github/forks/codefuse-ai/codefuse-evaluation?style=social" alt="GitHub forks"/> <img src="https://img.shields.io/github/issues/codefuse-ai/codefuse-evaluation" alt="GitHub issues"/> </p> </div>

CodeFuseEval is a code-generation benchmark that combines the multi-task scenarios of the CodeFuse model with the HumanEval-X and MBPP benchmarks. It is designed to evaluate model performance across a variety of tasks, including code completion, code generation from natural language, test-case generation, cross-language code translation, and code generation from Chinese instructions, among others. The benchmark is continuously evolving, so stay tuned!

<p> <img src="./figures/EnglishIntroduction.png" alt="English Introduction"/> </p>

Generation Environment

CodeFuse-13B: Python 3.8 or above, PyTorch 1.12 or above (2.0 or above recommended), Transformers 4.24.0 or above, and CUDA 11.4 or above (required for GPU inference and flash-attention users).

CodeFuse-CodeLlama-34B: python >= 3.8, pytorch >= 2.0.0, transformers == 4.32.0, sentencepiece, CUDA 11.

Evaluation Environment

Evaluating the generated code involves compiling and running it in multiple programming languages. The language environments and package versions we use are as follows:

| Dependency | Version  |
|------------|----------|
| Python     | 3.10.9   |
| JDK        | 18.0.2.1 |
| Node.js    | 16.14.0  |
| js-md5     | 0.7.3    |
| C++        | 11       |
| g++        | 7.5.0    |
| Boost      | 1.75.0   |
| OpenSSL    | 3.0.0    |
| go         | 1.18.4   |
| cargo      | 1.71.1   |

To save you the trouble of setting up the environments for all these languages, we provide a Docker image with the required environments and CodeFuseEval preinstalled:

```bash
docker pull registry.cn-hangzhou.aliyuncs.com/codefuse/codefuseeval:latest
```

If you are familiar with Docker, you can instead build the image from codefuseEval/docker/Dockerfile, modifying the Dockerfile as needed:

```bash
cd codefuseEval/docker
docker build [OPTIONS] .
```

After obtaining the image, you can start a container with the following command:

```bash
docker run -it --gpus all --mount type=bind,source=<LOCAL PATH>,target=<PATH IN CONTAINER> [OPTIONS] <IMAGE NAME:TAG>
```
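For example, to mount the repository from the current directory into the container (the paths here are illustrative):

```bash
docker run -it --gpus all --mount type=bind,source=$(pwd),target=/workspace registry.cn-hangzhou.aliyuncs.com/codefuse/codefuseeval:latest
```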

Check Result Command:

We provide scripts to check the results of the provided code LLMs. Use the following commands to verify the corresponding results and your environment:

```bash
bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-CodeLlama-34B/humaneval_result_python.jsonl humaneval_python
bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-13B/humaneval_result_python.jsonl humaneval_python
```

How to use CodeFuseEval

1. Download the model and update its information in ckpt_config.json, mainly the "path" parameter for the corresponding model and version.
2. Run the following generation command to generate results:

   ```bash
   bash codefuseEval/script/generation.sh MODELNAME MODELVERSION EVALDATASET OUTFILE
   ```

   For example:

   ```bash
   bash codefuseEval/script/generation.sh CodeFuse-13B v1 humaneval_python result/test.jsonl
   ```

3. Run the following evaluation command to evaluate the generated results for the corresponding model and version:

   ```bash
   bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE>
   ```

   For example:

   ```bash
   bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python
   ```

Evaluation

We recommend evaluating inside the provided image. To evaluate the generated samples, save the generated code in the following JSON-lines format:

{"task_id": "../..", "generation: "..."}
{"task_id": "../..", "generation: "..."}
...

and evaluate it using the evaluation command described below, run from the root directory of the repository (<font color='red'>please execute with caution: the generated code may have unexpected behaviour, though with very low probability. See the warnings in execution.py and uncomment the execution lines at your own risk</font>).
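As an illustration, a generation loop might serialize its outputs like this (a minimal sketch, not the repository's code; the sample task_id and generation below are placeholders, and task_ids must match those in the evaluation dataset):

```python
import json

# Minimal sketch: write model outputs in the expected JSON-lines format.
# "samples" stands in for the output of your own generation loop.
samples = [
    ("Python/0", "def has_close_elements(numbers, threshold):\n    ..."),
]

with open("codefuseEval/result/test.jsonl", "w", encoding="utf-8") as f:
    for task_id, generation in samples:
        f.write(json.dumps({"task_id": task_id, "generation": generation}) + "\n")
```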

Evaluation Data

Data are stored in codefuseEval/data in JSON-lines format. We first integrated the HumanEval-X dataset.

Evaluation Metrics

In addition to the unbiased pass@k metric currently provided in Codex, we are also integrating the related open-source metrics from Hugging Face, together with CodeBLEU. These are the main metrics currently recommended to users.

For other related metrics, you can inspect the metric implementations or the evaluation code and adapt them to your requirements.
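For reference, the unbiased pass@k estimator introduced with Codex can be computed as follows (a standalone sketch for illustration, not the repository's implementation; n is the number of samples generated per task and c the number that pass):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), in numerically stable product form."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples per task, 3 of them pass => pass@1 = 0.3
print(pass_at_k(n=10, c=3, k=1))
```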

In addition, we report the model's total and average generation time over the dataset (total_time_cost and average time cost). These indicators are output automatically on every generation run, making it convenient to compare the generation performance of models in the same environment.

Evaluation Command:

```bash
bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE> <TEST_GROUDTRUTH>
```

For example:

```bash
bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python
```

We also provide the following flag, which substitutes the sample answers from the test dataset as the generated answers, for self-testing.

When TEST_GROUDTRUTH is True, self-test mode is enabled: PROBLEM_FILE is read and its sample answers are substituted as the generated answers for testing.

When TEST_GROUDTRUTH is False, evaluation mode is enabled: RESULT_FILE and PROBLEM_FILE are read, and the generated answers are used for testing.
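For example, to self-test humaneval_python against its own reference answers (shown under the assumption that the flag is passed as the fourth positional argument; consult the script for the exact syntax):

```bash
bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python True
```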

More Information

Evaluating Your Own Model and Dataset

1. Register your evaluation dataset.
2. Register your evaluation model.

We designed an infrastructure called Processor, whose main purpose is to absorb the differences between models. It needs to implement three abstract functions.

You can extend BaseProcessor in codefuseEval/processor/base.py and implement these functions, as in the hypothetical sketch below.
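As an illustration only, a custom processor might look like the following (the method names here are hypothetical; consult codefuseEval/processor/base.py for the actual abstract interface and signatures):

```python
# Hypothetical sketch of a custom processor; the real abstract method
# names and signatures are defined in codefuseEval/processor/base.py.
from codefuseEval.processor.base import BaseProcessor

class MyModelProcessor(BaseProcessor):
    def load_model_tokenizer(self, path):
        """Assumed hook: load the model and tokenizer from the configured path."""
        ...

    def process_before(self, prompt, task_mode):
        """Assumed hook: adapt the dataset prompt to the model's input format."""
        ...

    def process_generation(self, raw_output, task_mode):
        """Assumed hook: extract clean code from the raw model output."""
        ...
```

The model and its processor are then registered in ckpt_config.json, for example: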

```jsonc
{
  "CodeFuse-13B": {      // model name
    "v1": {              // model version
      "path": "/mnt/model/CodeFuse13B-evol-instruction-4K/",  // model path
      "processor_class": "codefuseEval.process.codefuse13b.Codefuse13BProcessor",  // model processor
      "tokenizer": {     // tokenizer params used to tokenize the input string
        "truncation": true,
        "padding": true,
        "max_length": 600
      },
      "generation_config": {   // generation config params
        "greedy": {            // a JSON object defines a decode mode; set the "decode_mode" param to load the params defined here
          "do_sample": false,
          "num_beams": 1,
          "max_new_tokens": 512
        },
        "beams": {
          "do_sample": false,
          "num_beams": 5,
          "max_new_tokens": 600,
          "num_return_sequences": 1
        },
        "dosample": {
          "do_sample": true
        },
        "temperature": 0.2,    // non-object values are defaults set directly in generation_config; a decode mode overrides params of the same name
        "max_new_tokens": 600,
        "num_return_sequences": 1,
        "top_p": 0.9,
        "num_beams": 1,
        "do_sample": true
      },
      "batch_size": 1,     // batch size for generation
      "sample_num": 1,     // number of samples generated per data item
      "decode_mode": "beams"  // decode mode chosen from those defined in generation_config
    }
  }
}
```
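To make the decode-mode mechanics concrete, here is a minimal sketch of how the effective generation parameters could be resolved (assumed behaviour, not the repository's loader, and assuming a comment-free copy of the config): defaults come from the scalar values in generation_config, and the selected decode_mode overrides params of the same name.

```python
import json

# Minimal sketch of the assumed config-resolution logic.
with open("ckpt_config.json", encoding="utf-8") as f:
    cfg = json.load(f)["CodeFuse-13B"]["v1"]

gen_cfg = cfg["generation_config"]
defaults = {k: v for k, v in gen_cfg.items() if not isinstance(v, dict)}
effective = {**defaults, **gen_cfg[cfg["decode_mode"]]}
print(effective)  # with decode_mode "beams": num_beams=5, do_sample=False, ...
```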

Check Dataset Command:

To check whether the reference values provided by the evaluation datasets are correct, we provide the following commands.

CodeCompletion

```bash
bash codefuseEval/script/check_dataset.sh humaneval_python
bash codefuseEval/script/check_dataset.sh humaneval_java
bash codefuseEval/script/check_dataset.sh humaneval_js
bash codefuseEval/script/check_dataset.sh humaneval_rust
bash codefuseEval/script/check_dataset.sh humaneval_go
bash codefuseEval/script/check_dataset.sh humaneval_cpp
```

NL2Code

```bash
bash codefuseEval/script/check_dataset.sh mbpp
```

CodeTrans

```bash
bash codefuseEval/script/check_dataset.sh codeTrans_python_to_java
bash codefuseEval/script/check_dataset.sh codeTrans_python_to_cpp
bash codefuseEval/script/check_dataset.sh codeTrans_cpp_to_java
bash codefuseEval/script/check_dataset.sh codeTrans_cpp_to_python
bash codefuseEval/script/check_dataset.sh codeTrans_java_to_python
bash codefuseEval/script/check_dataset.sh codeTrans_java_to_cpp
```

CodeScience

```bash
bash codefuseEval/script/check_dataset.sh codeCompletion_matplotlib
bash codefuseEval/script/check_dataset.sh codeCompletion_numpy
bash codefuseEval/script/check_dataset.sh codeCompletion_pandas
bash codefuseEval/script/check_dataset.sh codeCompletion_pytorch
bash codefuseEval/script/check_dataset.sh codeCompletion_scipy
bash codefuseEval/script/check_dataset.sh codeCompletion_sklearn
bash codefuseEval/script/check_dataset.sh codeCompletion_tensorflow
bash codefuseEval/script/check_dataset.sh codeInsertion_matplotlib
bash codefuseEval/script/check_dataset.sh codeInsertion_numpy
bash codefuseEval/script/check_dataset.sh codeInsertion_pandas
bash codefuseEval/script/check_dataset.sh codeInsertion_pytorch
bash codefuseEval/script/check_dataset.sh codeInsertion_scipy
bash codefuseEval/script/check_dataset.sh codeInsertion_sklearn
bash codefuseEval/script/check_dataset.sh codeInsertion_tensorflow
```