A Fork of Code Generation LM Evaluation Harness

This is a fork of bigcode-evaluation-harness used in our paper Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus. We made it compatible with vLLM, which enables much faster code generation.

For the basics of bigcode-evaluation-harness, such as installation, please refer to the original readme below.

Release Branches

Evaluating LLMs on the Benchmarks

python ./bigcode-evaluation-harness/main.py \
    --model {model_path} \
    --tasks {task} \
    --metric_output_path {output_json_path} \
    --save_generations_path {generation_path} \
    --save_references_path {reference_path} \
    --vllm_tensor_parallel_size {num_gpus} \
    --limit 500 \
    --max_length_generation 512 \
    --precision bf16 \
    --use_auth_token \
    --temperature 0.8 \
    --do_sample True \
    --n_samples 1 \
    --batch_size 64 \
    --trust_remote_code \
    --allow_code_execution \
    --save_generations \
    --use_vllm

where {task} is one of the following:

[
    'humaneval',
    'mbpp',
    'mbppplus',
    'multiple-cpp',
    'multiple-js',
    'multiple-go',
]
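
As a sketch, you could sweep a model over every task in this list with a shell loop. The model name, output directory, and tensor-parallel size below are illustrative placeholders, not recommendations:

# sweep one model over all supported tasks (paths and model are placeholders)
mkdir -p results
for task in humaneval mbpp mbppplus multiple-cpp multiple-js multiple-go; do
    python ./bigcode-evaluation-harness/main.py \
        --model bigcode/santacoder \
        --tasks "$task" \
        --metric_output_path "results/${task}_metrics.json" \
        --save_generations_path "results/${task}_generations.json" \
        --save_references_path "results/${task}_references.json" \
        --vllm_tensor_parallel_size 1 \
        --limit 500 \
        --max_length_generation 512 \
        --precision bf16 \
        --use_auth_token \
        --temperature 0.8 \
        --do_sample True \
        --n_samples 1 \
        --batch_size 64 \
        --trust_remote_code \
        --allow_code_execution \
        --save_generations \
        --use_vllm
done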
Original Readme of Code Generation LM Evaluation Harness

Features

This is a framework for the evaluation of code generation models. This work is inspired by EleutherAI/lm-evaluation-harness for evaluating language models in general. We welcome contributions to fix issues, enhance features, and add new benchmarks. You can find contribution guides in docs/guide.md and CONTRIBUTING.md, and more documentation in docs/README.md.

More details about the features and tasks of this framework can be found in the documentation in docs/README.md.

Setup

git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness

Install torch based on your device type, and install the other packages using:

pip install -e .
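
For example, the torch step on a CUDA 11.8 machine might look like the following (the CUDA version is an assumption; pick the wheel matching your setup):

# assumption: CUDA 11.8; substitute the index URL for your CUDA/CPU setup
pip install torch --index-url https://download.pytorch.org/whl/cu118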

To run the DS-1000 benchmark, additional constraints must be resolved.

# python version must be 3.7.10
pip install -e ".[ds1000]" # installs all additional dependencies except PyTorch
# torch==1.12.1 required. Download version with relevant GPU support etc., e.g.,
pip install torch==1.12.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116

# to suppress any tensorflow optimization warnings, 
# precede call to "accelerate launch" with "TF_CPP_MIN_LOG_LEVEL=3"

# on some systems, tensorflow will attempt to allocate all GPU memory
# to its process at import which will raise a CUDA out-of-memory error
# setting "export TF_FORCE_GPU_ALLOW_GROWTH=true" resolves this
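
Putting these pieces together, a DS-1000 run might be launched as follows (the task name is a placeholder; see docs/README.md for the exact DS-1000 task names):

# sketch: suppress TF warnings and let TF grow GPU memory on demand
TF_CPP_MIN_LOG_LEVEL=3 TF_FORCE_GPU_ALLOW_GROWTH=true accelerate launch main.py \
    --model <MODEL_NAME> \
    --tasks <DS1000_TASK> \
    --allow_code_execution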

Also make sure you have git-lfs installed and are logged in to the Hub:

huggingface-cli login
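
If git-lfs is missing, a minimal setup on a Debian/Ubuntu machine might look like this (the package-manager step is an assumption; use your platform's equivalent):

sudo apt-get install git-lfs   # assumption: Debian/Ubuntu package name
git lfs install                # enable git-lfs for your user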

We use accelerate to generate code/text in parallel when multiple GPUs are present (multi-GPU mode). You can configure it using:

accelerate config
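
If you prefer to skip the interactive prompts, recent versions of accelerate also offer a non-interactive default setup (availability depends on your accelerate version):

accelerate config default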

This evaluation harness can also be used in an evaluation-only mode, in which case a multi-CPU setting can be used. For large models, we recommend specifying the precision of the model using the --precision flag instead of accelerate config, so that only one copy of the model is kept in memory. You can also load models in 8-bit with the flag --load_in_8bit or in 4-bit with --load_in_4bit, provided you have bitsandbytes installed along with the required transformers and accelerate versions.
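
For instance, a quantized run might combine these flags as follows (a sketch, assuming bitsandbytes is installed; the model name is a placeholder):

accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks humaneval \
  --load_in_8bit \
  --allow_code_execution \
  --save_generations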

The evaluation part (solution execution) for MultiPL-E requires extra dependencies for some programming languages; we provide a Dockerfile with all dependencies. See the Docker section for more details.

Usage

You can use this evaluation harness to generate solutions to code benchmarks with your model, to evaluate (and execute) the solutions, or to do both. While it is better to use GPUs for the generation, the evaluation only requires CPUs, so it might be beneficial to separate these two steps. By default, both generation and evaluation are performed.

For more details on how to evaluate on the tasks, please refer to the documentation in docs/README.md.

Generation and evaluation

Below is an example to generate and evaluate on a task.

accelerate launch  main.py \
  --model <MODEL_NAME> \
  --tasks <TASK_NAME> \
  --limit <NUMBER_PROBLEMS> \
  --max_length_generation <MAX_LENGTH> \
  --temperature <TEMPERATURE> \
  --do_sample True \
  --n_samples 100 \
  --batch_size 10 \
  --precision <PRECISION> \
  --allow_code_execution \
  --save_generations

Some tasks, such as codexglue_code_to_text-<LANGUAGE>, codexglue_code_to_text-python-left, conala, and concode, use BLEU evaluation and don't require code execution. For these tasks we generate one candidate solution per problem, so use n_samples=1 and batch_size=1. (Note that batch_size should always be less than or equal to n_samples.) An example follows below.
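
For example, a BLEU-evaluated task such as conala can be run without code execution (placeholders as in the example above):

accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks conala \
  --max_length_generation <MAX_LENGTH> \
  --temperature <TEMPERATURE> \
  --do_sample True \
  --n_samples 1 \
  --batch_size 1 \
  --save_generations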

Generation only

If you want to generate solutions without executing and evaluating the code, add --generation_only to the instructions above. This will save the solutions in a JSON file at the path given by --save_generations_path in the working directory.

This can be useful if you don't want to execute code on the machine you're using for generation, for security or efficiency reasons. For instance, you can do the generation on multiple GPUs, then switch to a multi-worker CPU machine or Docker container for the execution.
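
As a sketch, a generation-only run is the same command as above with the extra flags:

accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks <TASK_NAME> \
  --max_length_generation <MAX_LENGTH> \
  --do_sample True \
  --n_samples 100 \
  --batch_size 10 \
  --generation_only \
  --save_generations \
  --save_generations_path generations.json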

Evaluation only

If you already have the generations in a JSON file from this evaluation harness and want to evaluate them, specify the path of the generations via the --load_generations_path argument. You may need to reconfigure accelerate to use multiple CPUs.

Below is an example; be mindful to specify arguments appropriate to the task you are evaluating on, and note that the model value here only serves to document the experiment. Also add --n_samples to specify the number of samples to evaluate per problem (usually the same value used in generation).

accelerate launch main.py \
  --tasks mbpp \
  --allow_code_execution \
  --load_generations_path generations.json \
  --model incoder-temperature-08

Docker containers

For safety, we provide Dockerfiles to do the execution inside a Docker container. To do that, first do the generation on your machine and save it, for example in generations.json, by adding the flag --generation_only to the command. Then use the Docker image that we provide:

$ docker pull ghcr.io/bigcode-project/evaluation-harness
$ docker tag ghcr.io/bigcode-project/evaluation-harness evaluation-harness

If you want to evaluate on MultiPL-E, we have a different Dockerfile, since it requires more dependencies; use:

$ docker pull ghcr.io/bigcode-project/evaluation-harness-multiple
$ docker tag ghcr.io/bigcode-project/evaluation-harness-multiple evaluation-harness-multiple

Building Docker images

If you modify the evaluation harness, you may want to rebuild the docker images.

Here's how to build a docker image for the evaluation harness:

$ sudo make DOCKERFILE=Dockerfile  all

This creates an image called evaluation-harness and runs a test on it. To skip the test, remove all from the command.

For MultiPL-E:

$ sudo make DOCKERFILE=Dockerfile-multiple all

This creates an image called evaluation-harness-multiple.

Evaluating inside a container

Suppose you generated text with the bigcode/santacoder model and saved it in generations_py.json with:

accelerate launch  main.py \
    --model bigcode/santacoder  \
    --tasks multiple-py  \
    --max_length_generation 650 \
    --temperature 0.8   \
    --do_sample True  \
    --n_samples 200  \
    --batch_size 200  \
    --trust_remote_code \
    --generation_only \
    --save_generations \
    --save_generations_path generations_py.json

To evaluate on generations_py.json (or another file), run the container (here from the image evaluation-harness-multiple) and mount the file with -v; specify --n_samples, and allow code execution with --allow_code_execution (and add --limit with the number of problems if it was used during generation):

$ sudo docker run -v $(pwd)/generations_py.json:/app/generations_py.json:ro -it evaluation-harness-multiple python3 main.py \
    --model bigcode/santacoder \
    --tasks multiple-py \
    --load_generations_path /app/generations_py.json \
    --allow_code_execution  \
    --temperature 0.8 \
    --n_samples 200

Implementing new tasks

To implement a new task in this evaluation harness, see the guide in docs/guide.md. There are also contribution guidelines in CONTRIBUTING.md.

Documentation

We provide documentation for the existing benchmarks and how to run the evaluation in docs/README.md.

Remarks

Acknowledgements

We thank EleutherAI for their work on lm-evaluation-harness, from which this repository draws inspiration.

Cite as

@misc{bigcode-evaluation-harness,
  author       = {Ben Allal, Loubna and
                  Muennighoff, Niklas and
                  Kumar Umapathi, Logesh and
                  Lipkin, Ben and
                  von Werra, Leandro},
  title = {A framework for the evaluation of code generation models},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/bigcode-project/bigcode-evaluation-harness}},
  year = 2022,
}