
# code-eval

## What

This is a repo I use to run HumanEval on code models; adjust as needed. Some scripts were adapted from the WizardCoder repo (`process_eval.py`). The evaluation code is duplicated across several files, mostly to handle edge cases around model tokenization and loading (it will be cleaned up).

## Results

The table is sorted by pass@1 score.

| model | size | pass@1 | pass@10 | screenshot |
| --- | --- | --- | --- | --- |
| sahil2801/replit-code-instruct-glaive | 3B | 63.5% | 67% | instruct-glaive |
| WizardCoder-15B-V1.0 | 15B | 57% | 68.9% | wizardcoder |
| bigcode/starcoder | 15B | 34.6% | 48.7% | starcoder |
| openchat/opencoderplus | 15B | 27.3% | 43.9% | opencoder |
| teknium/Replit-v1-CodeInstruct-3B | 3B | 25.8% | 42.6% | replit-codeinstruct-v1 |
| teknium/Replit-v2-CodeInstruct-3B | 3B | 21.5% | 31% | replit-codeinstruct-v2 |
| replit-code-v1-3b | 3B | 17.1% | 29.8% | replit-code-v1 |
| mpt-7b | 7B | 15.9% | 23.7% | mpt-7b |
| xgen-7b-8k-base | 7B | 14.9% | 22.5% | xgen-7b-8k-base |
| openllama-7b-v2 | 7B | 14% | 23.1% | openllama-7b-v2 |
| llama-2-7b | 7B | 13.1% | 21.9% | llama-2-7b |
| llama-7b | 7B | 12.1% | 18.9% | llama-7b |
| mpt-30b | 30B | pending | pending | pending |
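The pass@1 and pass@10 columns use the unbiased pass@k estimator from the HumanEval paper (which `evaluate_functional_correctness` implements): sample `n` completions per problem, count the `c` that pass the tests, and estimate the probability that at least one of `k` random draws passes. A minimal self-contained sketch of that estimator:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k), computed stably.

    n: total samples per problem, c: samples that passed, k: draw budget.
    """
    if n - c < k:
        # Fewer than k failures exist, so any k draws must include a pass.
        return 1.0
    prod = 1.0
    for i in range(n - c + 1, n + 1):
        prod *= 1.0 - k / i
    return 1.0 - prod
```

The final benchmark score is this value averaged over all 164 HumanEval problems.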

## FAQ

**Why is there a discrepancy between some of these scores and the official numbers?**

Because the prompt and post-processing the official models used for their evaluation on this benchmark are often neither published nor obvious. The goal here is to reproduce those numbers as closely as possible, and in many cases it is possible to get very close to the published figures.

All of the scores here were produced independently of any published numbers and are reproducible by cloning the repo and following the setup.

**Why do some models have a `filter_code` post-generation step?**

Base models can in many cases repeat outputs, which breaks the benchmark scores. Instruct models don't have this problem, so you won't see this step for them; they tend to emit an end-of-sequence token.
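For illustration, such a step typically truncates a base model's completion at the first marker that starts a new top-level block. This is a sketch of the idea only; the stop markers below are assumptions, not necessarily the exact list this repo's `filter_code` uses:

```python
def filter_code(completion: str) -> str:
    # Base models often keep generating past the target function body;
    # cut the completion at the first marker that begins a new top-level
    # statement. These markers are illustrative, not the repo's exact list.
    for stop in ("\ndef ", "\nclass ", "\nif __name__", "\nprint("):
        idx = completion.find(stop)
        if idx != -1:
            completion = completion[:idx]
    return completion
```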

## Setup

Create a python environment:

```sh
python -m venv env && source env/bin/activate
```

Install dependencies:

```sh
pip install -r requirements.txt
```

Run the eval script:

```sh
# replace the script file name for the various models:
# eval_wizard.py
# eval_opencode.py
# eval_mpt.py
# eval_starcoder.py
# eval_replit.py
# eval_replit_glaive.py
# eval_replit_instruct.py

python eval_wizard.py
```
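Each eval script writes its completions to a `.jsonl` file under `results/`, one JSON object per line with `task_id` and `completion` keys, which is the format human-eval's scorer consumes. A minimal sketch of that output step (the helper name is illustrative, not from the repo):

```python
import json

def write_samples(path: str, samples: list) -> None:
    # One JSON object per line, e.g.
    # {"task_id": "HumanEval/0", "completion": "    return x + y"}
    with open(path, "w") as f:
        for sample in samples:
            f.write(json.dumps(sample) + "\n")
```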

Process the jsonl file to extract code samples from the model completions.

Note: only wizard & opencoder require this step; they return markdown output with the code embedded in it.

```sh
# replace the args for the various models:
# --path results/wizard --out_path results/wizard/eval.jsonl
# --path results/opencode --out_path results/opencode/eval.jsonl

python process_eval.py --path results/wizard --out_path results/wizard/processed.jsonl --add_prompt
```
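A sketch of the kind of extraction `process_eval.py` performs on markdown completions, pulling the code out of the first fenced block. The regex here is an assumption for illustration, not the repo's exact logic, and it ignores the prompt re-attachment done by `--add_prompt`:

```python
import re

FENCE = "`" * 3  # build the fence string to keep this example readable

def extract_code(markdown_text: str) -> str:
    # Grab the body of the first fenced code block (optionally tagged
    # "python"); fall back to the raw text if no fence is found.
    pattern = re.compile(FENCE + r"(?:python)?\n(.*?)" + FENCE, re.DOTALL)
    match = pattern.search(markdown_text)
    return match.group(1) if match else markdown_text
```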

Then get the results:

```sh
# replace the arg for the various models:
# results/wizard/processed.jsonl
# results/starcoder/eval.jsonl
# results/mpt/eval.jsonl
# results/opencode/processed.jsonl
# results/replit_instruct/eval.jsonl
# results/replit_glaive/eval.jsonl
# results/replit/eval.jsonl

evaluate_functional_correctness results/wizard/processed.jsonl
```