PyCodeGPT

A pre-trained GPT model for Python code completion and generation

What is it?

PyCodeGPT is an efficient and effective GPT-Neo-based model for the Python code generation task, similar to OpenAI Codex, GitHub Copilot, CodeParrot, and AlphaCode.

Training Data

Due to the small size of publicly released datasets, we collected data from GitHub from scratch. We first crawled 1.2M Python-related repositories hosted on GitHub, then used these repository URLs to download the full contents of each repository. This yielded 60M raw Python files under 1MB, with a total size of 330GB. Finally, we carefully designed various data-cleaning strategies to obtain about 96GB of data for training. Please refer to the following table for details.

Model         Repositories    Size and number of files after filtering
CodeParrot    0.56M           12GB (compressed), 5.4M files
Codex         54M             159GB
PyCodeGPT     1.2M            96GB, 13M files
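
The cleaning strategies themselves are not spelled out here; as a minimal, hedged sketch of the one filter stated above (keeping raw .py files under 1MB), with every other cleaning step omitted:

    from pathlib import Path

    MAX_FILE_SIZE = 1024 * 1024  # 1MB cutoff on raw Python files, as described above

    def iter_python_files(repo_root: str):
        """Yield .py files under the size cutoff. The remaining cleaning
        strategies (deduplication, quality filtering, etc.) are assumptions
        and are not reproduced here."""
        for path in Path(repo_root).rglob("*.py"):
            if path.is_file() and path.stat().st_size < MAX_FILE_SIZE:
                yield path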

Pretrained models

We aim to train medium-sized pre-trained models (110M parameters) based on GPT-Neo:

PyCodeGPT-110M is available on HuggingFace.
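
As a minimal sketch, the checkpoint can be loaded with the HuggingFace transformers library; the model id below is a placeholder, so substitute the actual id from the model page or a local checkpoint directory:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder id -- replace with the actual HuggingFace id or a local path.
    checkpoint = "PyCodeGPT-110M"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    prompt = "def quick_sort(arr):\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.2,
        top_p=0.95,
        max_new_tokens=100,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))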

Evaluation

  1. Install requirements (Python 3.7)
$ pip install -r requirements.txt
  2. Install HumanEval
$ git clone https://github.com/openai/human-eval
$ pip install -e human-eval
  3. Run eval_human_eval.py to generate programs (a sketch of the full generation loop follows these steps)

    • Arguments

      • model_name_or_path : Path to the model checkpoint to be evaluated.
      • output_dir : Path to save the generated programs.
      • num_completions : The number of programs to generate.
      • temperature : Temperature for sampling.
      • top_p : p value for nucleus sampling.
      • max_new_tokens : Maximum number of generated tokens.
    • Example usage

      $ python eval_human_eval.py \
      	--model_name_or_path PyCodeGPT-110M/ \
      	--output_dir results/ \
      	--num_completions 100 \
      	--temperature 0.2 \
      	--top_p 0.95 \
      	--max_new_tokens 100 \
      	--gpu_device 0
      
  4. Evaluate functional correctness

    $ evaluate_functional_correctness <samples_path>
    # Example
    $ evaluate_functional_correctness results/human_eval.t0.2.p0.95.l100.n100.samples.jsonl
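
Putting steps 3 and 4 together, here is a hedged sketch of the generation loop (the actual eval_human_eval.py may differ): sample num_completions programs per HumanEval task and write a samples .jsonl file that evaluate_functional_correctness accepts. read_problems and write_jsonl are real helpers from the human-eval package; the checkpoint path is a placeholder, as in the loading example above.

    from human_eval.data import read_problems, write_jsonl
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "PyCodeGPT-110M/"  # placeholder path, as above
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    def complete(prompt: str) -> str:
        """Sample one completion; return only the newly generated tokens."""
        inputs = tokenizer(prompt, return_tensors="pt")
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.2,
            top_p=0.95,
            max_new_tokens=100,
            pad_token_id=tokenizer.eos_token_id,
        )
        return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)

    num_completions = 100
    problems = read_problems()
    samples = [
        dict(task_id=task_id, completion=complete(problems[task_id]["prompt"]))
        for task_id in problems
        for _ in range(num_completions)
    ]
    write_jsonl("results/samples.jsonl", samples)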
    

Here's our evaluation result on the HumanEval dataset:

Note: our model achieves accuracy comparable to Codex models of similar size.

Model                                        Pass@1    Pass@10    Pass@100
PyCodeGPT-110M                               8.32%     13.53%     18.3%
GPT-Neo 125M                                 0.75%     1.88%      2.97%
GPT-Neo 1.3B                                 4.97%     7.47%      16.3%
GPT-Neo 2.7B                                 6.41%     11.27%     21.37%
GPT-J 6B                                     11.62%    15.74%     27.74%
TabNine                                      2.58%     4.35%      7.59%
CodeParrot 110M                              3.80%     6.57%      12.78%
CodeParrot 1.5B                              3.58%     8.03%      14.96%
Codex 12M                                    2.00%     3.62%      8.58%
Codex 25M                                    3.21%     7.1%       12.89%
Codex 42M                                    5.06%     8.8%       15.55%
Codex 85M                                    8.22%     12.81%     22.4%
Codex 300M                                   13.17%    20.37%     36.27%
Codex 679M                                   16.22%    25.7%      40.95%
Codex 2.5B                                   21.36%    35.42%     59.5%
Codex 12B                                    28.81%    46.81%     72.31%
Pretrained Decoder-only 13M (AlphaCode)      1.5%      3.6%       8.6%
Pretrained Decoder-only 29M (AlphaCode)      3.4%      5.8%       11.2%
Pretrained Decoder-only 55M (AlphaCode)      4.2%      8.2%       16.9%
Pretrained Decoder-only 89M (AlphaCode)      4.3%      12.2%      20.0%
Pretrained Decoder-only 302M (AlphaCode)     11.6%     18.8%      31.8%
Pretrained Decoder-only 685M (AlphaCode)     14.2%     24.4%      38.8%
Pretrained Decoder-only 1.1B (AlphaCode)     17.1%     28.2%      45.3%
PolyCoder 160M                               2.13%     3.35%      4.88%
PolyCoder 400M                               2.96%     5.29%      11.59%
PolyCoder 2.7B                               5.59%     9.84%      17.68%
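
The Pass@k numbers above are computed with the unbiased estimator from the Codex paper: with n samples per task of which c pass the tests, pass@k = 1 - C(n-c, k) / C(n, k), averaged over tasks. A minimal sketch of the per-task computation:

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k for one task (n samples drawn, c correct),
        computing 1 - C(n-c, k) / C(n, k) as a numerically stable product."""
        if n - c < k:
            return 1.0
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # Example: 100 samples with 9 passing gives pass@1 = 0.09.
    print(pass_at_k(100, 9, 1))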

Reference

If you use our models, please cite the following paper:

@inproceedings{CERT,
  title={{CERT}: Continual Pre-training on Sketches for Library-oriented Code Generation},
  author={Zan, Daoguang and Chen, Bei and Yang, Dejian and Lin, Zeqi and Kim, Minsu and Guan, Bei and Wang, Yongji and Chen, Weizhu and Lou, Jian-Guang},
  booktitle={The 2022 International Joint Conference on Artificial Intelligence},
  year={2022}
}