
<p align="center"> <img src="images/logo.jpg" width="50%"> </p>

CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning <a name="corl"></a>

This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (accepted to NeurIPS 2022). Do check out our blog and poster.

Authors: Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, Steven C.H. Hoi

<p align="center"> <img src="images/ezgif-1-12f629284e.gif" width="100%" /> </p>

Contents:

- CodeRL Overview
- Installation
- Datasets
- Models
- Processes
- Example Generated Programs
- Citation
- License

CodeRL Overview

<p align="center"> <img src="images/coderl_overview.png" width="100%" /> <br> <b>An example program synthesis task (Right)</b>: Each task includes a problem specification in natural language, which often contains example input and output pairs. The expected output is a program that is checked for functional correctness against some unit tests. <b>A high-level overview of our CodeRL framework for program synthesis (Left)</b>: Our CodeRL framework treats pretrained language model (LM) as a stochastic policy, token predictions as actions, and rewards can be estimated based on unit test results of output programs </p> <!--- <p align="center"> <img src="images/coderl_training.png" width="100%" /> <b>Overview of our actor-critic framework to optimize pretrained LMs for program synthesis</b>: We treat the LM as an actor network and sample synthetic samples from this actor. Another neural network is trained as a critic model to evaluate these synthetic samples based on their probabilities of passing unit tests. The returns are estimated based on critic scores and finally factored into the RL objective to finetune the actor LM network using synthetic samples. </p> <p align="center"> <img src="images/coderl_inference.png" width="100%" /> <b>Overview of our Critic Sampling (CS) approach for program synthesis during inference</b>: programs are refined and repaired based on their results on example unit tests of the corresponding problems. Program candidates are sampled by their critic-predicted scores at the token or sequence level. Dotted lines indicate optional processes that apply during program refining or repairing. </p> -->

Installation

The code requires the dependencies specified in requirements.txt. Install the relevant libraries individually, or run:

pip install -r requirements.txt

Install the transformers library from the bundled source code (our copy is developed from the original version 4.16.1):

cd transformers
pip install -e .
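
As an optional sanity check, you can verify that the editable transformers install is the one being picked up (a minimal sketch; the exact version string depends on the bundled source):

```python
# Optional sanity check: confirm the editable transformers install is importable.
import transformers

print(transformers.__version__)  # should correspond to the bundled 4.16.1-based source
```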

Datasets

For pretraining, apart from CodeSearchNet (CSN), we use the Python GitHub Code Dataset (GCPY). We compiled public, non-personal data from GitHub consisting of permissively licensed Python code (e.g. "mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc"). Please see the paper for more details on pretraining data preprocessing and pretraining.
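
For illustration only, below is a minimal sketch of this kind of license-based filtering. The record structure and field names (e.g. `license`) are assumptions, not the actual preprocessing pipeline.

```python
# Hypothetical sketch of license-based filtering for GCPY-style pretraining data.
# Field names such as "license" are assumptions, not the actual pipeline schema.
PERMISSIVE_LICENSES = {
    "mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc",
}

def keep_record(record: dict) -> bool:
    """Keep only records whose repository license is in the permissive allow-list."""
    return record.get("license", "").lower() in PERMISSIVE_LICENSES

# Example usage: filtered = [r for r in raw_records if keep_record(r)]
```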

After pretraining, we finetune/evaluate models on the following major program synthesis benchmarks: APPS and MBPP.

On both benchmarks, we follow the same data preprocessing and input/output sequence construction as the original benchmark papers.

Download and unzip all files into the data folder.

Example Unit Tests

In addition to the original hidden unit tests on APPS, we also utilize the example tests that are often embedded in problem descriptions. After downloading and unzipping APPS, you can run the notebook extract_example_test.ipynb to extract and save example unit tests of APPS test samples into corresponding sample folder e.g. data/APPS/test/0000/. We release the example unit tests that we already extracted using this notebook in the folder data/APPS_test_example_tests/. The average number of example unit tests per sample is 1.9764.
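
A small sketch of how the extracted example tests might be inspected; the per-sample file name and JSON structure below are assumptions (check extract_example_test.ipynb for the exact format):

```python
import glob
import json

# Assumed layout: one JSON file of example inputs/outputs per problem folder.
# The file name "example_tests.json" and the "inputs" key are hypothetical.
paths = glob.glob("data/APPS_test_example_tests/*/example_tests.json")
counts = []
for path in paths:
    with open(path) as f:
        tests = json.load(f)
    counts.append(len(tests.get("inputs", [])))

if counts:
    print(f"avg example tests per sample: {sum(counts) / len(counts):.4f}")
```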

Models

We employ CodeT5, a family of encoder-decoder language models for code, as the foundation model in our work.

We pretrained CodeT5 with a larger dataset and improved learning objectives. We release two large-sized CodeT5 checkpoints at Hugging Face: Salesforce/codet5-large and Salesforce/codet5-large-ntp-py.
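
For example, the released checkpoints can be loaded with the standard transformers API (a minimal sketch; the prompt and generation settings are illustrative only):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load one of the released pretrained checkpoints from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large-ntp-py")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large-ntp-py")

prompt = "def fibonacci(n):"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```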

For finetuning on downstream code generation tasks on APPS, we adopted critic models for RL training. We released the following critic model checkpoints (on Google Cloud Storage):

We released the following finetuned code generation model checkpoints (on Google Cloud Storage):

Download all files into the models folder.

Processes

Generating Programs

We created scripts/generate.sh to generate programs on the APPS benchmark. You can run this script directly after configuring the following parameters:

| Parameter | Description | Example Value |
|---|---|---|
| model_path | Path to a trained CodeT5-style model | models/codet5_finetuned_codeRL |
| tokenizer_path | Path to the saved tokenizer for CodeT5 (or path to cache the tokenizer) | models/codet5_tokenizer/ |
| test_path | Path to the original test samples | data/APPS/test/ |
| start | Start index of test samples to be generated | 0 |
| end | End index of test samples to be generated | 5000 |
| num_seqs | Total number of output programs to be generated (for sampling generation) | 1000 |
| num_seqs_per_iter | Number of output programs per generation round; depending on GPU memory, generation can run over multiple rounds | 50 |
| temp | Temperature for sampling generation | 0.6 |
| output_path | Path to save generated programs | outputs/codes/ |

Other parameters are defined in the file utils/generate_configs.py.
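
Under the hood, sampling-based generation corresponds roughly to the following transformers call (a simplified sketch, not the actual script: it reuses model, tokenizer, and prompt objects such as those in the checkpoint-loading example in the Models section, and the variable names mirror the parameters above):

```python
# Simplified sketch: sample num_seqs programs in rounds of num_seqs_per_iter.
num_seqs, num_seqs_per_iter, temp = 1000, 50, 0.6

programs = []
for _ in range(num_seqs // num_seqs_per_iter):
    outputs = model.generate(
        **tokenizer(prompt, return_tensors="pt"),
        do_sample=True,
        temperature=temp,
        num_return_sequences=num_seqs_per_iter,
        max_length=512,
    )
    programs.extend(tokenizer.decode(o, skip_special_tokens=True) for o in outputs)
```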

Running the generation script will output generated programs. For each test sample, the programs are saved into a JSON file with the data fields code (list of output programs) and prompt (the constructed input sequence fed to the LM).
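
For instance, the saved outputs can be inspected as follows (the per-problem file name is illustrative, and the flat code/prompt layout is assumed from the description above):

```python
import json

# Inspect the generated programs for one test problem (file name is illustrative).
with open("outputs/codes/0000.json") as f:
    result = json.load(f)

print(result["prompt"][:200])  # constructed input sequence fed to the LM
print(len(result["code"]))     # number of generated programs for this problem
print(result["code"][0])       # first generated program
```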

Running Unit Tests

Once the programs are generated, they are evaluated against the corresponding unseen unit tests in each problem.

To execute the unit tests and obtain test outcomes, we adapt the code from the official implementation of the APPS benchmark.

We created scripts/run_unit_tests.sh to run unit tests on programs generated for the APPS benchmark. You can run this script directly after configuring the following parameters:

| Parameter | Description | Example Value |
|---|---|---|
| code_path | Path to the generated programs to be evaluated | outputs/codes/ |
| output_path | Path to save unit test results | outputs/test_results/ |
| test_path | Path to the original test samples | data/APPS/test/ |
| example_tests | Whether to evaluate the programs on example unit tests (for filtering and refining programs) or hidden unit tests (for final evaluation) | 0: use hidden unit tests; 1: use example unit tests |
| start | Start index of test samples to be evaluated | 0 |
| end | End index of test samples to be evaluated | 5000 |
| threads | Number of threads used to run unit tests on multiple test samples in parallel (depending on available compute) to speed up execution | 30 |

Running the script will output test results for each program. For each test sample, the results are saved into a pickle file with the data fields results (list of test outcomes, each one of -2 = compile error, -1 = runtime error, False = failed test case, True = passed test case), errors (the actual error trace, with details such as error type and line numbers), and sols (the corresponding programs being evaluated).
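
A small sketch for summarizing these outcomes (the file name is illustrative, and it assumes results holds one list of per-test-case outcomes per program):

```python
import pickle

# Summarize unit-test outcomes for one problem (file name is illustrative).
with open("outputs/test_results/0000.pkl", "rb") as f:
    data = pickle.load(f)

for program_results in data["results"]:
    # Assumed: each entry is a list with one outcome per test case:
    # -2 = compile error, -1 = runtime error, False = failed, True = passed.
    passed_all = all(r is True for r in program_results)
    print("passed all tests" if passed_all else "failed")
```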

Compared to the original implementation from APPS, we adopt one optimization: we exit the unit-testing loop as soon as a program fails a test case. This speeds up the testing process while leaving the final passing-rate measures unaffected. Refer to the run_test function in utils/testing_utils.py for more details.
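
Conceptually, the early-exit behaviour looks like this (a sketch only, not the actual run_test implementation; run_single_test is a placeholder for the APPS-style per-test runner):

```python
def run_tests_with_early_exit(program, test_cases, run_single_test):
    """Stop evaluating a program as soon as one test case does not pass.

    `run_single_test` is a placeholder for an APPS-style test runner that
    returns True / False / -1 / -2 for a single test case.
    """
    outcomes = []
    for case in test_cases:
        outcome = run_single_test(program, case)
        outcomes.append(outcome)
        if outcome is not True:  # compile error, runtime error, or failed test
            break                # early exit: skip the remaining test cases
    return outcomes
```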

Evaluating Programs

To compute the pass@k metrics, rather than using the APPS evaluation metrics, we follow the official implementation of the HumanEval benchmark, which provides an unbiased estimate of pass@k normalized over the number of possible k-program subsets.
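
For reference, the unbiased pass@k estimator from the HumanEval paper can be computed as follows (n = total generated programs for a problem, c = number of programs that pass all hidden tests):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper.

    n: total number of generated programs for a problem
    c: number of programs that pass all unit tests
    k: budget of programs considered
    """
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 1000 samples, 35 of them correct, pass@5
print(pass_at_k(1000, 35, 5))
```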

Training Critic

We can train a critic model as a classifier that predicts the test outcomes of generated samples. For each training sample, we can follow the prior processes (generating programs and running unit tests) to obtain synthetic samples and their annotations of unit test outcomes. On average, we generate 20 programs per training sample (we provide some example generated programs in data/APPS/train/).

Once the programs are tested, we can use their test outcomes as annotations to train a critic model initialized from an LM pretrained on source code (we used a CodeT5-based model in this case).
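
A minimal sketch of mapping per-program unit-test outcomes to the four critic labels (the integer label ids and the mapping logic are illustrative; check the training scripts for the actual encoding):

```python
# Map per-program unit-test outcomes to one of the 4 critic classes.
# Label ids are illustrative, not necessarily those used in the training scripts.
COMPILE_ERROR, RUNTIME_ERROR, FAILED_TESTS, PASSED_TESTS = 0, 1, 2, 3

def outcome_to_label(test_outcomes):
    """test_outcomes: list of -2 / -1 / False / True, one entry per test case."""
    if any(o == -2 for o in test_outcomes):
        return COMPILE_ERROR
    if any(o == -1 for o in test_outcomes):
        return RUNTIME_ERROR
    if all(o is True for o in test_outcomes):
        return PASSED_TESTS
    return FAILED_TESTS
```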

We created scripts/train_critic.sh and scripts/train_critic_deepspeed.sh to train a critic using generated programs. You can run these scripts directly after configuring the following parameters:

| Parameter | Description | Example Value |
|---|---|---|
| batch-size-per-replica | Number of training samples per GPU device | 8 |
| grad-acc-steps | Gradient accumulation steps | 1 |
| epochs | Number of training epochs | 10 |
| lr | Learning rate | 2e-5 |
| save-freq | Save model checkpoints after this number of training steps | 1000 |
| log-freq | Log model training losses after this number of training steps | 10 |
| save_total_limit | Total number of checkpoints to keep (only the latest ones are kept) | 5 |
| fp16 | Enable this to train the model in 16-bit mode and reduce memory usage | N/A |
| deepspeed | If using DeepSpeed, set this parameter to the DeepSpeed configuration file | configs/deepspeed_configs.json |
| db | Enable this to train in debugging mode, i.e. with a small dummy data split and only 1 data worker | N/A |

Other parameters are defined in the file utils/train_configs.py.

Running the script will train a critic model as a classifier that receives a problem description and a generated program as input and predicts one of 4 test outcomes: compile error, runtime error, failed tests, or passed tests. The model checkpoints are saved in a folder under exps/.

Generating Critic Scores

We created scripts/generate_critic_scores.sh to generate critic scores for synthetic programs. We use the same parameters as in the program generation process, with the following additional parameters:

| Parameter | Description | Example Value |
|---|---|---|
| critic_scores | Enable this to run inference on critic models and obtain critic scores | N/A |
| gt_solutions | Enable this to run inference on ground-truth programs; otherwise, synthetic programs are used by default | N/A |
| binary_prediction | Enable this to predict in binary classification, i.e. passed tests or failed tests only | N/A |

Other parameters are defined in the file utils/generate_configs.py.

Running the generation script will output predictions of the critic model. For each data sample, the prediction is saved into a pkl (pickle) file with the data fields code (list of programs), prompt (constructed input sequence to the critic model), gt_error_type (ground-truth test outcomes), pred_error_type (test outcomes predicted by the critic), and error_hidden_states (hidden states returned by the critic).
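
These predictions can be inspected like so (the output directory and file name are illustrative):

```python
import pickle

# Inspect critic predictions for one data sample (path and file name are illustrative).
with open("outputs/codes/0000.pkl", "rb") as f:
    pred = pickle.load(f)

print(pred["gt_error_type"])    # ground-truth test outcomes
print(pred["pred_error_type"])  # test outcomes predicted by the critic
# pred["error_hidden_states"] holds the critic hidden states used downstream.
```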

Finetuning with Ground-truth Programs

We can finetune any pretrained language model as a program synthesis model that generates code from a problem description in natural language. In our approach, this finetuning stage is a warm-up stage using the ground-truth annotations (from APPS), before a further finetuning stage on synthetic/generated programs.

We created scripts/train_actor.sh and scripts/train_actor_deepspeed.sh which include the parameters as defined above in the critic training process.

Running the script will finetune a pretrained CodeT5-large model that receives a problem description as input and returns a corresponding solution program in Python. The model checkpoints are saved in a folder under exps/.
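
At its core, this warm-up stage is standard sequence-to-sequence finetuning with a cross-entropy loss on ground-truth programs, roughly as sketched below (the prompt format and the single example pair are illustrative, not the actual training loop):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Minimal sketch of the warm-up objective: teacher-forced cross-entropy on
# (problem description, ground-truth program) pairs.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large")

problem = "Given a list of integers on standard input, print their sum."  # illustrative
solution = "nums = list(map(int, input().split()))\nprint(sum(nums))"      # illustrative

inputs = tokenizer(problem, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(solution, return_tensors="pt", truncation=True, max_length=512).input_ids

loss = model(**inputs, labels=labels).loss  # cross-entropy over program tokens
loss.backward()
```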

Finetuning with Generated Programs

We created scripts/train_actor_rl.sh and scripts/train_actor_rl_deepspeed.sh to train pretrained LMs with synthetically generated programs. We use the parameters as defined above in the critic training process, with the following additional parameters:

| Parameter | Description | Example Value |
|---|---|---|
| model_path | Path to a finetuned model checkpoint, e.g. from warm-up training | models/codet5_finetuned_codeRL |
| relative_returns | Enable this to compute relative return estimates against a baseline, rather than absolute return estimates, in the RL loss | N/A |

Other parameters are defined in the file utils/train_configs.py.

Running the script will load a finetuned CodeT5-large model and continue to train it with both generated programs and ground-truth programs in alternating training steps. The model checkpoints are saved in a folder under exps/.
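
Conceptually, the relative_returns option replaces the absolute reward with a reward relative to a baseline program. The following is a simplified policy-gradient sketch, not the exact CodeRL objective (which also factors in per-token critic scores); all names here are illustrative:

```python
import torch

def rl_loss(token_logprobs: torch.Tensor, reward: float, baseline_reward: float,
            relative_returns: bool = True) -> torch.Tensor:
    """Simplified policy-gradient loss for one sampled program.

    token_logprobs: log-probabilities of the sampled program's tokens under the actor LM.
    reward / baseline_reward: unit-test based rewards of the sampled and baseline programs.
    """
    advantage = reward - baseline_reward if relative_returns else reward
    # Scale the summed log-likelihood of the sampled program by the (relative) return.
    return -(advantage * token_logprobs).sum()
```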

Generating Programs with Critic Sampling

We will release the implementation details of our critic sampling procedure.

Example Generated Programs

<p align="center"> <img src="images/example_code.png" width="100%" /> The problem is from the APPS benchmark, and the solution programs are generated by CodeT5 and CodeRL. </p>

Citation

If you find the paper or the source code useful to your projects, please cite the following bibtex:

<pre>
@inproceedings{
  le2022coderl,
  title={Code{RL}: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning},
  author={Hung Le and Yue Wang and Akhilesh Deepak Gotmare and Silvio Savarese and Steven Hoi},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=WaGvb7OzySA}
}
</pre>

License

The code is released under BSD 3-Clause - see LICENSE.txt for details.

This code is developed from other open-source projects, including APPS, HumanEval, and transformers. We thank the original contributors of these works for open-sourcing their valuable source code.