NL2Type: Inferring JavaScript Function Types from Natural Language Information
Using Docker for replication
Install docker
Install Docker for your environment. We tested with Docker version 18.09.1, build 4c52b90, on Ubuntu 18.04. You may also follow the official installation instructions here: https://docs.docker.com/install
Download data
Download the required data from this link, unzip it, and place it in your home directory. We refer to this unzipped folder as the data folder throughout this documentation. You may also put the data folder anywhere else, but when running the Docker containers you need to provide its absolute path. The following instructions assume that the unzipped data folder is placed in the home directory; please adapt the absolute path of the data folder to your environment and USER_NAME.
The template commands for running the containers are:
docker pull jibesh/nl2type:TAG
docker run -v PATH_TO_DATA_DIR:/data jibesh/nl2type:TAG
Note: you might need to prepend sudo to each of the following commands.
Download the containers
- To replicate the results from Table 1 of our paper, run the following commands:
# Pull the container
docker pull jibesh/nl2type:table1
# Execute the script
docker run -v /home/USER_NAME/data:/data jibesh/nl2type:table1
The results printed on the terminal correspond to the first row of Table 1 (Approach: NL2Type) of our paper. The final output of this command is written to data/paper/results/results.csv
- To replicate the results for the model trained only on names and not on comments, run the following commands:
docker pull jibesh/nl2type:table1_no_comments
docker run -v /home/USER_NAME/data:/data jibesh/nl2type:table1_no_comments
The results printed on the terminal correspond to the second row of Table 1 (Approach: NL2Type w/o comments) of our paper. The final output of this command is written to data/paper/results/predictions_paper_no_comments.csv
- To use the model to make predictions on the same test data as used in the paper, run the following commands. The test data is located at data/paper/raw_csv/test.csv
docker pull jibesh/nl2type:from_vecs
docker run -v /home/USER_NAME/data:/data jibesh/nl2type:from_vecs
The final output of this command is written to data/paper/results_new_enriched.csv. The script first vectorizes the data in the test file and then uses the same model as in the paper to make predictions. This corresponds to steps 3, 4, and 5 in Figure 2 of the paper.
- Finally, if you want to use our tool on your own set of JavaScript files, you may run the following commands. Please ensure that the JavaScript files contain some JSDoc annotations, since these are used to extract the natural language information needed for training and testing the model. Additionally, there need to be enough JavaScript files to learn from; providing only a few examples might not give the desired output. The JavaScript files must be placed in the data folder at data/demo/files. To use the JavaScript files used by us, you may download some of them from this link. Then run the following commands.
docker pull jibesh/nl2type:demo
docker run -v /home/USER_NAME/data:/data jibesh/nl2type:demo
The predictions for the given files are written to data/demo/results/results.csv
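As an illustration of the kind of input the extractor expects, here is a hypothetical JSDoc-annotated function; the names, comments, and types below are made up for this example and do not come from our dataset:

```javascript
/**
 * Prepends a prefix to each element of a list of names.
 * @param {string} prefix - The string to prepend to every name.
 * @param {Array} names - The list of names to transform.
 * @returns {Array} The list of prefixed names.
 */
function prefixNames(prefix, names) {
  return names.map(function (name) {
    return prefix + name;
  });
}
```

The natural language in the comment, the parameter names, and the annotated types (`string`, `Array`) are the kind of information the model learns from.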
If you do not want to use Docker, please use the following setup and instructions to run our tool.
Requirements and assumptions
- python 2.7
- pip2 (Tested using version 9.0.1 for python 2)
- Tested on Ubuntu 18.04.1 LTS
Download data
- Download some results and the required data from this link, place it in the current directory, and unzip it.
- To download the files used for training and testing the model used in the paper, use this link. The files used for training the model are in "training_files" and the files used for testing are in "testing_files".
Setup steps
- Install the dependencies using the following command
pip2 install --upgrade -r requirements.txt
- Install Node.js and also download a required Node.js package using the following command
sudo apt-get install -y nodejs
npm install -g jsdoc
Results and replication
- The model used in the paper is in models/model.h5
- The main results file from the paper is data/paper/results/results.csv. The following command calculates the results in Table 1 of the paper from this results file:
python2 scripts/runner.py --config scripts/configs/stats_paper.json
The results for the model trained only on names and not on comments are in data/paper/results/predictions_paper_no_comments.csv. The following command calculates the corresponding (w/o comments) results in Table 1 of the paper from this file:
python2 scripts/runner.py --config scripts/configs/stats_paper_no_comments.json
In the results file, the column "original" contains the actual type of the data point, and the column "top_5_prediction" contains the top 5 most likely predictions, separated by the token "%", as explained in the paper. The column "datapoint_type" indicates whether the point is a function or a parameter, with the value 0 for a function and 1 for a parameter.
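For example, a row of the results file can be checked for a top-5 hit by splitting the prediction column on "%". The following sketch uses the column names described above, but the row values are invented for illustration:

```javascript
// Returns true if the true type appears among the top-5 predictions.
function isTop5Hit(row) {
  var predictions = row.top_5_prediction.split('%');
  return predictions.indexOf(row.original) !== -1;
}

// An invented example row (not taken from the real results file).
var sampleRow = {
  original: 'string',
  top_5_prediction: 'number%string%boolean%object%function',
  datapoint_type: 1  // 1 = parameter, 0 = function
};
```

A top-1 check would instead compare row.original against only the first element of the split.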
- To use the model to make predictions using the same test data as used in the paper, run the following command:
python2 scripts/runner.py --config scripts/configs/from_vecs.json
This makes predictions for the data points in the file data/paper/raw_csv/test.csv; the generated results file is data/paper/results_new_enriched.csv
- The results from the inconsistency analysis are in data/paper/results/inconsistency_analysis_paper.csv
Demo
- To make predictions on JavaScript files of your own choosing, using the model from the paper, place some JavaScript files in data/demo/files and then run the following command:
python2 scripts/runner.py --config scripts/configs/demo.json
Please ensure that the JavaScript files contain some JSDoc annotations, since these are used to extract the natural language information the model relies on. The predictions for the files are written to data/demo/results/results.csv