
HyDiff: Hybrid Differential Software Analysis

This repository provides the tool and the evaluation subjects for the paper HyDiff: Hybrid Differential Software Analysis, accepted at the technical track of ICSE'2020. A pre-print of the paper is available here.

Authors: Yannic Noller, Corina S. Pasareanu, Marcel Böhme, Youcheng Sun, Hoang Lam Nguyen, and Lars Grunske.

The repository includes:

A pre-built version of HyDiff is also available as a Docker image:

```
docker pull yannicnoller/hydiff
docker run -it --rm yannicnoller/hydiff
```

Tool

HyDiff's technical framework is built on top of Badger, DifFuzz, and Symbolic PathFinder (SPF). We provide a complete snapshot of all tools and our extensions.

Requirements

Folder Structure

The folder tool contains two subfolders, fuzzing and symbolicexecution, representing the two components of HyDiff.

fuzzing

symbolicexecution

How to install the tool and run our evaluation

Be aware that the instructions have been tested on Unix systems only.

  1. First you need to build the tool and the subjects. We provide the script setup.sh to build everything in one step. Note: the script may overwrite an existing site.properties file, which is required for JPF/SPF.

  2. Test the installation: the best way to test the installation is to run the evaluation of our example program (cf. Listing 1 in our paper) with the script run_example.sh. As provided, it runs each analysis once: differential fuzzing only, differential symbolic execution only, and the hybrid analysis. The values presented in Section 2.2 of our paper are averaged over 30 runs; to reproduce them, adapt the script to perform 30 runs each, but for a first test you can leave it as it is. The script should produce three folders:

    • experiments/subjects/example/fuzzer-out-1: results for differential fuzzing
    • experiments/subjects/example/symexe-out-1: results for differential symbolic execution
    • experiments/subjects/example/hydiff-out-1: results for HyDiff (hybrid combination)

    It will also produce three csv files with the summarized statistics for each experiment:
    • experiments/subjects/example/fuzzer-out-results-n=1-t=600-s=30.csv
    • experiments/subjects/example/symexe-out-results-n=1-t=600-s=30.csv
    • experiments/subjects/example/hydiff-out-results-n=1-t=600-s=30-d=0.csv
  3. After finishing the build process and testing the installation, you can use the provided run scripts (experiments/scripts) to replay HyDiff's evaluation or to perform your own differential analysis. HyDiff's evaluation covers three types of differential analysis, and for each of them you will find a separate run script:
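Step 1 notes that setup.sh may overwrite an existing site.properties. A minimal sketch of preserving it before the build might look as follows; the ~/.jpf location is an assumption about your JPF install, and the sketch below demonstrates the idea on a temporary directory:

```shell
# Sketch: setup.sh may overwrite an existing site.properties, so keep a copy.
# Demonstrated on a temporary directory here; point JPF_CONF at your real file
# (commonly ~/.jpf/site.properties -- the exact location depends on your JPF setup).
tmp=$(mktemp -d)
JPF_CONF="$tmp/site.properties"
echo "jpf-core = /path/to/jpf-core" > "$JPF_CONF"   # stand-in for an existing config
[ -f "$JPF_CONF" ] && cp "$JPF_CONF" "$JPF_CONF.bak"
# ./setup.sh                                        # the build would run here
[ -f "$JPF_CONF.bak" ] && backup_ok=yes
echo "backup created: $backup_ok"
rm -rf "$tmp"
```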
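After run_example.sh from step 2 finishes, a quick sanity check that the three expected output folders exist could look like this (directory names are taken from the list in step 2; run it from the repository root):

```shell
# Sketch: verify that run_example.sh produced the three expected output folders.
SUBJECT_DIR="experiments/subjects/example"
for analysis in fuzzer symexe hydiff; do
    dir="$SUBJECT_DIR/$analysis-out-1"
    if [ -d "$dir" ]; then
        echo "found:   $dir"
    else
        echo "missing: $dir"
    fi
done
```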

At the beginning of each run script you can define the experiment parameters:

Each run script first executes differential fuzzing, then differential symbolic execution, and finally the hybrid analysis. Please adapt our scripts to perform your own analysis.

For each subject, analysis_type, and experiment repetition i, the scripts produce folders like experiments/subjects/&lt;subject&gt;/&lt;analysis_type&gt;-out-&lt;i&gt; and summarize the experiments in csv files like experiments/subjects/&lt;subject&gt;/&lt;analysis_type&gt;-out-results-n=&lt;N&gt;-t=&lt;T&gt;-s=&lt;S&gt;-d=&lt;D&gt;.csv.
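For illustration, the summary csv path for one configuration can be assembled like this (the values n=1, t=600, s=30, d=0 mirror the example filenames above; the parameters themselves are defined at the top of each run script):

```shell
# Sketch: how the summary csv path is composed from the experiment parameters.
subject="example"; analysis="hydiff"
N=1; T=600; S=30; D=0   # as defined at the top of the run scripts
csv="experiments/subjects/$subject/$analysis-out-results-n=$N-t=$T-s=$S-d=$D.csv"
echo "$csv"
```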

Complete Evaluation Reproduction

To reproduce our evaluation completely, you need to run the three run scripts mentioned above; they include the generation of all statistics. Be aware that the total serial runtime of all analyses exceeds 53 days because of the long timeouts and the number of repetitions. It may therefore be worthwhile to run only specific subjects, distribute the analyses across several machines, shorten the timeout, or reduce the number of repetitions. Feel free to adjust the scripts or reuse them for your own purposes.
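As a back-of-the-envelope check of the 53-day figure, one can multiply the per-run timeout by the repetitions and experiment count. The subject and analysis counts below are illustrative assumptions, not the paper's exact numbers:

```shell
# Sketch: rough serial runtime estimate for the full evaluation.
T=600        # timeout per run in seconds (as in the example csv names)
S=30         # repetitions per experiment
SUBJECTS=85  # hypothetical number of subject configurations
ANALYSES=3   # differential fuzzing, differential symbolic execution, hybrid
total_s=$((T * S * SUBJECTS * ANALYSES))
echo "serial runtime: $((total_s / 86400)) days"
```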

Statistics

As mentioned earlier, the statistics are generated automatically by our run scripts, which execute the Python scripts from the scripts folder to aggregate the individual experiment runs. They generate csv files with the averaged result values.
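The aggregation step can be pictured with a toy example. The file layout and the odiff column below are made-up assumptions for illustration; the actual format is defined by the Python scripts in the scripts folder:

```shell
# Sketch: average one column of per-run csv files, as the statistics scripts do
# conceptually. The csv layout here is invented for illustration.
tmp=$(mktemp -d)
printf 'time,odiff\n600,3\n' > "$tmp/run-1.csv"
printf 'time,odiff\n600,5\n' > "$tmp/run-2.csv"
# Skip each file's header line (FNR > 1) and average the second column.
avg=$(awk -F, 'FNR > 1 { sum += $2; n++ } END { printf "%.1f", sum / n }' "$tmp"/run-*.csv)
echo "average odiff: $avg"
rm -rf "$tmp"
```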

For the regression analysis and the DNN analysis we use the scripts:

For the side-channel analysis we use the scripts:

All csv files for our experiments are included in experiments/results.

Feel free to adapt these evaluation scripts for your own purpose.

Maintainers

License

This project is licensed under the MIT License - see the LICENSE file for details.