SieveFuzz

Code repository for the ACSAC '22 paper: One Fuzz Doesn’t Fit All: Optimizing Directed Fuzzing via Target-tailored Program State Restriction.

The instructions for the artifact used in the experiments presented in our paper are in artifact/.

Below, we provide instructions for pulling a Docker image with a standalone release, as well as instructions for running it on a sample target.

Run a sample target with SieveFuzz

docker pull prashast94/sievefuzz:standalone
docker run -it prashast94/sievefuzz:standalone /bin/bash
cd /root/sievefuzz/eval
# Get the target source code
./get_sample_target.sh
# Create the bitcode file
./prep_target.sh tidy bitcode 
# Create the SieveFuzz variant
./prep_target.sh tidy sievefuzz
# Run the beanstalk job deployment server
beanstalkd &
# Flush the job queue thrice to ensure that there are no stale jobs in the queue
python3 create_fuzz_script.py -c sanitycheck.config -n 15 --flush  
python3 create_fuzz_script.py -c sanitycheck.config -n 15 --flush  
python3 create_fuzz_script.py -c sanitycheck.config -n 15 --flush  

# Put the jobs in the queue
python3 create_fuzz_script.py -c sanitycheck.config -n 15 --put

# Get the jobs in the queue.
# WARNING: `-n` represents the number of cores that are available for
# fuzzing. We recommend setting this number to roughly 95% of the available
# cores; so if you have 16 cores, we recommend using 15.
# Do not set `-n` greater than the number of cores you have available.
python3 create_fuzz_script.py -c sanitycheck.config -n 15 --get  
cd /root/sievefuzz/eval
./sanitycheck_run.sh
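The flush/put/get workflow above is a standard producer-consumer job queue (SieveFuzz uses beanstalkd for this; the sketch below only illustrates the pattern with Python's stdlib queue and is not the project's actual implementation):

```python
import queue

def flush(q):
    """Discard any stale jobs left over from a previous run."""
    while not q.empty():
        q.get_nowait()

def put_jobs(q, campaigns):
    """Enqueue one job per fuzzing campaign."""
    for c in campaigns:
        q.put(c)

def get_jobs(q, cores):
    """Hand out at most one pending job per available core."""
    assigned = []
    while not q.empty() and len(assigned) < cores:
        assigned.append(q.get_nowait())
    return assigned

jobs = queue.Queue()
flush(jobs)                                   # analogous to --flush
put_jobs(jobs, ["campaign-0", "campaign-1"])  # analogous to --put
print(get_jobs(jobs, cores=15))               # analogous to --get
```

Flushing before putting jobs matters for the same reason as in the commands above: any jobs left in the queue from an earlier run would otherwise be handed to workers alongside the new ones.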

Setting up a new target to test with SieveFuzz

We provide a set of helper scripts to build the target with SieveFuzz instrumentation and to run the fuzzer together with the static-analysis module.

cd /root/sievefuzz/eval
# Create the bitcode file
./prep_target.sh newtarget bitcode 
# Create the SieveFuzz variant
./prep_target.sh newtarget sievefuzz
# XXX: Ensure the two numbers printed at the end of the above command are
# within a delta of 1. This sanity-checks that a unique numeric ID was
# assigned to each function during the instrumentation phase.
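The uniqueness check can also be scripted. A minimal sketch, assuming fn_indices.txt lists one function per line with its numeric ID as the last whitespace-separated field (this file format is an assumption, not taken from the SieveFuzz sources):

```python
from pathlib import Path

def check_fn_indices(path):
    # Assumed format: one function per line, numeric ID as the last field.
    lines = [l for l in Path(path).read_text().splitlines() if l.strip()]
    ids = [l.split()[-1] for l in lines]
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate function IDs found")
    return len(ids)
```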
 {
        "mode": "sievefuzz",
        "static": "/root/sievefuzz/third_party/SVF/Release-build/bin/svf-ex",
        "get_indirect": "true",
        "fn_indices": "/root/sievefuzz/benchmarks/out_newtarget/sievefuzz/fn_indices.txt", <- Point this to newtarget
        "bitcode": "/root/sievefuzz/benchmarks/out_newtarget/BITCODE/bin.bc", <- Point this to the location of the bitcode
        "tagdir": "/root/sievefuzz/results/tidy/sievefuzz", <- Location where the fuzzing campaign results are put
        "dump_stats": "true",
        "function": "prvTidyInsertedToken", <- Specify the target function inside the fuzz target
        "input": "/root/sievefuzz/eval/data/seeds/simple", <- The location of the initial seed to be used
        "target": "/root/sievefuzz/benchmarks/out_newtarget/sievefuzz/bin", <- The location of the SieveFuzz-instrumented target
        "cmdline": "", <- The parameters with which the fuzz target is run. If no arguments are specified, the fuzz input is passed via stdin
        "output": "/root/sievefuzz/results/tidy/sievefuzz/output", <- The prefix for the output dirs; each fuzzing campaign folder is of the form "output_XXX" where XXX is an integer ID
        "fuzztimeout": "300", <- The maximum time for which the campaign is run
        "fuzzer": "/root/sievefuzz/third_party/sievefuzz/afl-fuzz",
        "jobcount": 1, <- The number of fuzzing campaigns to run
        "start_port": 6200, <- The port used to deploy the static-analysis server; each job uses a unique port
        "afl_margs": "", <- Any additional arguments to run AFL with are specified here
        "mem_limit": "none",
        "env": {
            "AFL_NO_UI": "1"
         }
  }
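Note that the `<-` annotations above are explanatory only; the actual config file must be plain JSON. A quick way to catch a malformed config before queueing jobs (a hedged sketch based only on the example config above, not part of the SieveFuzz tooling):

```python
import json

# Key list taken from the example config above.
REQUIRED_KEYS = {
    "mode", "static", "get_indirect", "fn_indices", "bitcode", "tagdir",
    "dump_stats", "function", "input", "target", "cmdline", "output",
    "fuzztimeout", "fuzzer", "jobcount", "start_port", "afl_margs",
    "mem_limit", "env",
}

def validate_config(path):
    with open(path) as f:
        cfg = json.load(f)  # fails loudly if stray annotations remain
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"config is missing keys: {sorted(missing)}")
    return cfg
```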
# Run the beanstalk job deployment server
beanstalkd &
# Flush the job queue thrice to ensure that there are no stale jobs in the queue
python3 create_fuzz_script.py -c newtarget.config -n 15 --flush  
python3 create_fuzz_script.py -c newtarget.config -n 15 --flush  
python3 create_fuzz_script.py -c newtarget.config -n 15 --flush  

# Put the jobs in the queue
python3 create_fuzz_script.py -c newtarget.config -n 15 --put

# Get the jobs in the queue.
# WARNING: `-n` represents the number of cores that are available for
# fuzzing. We recommend setting this number to roughly 95% of the available
# cores; so if you have 16 cores, we recommend using 15.
# Do not set `-n` greater than the number of cores you have available.
python3 create_fuzz_script.py -c newtarget.config -n 15 --get  
# Alternatively, to preview the jobs without running them, put the jobs in
# the queue and then get them in dry mode (prints the commands that would be
# run without executing them)
python3 create_fuzz_script.py -c newtarget.config -n 15 --put
python3 create_fuzz_script.py -c newtarget.config -n 15 --get --dry

Installing SieveFuzz from scratch

If, instead of using the Docker image, you are interested in setting up SieveFuzz from scratch, you can follow the set of instructions below: