This is the first release of pFuzzer. Stay tuned for updates!
The installation instructions for pFuzzer can be found in pfuzzer/README. To run all experiments, AFL, KLEE, gcov, and gcovr must also be installed.
As we do not have the rights to publish the tinyC subject, you have to download it yourself from here.
Fill the following files with the content from the link:
- afl/tinyc/tiny.c
- afl/tinyc/eval/tiny.c
- klee/tinyc/tiny.c
- klee/tinyc/eval/tiny.c
- pfuzzer/samples/tinyc/tiny.c
As many results as possible are included in this replication package.
Furthermore, add the following dependencies to pfuzzer/modules/trace-taint/sources/dependencies/:
- commons-cli-1.3.1.jar
- gson-2.8.0.jar
- javax.json-1.1.2.jar
- javax.json-api-1.1.2.jar
- msgpack-core-0.8.12.jar
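If the jars are not already at hand, they can typically be obtained from Maven Central. The sketch below only derives plausible download URLs from the usual Maven coordinates; the group IDs are assumptions inferred from the artifact names, so verify them before use:

```python
# Sketch: derive Maven Central URLs for the required jars.
# The group IDs below are ASSUMPTIONS inferred from the artifact names.
MAVEN = "https://repo1.maven.org/maven2"

DEPS = [
    ("commons-cli", "commons-cli", "1.3.1"),
    ("com.google.code.gson", "gson", "2.8.0"),
    ("org.glassfish", "javax.json", "1.1.2"),
    ("javax.json", "javax.json-api", "1.1.2"),
    ("org.msgpack", "msgpack-core", "0.8.12"),
]

def jar_url(group: str, artifact: str, version: str) -> str:
    """Build the standard Maven Central path for a jar."""
    return f"{MAVEN}/{group.replace('.', '/')}/{artifact}/{version}/{artifact}-{version}.jar"

if __name__ == "__main__":
    for group, artifact, version in DEPS:
        print(jar_url(group, artifact, version))
```

The printed URLs can then be fetched with wget or curl and placed into pfuzzer/modules/trace-taint/sources/dependencies/.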
After installation, the provided shell scripts can be used to run all tools:
- clean_experiments.sh: Deletes all generated data in all folders and only leaves the subjects and tools.
- run_experiments.sh <Runtime in Hours>: Runs all tools for the specified amount of time; afterwards, the evaluation (extraction of tokens and coverage data) is run as well.
- run_evals.sh: Runs only the evaluation (extraction of coverage and tokens) after all tools already ran. Can be started as often as needed after run_experiments.sh has finished.
pFuzzer can also be run on a specific program with the following command line:
python3 chains.py -p <path-to-program> -a <True/False> -f <Flag> -i <True/False>
Where the command line arguments are defined as follows:
- -p: The path to the program file which will be tested. This should be the main C-file to compile.
- -a: Defines whether the input is given via a command-line argument or via stdin. If True, the program under test is called as follows: ./PUT -q input; if False, the input is given via stdin.
- -f: If -a is True, -f can be used to define the flag with which the program is called, e.g. if one calls chains.py with "-f q", the program under test will be called as follows: ./PUT -q input.
- -i: Defines whether the program should be instrumented before running. Once the program has been instrumented, -i can be set to False to save the time used for instrumentation.
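Putting the flags together, the way the -a and -f options determine the call of the program under test (PUT) can be sketched as follows; build_put_invocation is a hypothetical helper for illustration only, not part of pFuzzer:

```python
# Sketch: how the -a/-f options determine the invocation of the
# program under test (PUT). Hypothetical helper, not part of pFuzzer.
from typing import Optional

def build_put_invocation(put: str, use_argv: bool, flag: Optional[str]):
    """Return (argv, input_via_stdin) for one test input placeholder."""
    if use_argv:  # -a True: the input is passed as a command-line argument
        argv = [put]
        if flag:  # -f q  ->  ./PUT -q <input>
            argv.append(f"-{flag}")
        argv.append("<input>")
        return argv, False
    # -a False: the input is piped to the PUT via stdin
    return [put], True

print(build_put_invocation("./PUT", True, "q"))   # ./PUT -q <input>
print(build_put_invocation("./PUT", False, None))  # input via stdin
```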
All results are already included in the material, both as raw and as processed data:
### Raw Information
For each tool and each subject, a file called "valid_inputs.txt" exists, containing all inputs that let the program return with a 0 exit code or let the program hang:
- For KLEE and AFL, the respective file for each subject lies in the folders afl/*/eval/ and klee/*/eval/, respectively
- For pFuzzer the respective file for each subject lies in pfuzzer/samples/*
Note: for pFuzzer, the tinyc folder also contains a file valid_inputs_orig.txt with the original while loop generated by pFuzzer. In valid_inputs.txt this loop was replaced by a while loop that is not infinite, so that coverage for the loop can be obtained.
We also added the raw output of AFL and KLEE; for pFuzzer, the raw output is the valid_inputs.txt file:
- The results for AFL for each subject lie in afl/*/findings
- The results for KLEE for each subject lie in klee/*/klee-out-0
### Processed Information
We also included the token and coverage information used for creating the graphs and tables in the paper:
- The information for AFL and KLEE is included in afl/*/eval/ and klee/*/eval/, respectively.
- The log.txt contains token information at the end of the file: the generated tokens, how many tokens of each length were generated, the number of distinct token pairs, and how often each token pair was generated.
- The *.html files contain the coverage achieved by each test set.
- For pFuzzer the same results lie in the pfuzzer/*/ folder.
- The evallog.txt file contains token information.
- The *.html files contain the respective coverage information.