Cache Side-Channel Leakage Tester
Winner of the BEST PAPER AWARD at 10th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2017
Cache timing attacks retrieve secret information (e.g. a secret key) about a program by analyzing its cache behaviour across program executions. It is, therefore, crucial to understand whether a program is vulnerable to cache timing attacks. But how can we test a program to discover such vulnerabilities? In this repository, we present a test generation methodology that systematically discovers the cache side-channel leakage of arbitrary software binaries. At the core of our test generation is a method that systematically searches the program input space, adapting based on the cache performance observed in the executed tests. We have implemented our test generator and evaluated it on several open-source subject programs, including programs from the OpenSSL and Linux GDK libraries. Our evaluation effectively reveals cache side-channel leakage in such real-world programs. We also show empirically that our test generator is more effective at revealing cache side-channel leakage than the traditional fuzz-testing tools Radamsa and AFL.
To build the test generator, please run the following command in the root directory:
make clean all
This generates two executable binaries: cache_sc_tester_s and cache_sc_tester_p
cache_sc_tester_s executes the target serially, i.e., one run at a time. Use this binary when parallel executions of a target would affect each other's output.
cache_sc_tester_p executes the target in parallel. Use this binary when parallel executions of a target do not affect each other's output.
To run tests on the benchmarks we have evaluated, one needs to build the provided wrappers first. These wrappers execute instrumented code for the relevant platform (simplescalar simulator or local native hardware). Please run the following commands to build these wrappers:
cd ./wrappers/cache_misses_local
make all
cd ../cache_misses_simulator
make all
cd ../csv-exec
make all
cd ../..
Please note that for all the commands mentioned above, the shell's working directory should be this tool's root folder. For example, if you extract the contents of this repository to a 'cache-side-channel-tester' folder, then the working directory of the shell prompt should be '/path/to/cache-side-channel-tester'.
To run the test generator, the following input parameters are required:
A configuration file (for example, config-sim.txt and config-real.txt)
The name of the output log file
$ ./cache_sc_tester_p config-sim.txt evaluation.log
On execution, the output files are generated in the 'data' folder.
Running a Small Test
A script named test.sh has been provided in this directory. This script runs two tests on a basic AES implementation: one on the SimpleScalar-based simulator, and one on the local machine. The results of the tests are stored in the folders basic_aes_sim_results and basic_aes_real_results, respectively.
This script also compares the results against inputs generated by AFL and Radamsa for the same program. The comparison plots are saved as basic_aes_sim_results.png and basic_aes_local_results.png, respectively.
Please note that this script does not generate fresh test inputs from AFL and Radamsa. It uses previously generated inputs, stored in the folder test, in the files afl-aes.csv and radamsa-aes.csv.
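A minimal invocation, run from the repository root, can be sketched as follows (the assumption here is that test.sh takes no arguments, which the description above suggests but does not state explicitly):

```shell
# run both the simulator-based and the local-hardware test
./test.sh

# afterwards, the per-platform results should be present
ls basic_aes_sim_results basic_aes_real_results
```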
To generate fresh test inputs from AFL, please perform the following steps:
Download AFL and extract it to the folder named 'AFL', which is present in this directory.
Build AFL by running make in the AFL directory.
Run the following command from AFL directory:
This will start AFL's execution on the source file AFL/test_sources/aes.c.
Press Ctrl+C to stop AFL when around 600000 tests are generated. This terminates AFL, and the tests generated by AFL are saved in a file named afl-aes.csv.
Copy afl-aes.csv to the folder named test, present in the root location. Overwrite if prompted.
Now test.sh will use the tests which AFL just generated.
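The copy step above can be done from the repository root with a plain cp; the source path assumes afl-aes.csv was written inside the AFL folder, as the steps above describe:

```shell
# overwrite the bundled AFL inputs with the freshly generated ones
cp -f AFL/afl-aes.csv test/
```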
To generate fresh test inputs from Radamsa, please perform the following steps:
Download and extract Radamsa (in any directory).
Build Radamsa by running make in the Radamsa directory.
Run the following command from the Radamsa directory:
echo "1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16" | bin/radamsa -n 600000 > radamsa-aes.csv
After this command's execution, the tests generated by Radamsa are saved in a file named radamsa-aes.csv.
Copy radamsa-aes.csv to the folder named test, present in the root location of this software. Overwrite if prompted.
Now test.sh will use the tests which Radamsa just generated.
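As with AFL, the final copy can be done with cp. The command below assumes radamsa-aes.csv sits in the current directory and that the shell has been moved back to this repository's root (adjust the paths to your layout):

```shell
# overwrite the bundled Radamsa inputs with the freshly generated ones
cp -f radamsa-aes.csv test/
```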
Recreating the experiment results
Please run the script recreate.sh to recreate the experiments we conducted.
Again, for AFL and Radamsa, previously created inputs are used. To create fresh inputs with AFL, please follow the directions given above. Several scripts are provided in the AFL folder to run AFL on the different benchmarks. For example, fuzz-openssl_aes.sh runs AFL on OpenSSL's AES implementation and creates a corresponding file containing all the inputs that AFL generated.
To generate fresh inputs with Radamsa, please run the following commands:
echo "1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16" | bin/radamsa -n 600000 > radamsa-aes.csv
echo "1" | bin/radamsa -n 600000 > radamsa-gdklib_name.csv
echo "1" | bin/radamsa -n 600000 > radamsa-gdklib_uni.csv
echo "1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16" | bin/radamsa -n 600000 > radamsa-openssl_aes.csv
echo "1,2,3,4,5,6,7,8" | bin/radamsa -n 600000 > radamsa-openssl_des.csv
echo "1,2,3,4,5,6,7,8,9,10" | bin/radamsa -n 600000 > radamsa-openssl_rc4.csv
If you have generated fresh inputs from AFL and Radamsa and want to use them, please replace the corresponding csv files in the folders wrappers/csv-exec/afl and wrappers/csv-exec/radamsa with the ones you created, before running the recreate.sh script.
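The replacement can be sketched as follows, from the repository root. The source filenames below follow the generation commands above; match them to the corresponding csv names that already exist inside wrappers/csv-exec/afl and wrappers/csv-exec/radamsa:

```shell
# swap in freshly generated fuzzer inputs before running recreate.sh
cp -f afl-aes.csv     wrappers/csv-exec/afl/
cp -f radamsa-aes.csv wrappers/csv-exec/radamsa/
```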
Please note that recreating the experiments can take a very long time. Feel free to tweak the configuration files, provided in the folder named 'config_files', to shorten the execution duration of their respective tests.