# Secure Stable Matching at Scale
This repository provides code to accompany the paper *Secure Stable Matching at Scale* by Jack Doerner, David Evans, and abhi shelat, which appeared in the 23rd ACM Conference on Computer and Communications Security (CCS) and is included here. This software is, in essence, a version of the Absentminded Crypto Kit (ACK) pared down until it contains only what is necessary to reproduce the results presented. As such, it takes the form of a library, along with a small number of testing and benchmarking applications. Bear in mind that further development will take place in the main ACK repository.
You must first build `obliv-c`, though it need not be installed in any particular location.
In order to use our python graphing scripts, you will need your distribution's Python plotting packages.
To compile ACK, set the path to obliv-c's main project directory via
`export OBLIVC_PATH=<path to obliv-c>`, then run `make`.
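The build setup can be sketched as follows; the obliv-c path below is a placeholder for wherever your checkout actually lives, not a required location:

```shell
# Point the build at your obliv-c checkout; this path is a placeholder.
export OBLIVC_PATH="$HOME/obliv-c"

# Sanity-check that the variable is visible to child processes (e.g., make).
echo "Building against obliv-c at: $OBLIVC_PATH"
# make   # then run the build; output lands in build/lib and build/tests
```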
Source for this project is divided into two directories: `src` contains code for the primary library, while `tests` contains code for tests and benchmarks. The library will be compiled to `build/lib/liback.a`, and all testing and benchmarking binaries are found in `build/tests`. Benchmarking scripts can be found in `tools/bench`, and they will write their output to `results/bench`. Scripts for visualizing benchmark results can be found in `tools/graph`, and they will write their output to `results/graph`. The Secure Stable Matching at Scale paper may be found in the root directory of the project.
For the purpose of reproducing the results we report in our paper, we provide a suite of benchmark scripts in the `tools/bench` directory. Each script must be executed on one machine as a server and on another as a client. Scripts run as servers by default, and output data and summaries to the `results/bench` directory. Adding the `-c <address>` flag will cause a script to run as a client and connect to the specified server; in client mode the script prints a summary to the terminal, and does not save raw data. The full list of available benchmarks is as follows:
- Secure Gale-Shapley Execution Time vs Pair Count (Figure 8) -
- Secure Gale-Shapley Gate Count vs Pair Count (Table 1) -
- Secure Roth-Peranson Parametric Benchmark Results (Figure 9)
  - Proposer Count (Figure 9a) -
  - Reviewer Count (Figure 9b) -
  - Proposer Preference Bound (Figure 9c) -
  - Reviewer Preference Bound (Figure 9d) -
  - Reviewer Position Bound (Figure 9e) -
- Secure Roth-Peranson NRMP Benchmark Results (Table 2) -
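As a concrete illustration of the server/client convention described above (`<benchmark>.sh` is a placeholder, not the name of an actual script):

```shell
# On the server machine (default mode); writes raw data and summaries to results/bench:
./tools/bench/<benchmark>.sh

# On the client machine, connecting to the server; prints a summary only:
./tools/bench/<benchmark>.sh -c 192.0.2.1
```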
For convenience, we have included two additional scripts, among them `rp_all.sh`, which reproduce all of the experiments represented in the paper for Gale-Shapley and Roth-Peranson respectively.
For each of these benchmarking scripts, there is a matching python script in `tools/graph` that will parse the output and generate a graph identical to the one found in the paper, which will be saved in `results/graph`. Gale-Shapley is a special case: whereas in all other cases a single benchmark suffices to generate the graph, `tools/graph/gs_time.sh` requires data for both the original and improved methods, each combined with linear scan, circuit, and square-root ORAM. This entire dataset can be generated by running the Gale-Shapley convenience script mentioned above.
## Running Tests and Benchmarks Manually
Each of our benchmarking and testing programs (that is, the binaries, not the benchmarking scripts described above) has individual options for adjusting the parameters relevant to that particular test or benchmark. These can be found by running the programs with the `-h` flag. In addition, there are a few standard parameters shared by all of the included programs, as well as the benchmarking scripts:
- `-h` prints a list of program-specific flags.
- `-p <number>` determines the port on which the program will listen (for servers) or connect (for clients). The default port is 54321.
- `-c <address>` instructs the program to run as a client and connect to the server at `<address>`. By default, the program will run as a server.
- `-o <type>` forces the program to use `<type>` ORAMs. Valid types correspond to the ORAM constructions evaluated in the paper (linear scan, circuit, and square-root ORAM).
- `-i <number>` (benchmarks only) instructs a benchmark to run for `<number>` iterations, and record results for all of them.
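Putting the standard flags together, a manual run might look like the following; `<benchmark binary>` stands in for any program in `build/tests`, and the flag values are arbitrary examples:

```shell
# Server: listen on port 54321 (the default), run 10 iterations:
./build/tests/<benchmark binary> -p 54321 -i 10

# Client: connect to the server using the same parameters:
./build/tests/<benchmark binary> -p 54321 -i 10 -c 192.0.2.1
```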
## Building on this Work
We encourage others to improve our work and to integrate it with their own applications. As such, we provide it under the 3-clause BSD license. For ease of integration, our code takes the form of a library, to which other software can link directly.
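A link step could look roughly like the following; the header include path and the use of obliv-c's compiler driver are assumptions for illustration, not verified build instructions:

```shell
# Hypothetical link step: "myapp.c" is your own source file.
# liback.a in build/lib is linked via -lack; adjust -I to wherever the ACK headers live.
$OBLIVC_PATH/bin/oblivcc -o myapp myapp.c -I src -L build/lib -lack
```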