# ClaimChain simulations

This repo contains simulations of in-band public key distribution for messaging using ClaimChains. See the main web page to learn about the ClaimChain data structure.
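To give a rough intuition for the data structure before diving into the simulations, here is a deliberately simplified toy model of a hash chain of key "claims". This is not the actual claimchain library API (which also includes cross-referencing, access control, and high-integrity data structures; see the main web page); it only illustrates the core idea that each block commits to its predecessor, so a reader can verify the chain's integrity.

```python
# Toy hash chain of claims -- illustrative only, not the claimchain API.
import hashlib
import json


def make_block(prev_hash, claims):
    """Build a block committing to the previous block's hash and a dict of claims."""
    payload = json.dumps({"prev": prev_hash, "claims": claims}, sort_keys=True)
    return {"prev": prev_hash, "claims": claims,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}


def verify_chain(chain):
    """Check every block's hash and its link to the previous block."""
    prev = None
    for block in chain:
        if block["prev"] != prev:
            return False
        payload = json.dumps({"prev": block["prev"], "claims": block["claims"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True


genesis = make_block(None, {"alice": "key-v1"})
update = make_block(genesis["hash"], {"alice": "key-v2"})
assert verify_chain([genesis, update])
```

Tampering with any block's claims changes its recomputed hash, so `verify_chain` rejects the chain.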

## Quickstart with Binder

You can launch and run the notebooks for exploring and visualizing the simulations online using Binder without the need to install anything locally.

## Local quickstart

On a Debian-based system, you can set up the code and launch the notebooks in three steps:

1. Install system and Python dependencies:

   ```shell
   make deps && make venv
   ```

2. Download the pre-computed simulation reports and the processed dataset:

   ```shell
   make data
   ```

3. Run the notebooks:

   ```shell
   venv/bin/jupyter notebook notebooks
   ```

The last command will open a browser window with Jupyter running.

## Details

### Installation

You will need Python 3 and the Python header files installed. On Debian-based systems you can achieve this with:

```shell
apt-get install python3 python3-dev python3-pip
```

Some of the dependencies require additional system packages:

```shell
apt-get install wget git build-essential libssl-dev libffi-dev python3-matplotlib parallel
```

You probably also want a virtual environment to isolate your development setup:

```shell
apt-get install python3-venv
python3 -m venv venv
source venv/bin/activate
```

If you use a virtual environment, you need to repeat the last command (`source venv/bin/activate`) every time you want to work in it.

Now you can install the requirements:

```shell
pip install -r requirements.txt
```

All of these steps can also be performed by running `make deps && make venv`.

### Producing the data

#### Getting pre-computed data files

You can either use the simulation reports and pre-processed Enron dataset files that we have produced, or reproduce them yourself. You can download our data package from Zenodo (see the `data` folder), or fetch it by running `make data`.

#### Running simulations and parsing the dataset on your own

##### Download and process the dataset

The simulations use the Enron dataset as the test load. Run `make enron` from the project root to download the dataset and process it into the `data/enron/parsed` directory.
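As a rough illustration of what processing an email corpus involves (the repo's actual parser lives in `scripts` and may work differently), the standard library's `email` module can extract the sender/recipient pairs that drive the simulated message load:

```python
# Illustrative only: extract sender and recipients from one raw email
# with the Python standard library. The addresses below are invented.
from email import message_from_string
from email.utils import getaddresses

raw = """From: alice@enron.com
To: bob@enron.com, carol@enron.com
Subject: meeting

See you at 3pm.
"""

msg = message_from_string(raw)
sender = msg["From"]
recipients = [addr for _, addr in getaddresses(msg.get_all("To", []))]
print(sender, recipients)  # alice@enron.com ['bob@enron.com', 'carol@enron.com']
```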

##### Run the simulations

To run the simulations from the paper, run `make reports`. Be aware that they can use up to 50 GB of RAM and take upwards of 25 hours on an Intel Xeon E5 machine. The simulations generate reports containing various useful metrics, which are saved to the `data/reports` directory.
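To convey the flavor of what such a simulation measures, here is a deliberately tiny sketch: with in-band distribution, each email carries the sender's current key, so a recipient's view of that key can lag behind key rotations. All names, the event log, and the "stale sends" metric below are invented for illustration; the real reports are far richer.

```python
# Toy replay of a message log, counting sends where the recipient's
# last-seen key for the sender was already outdated (invented metric).
events = [
    # (sender, recipient, sender_key_version)
    ("alice", "bob",   1),
    ("alice", "carol", 1),
    ("alice", "bob",   2),   # alice rotated her key before this send
    ("alice", "carol", 2),
]

view = {}          # (recipient, sender) -> last key version seen
stale_sends = 0    # sends where the recipient held an outdated key

for sender, recipient, version in events:
    seen = view.get((recipient, sender))
    if seen is not None and seen < version:
        stale_sends += 1
    view[(recipient, sender)] = version   # in-band update on delivery

print(stale_sends)  # 2: both bob and carol lagged after the rotation
```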

### Opening the notebooks

We use Jupyter notebooks to compute statistics and show the plots. You can start Jupyter with `jupyter notebook`. This opens a browser window where you can select a notebook from the `notebooks` directory and run it. The notebooks save all produced plots to the `images` directory.
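The kind of post-processing the notebooks perform can be sketched as summarizing per-message metrics from a report. The field names below are invented for illustration; consult the actual notebooks for the real report schema.

```python
# Hypothetical report entries (field names invented) and two summary
# statistics of the sort the notebooks might compute before plotting.
import statistics

report = [
    {"message_id": 1, "encrypted": True,  "overhead_bytes": 1200},
    {"message_id": 2, "encrypted": False, "overhead_bytes": 0},
    {"message_id": 3, "encrypted": True,  "overhead_bytes": 900},
]

encryption_rate = sum(r["encrypted"] for r in report) / len(report)
median_overhead = statistics.median(r["overhead_bytes"] for r in report)
print(round(encryption_rate, 2), median_overhead)  # 0.67 900
```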

## Acknowledgements

This work is funded by the NEXTLEAP project within the European Union’s Horizon 2020 Framework Programme for Research and Innovation (H2020-ICT-2015, ICT-10-2015) under grant agreement 688722.