ChristoffelDoorman/network-rewiring-rl


Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning

This repository contains the code and models to reproduce the results presented in the article Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning by Christoffel Doorman, Victor-Alexandru Darvariu, Stephen Hailes and Mirco Musolesi, Proceedings of the First Learning on Graphs Conference (LoG 2022), PMLR 198. The codebase consists of two parts: the relnet folder contains the code for training the DQN algorithms and modelling the MDP, while the analyses folder contains the code to perform downstream experiments. The relnet module is heavily based on the RNet-DQN implementation.

Running experiments

The file relnet/experiment_launchers/run_rnet_dqn.py contains the configuration to run our method on synthetic graphs. You may modify the objective functions, hyperparameters, etc. to suit your needs. Example of how to run it:

docker exec -it relnet-manager /bin/bash -c "source activate ucfadar-relnet && python relnet/experiment_launchers/run_rnet_dqn.py"
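
The objective functions used by the launcher are defined inside the relnet module. As a rough illustration of the kind of quantity being maximized, a simple entropy-style objective over a networkx graph could look like the sketch below; the function name and exact definition are illustrative only and are not the repository's API.

import numpy as np
import networkx as nx

def degree_distribution_entropy(graph: nx.Graph) -> float:
    """Shannon entropy of the degree distribution (illustrative objective only)."""
    degrees = [d for _, d in graph.degree()]
    counts = np.bincount(degrees)               # number of nodes with each degree value
    probs = counts[counts > 0] / len(degrees)   # empirical degree distribution
    return float(-(probs * np.log(probs)).sum())

# e.g. score a small synthetic graph
g = nx.barabasi_albert_graph(n=50, m=2, seed=0)
print(degree_distribution_entropy(g))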

To perform a hyperparameter search, run relnet/experiment_launchers/hyperparam_search/task_execution.py with the appropriate json file containing the search space and seed list. To train a model, generate a json file containing a single hyperparameter configuration and the number of seeds you would like to train with.
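
The exact schema of these json files is defined by the example files shipped with hyperparam_search; the snippet below only sketches how such a file might be generated, with hypothetical key names.

import json

# Hypothetical key names -- consult the example json files in
# relnet/experiment_launchers/hyperparam_search for the schema the launcher expects.
search_space = {
    "learning_rate": [1e-4, 5e-4, 1e-3],  # values to sweep over
    "batch_size": [32, 64],
    "seeds": [0, 1, 2, 3, 4],             # one run per seed
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)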

Reproducing the results

The analyses folder contains four folders and two Python helper files. baselines.py contains functions for random and greedy rewiring on networkx-generated graphs, while utils.py contains general helper functions used in various experiments.
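
For reference, a random-rewiring baseline in the spirit of baselines.py could be sketched as follows; the actual implementation (in particular the greedy variant, which scores candidate moves against the objective) may differ.

import random
import networkx as nx

def random_rewire(graph: nx.Graph, budget: int, seed: int = 0) -> nx.Graph:
    """Apply `budget` random rewiring moves: drop a random edge, add a random non-edge.
    Sketch only -- the move definition in baselines.py may differ."""
    rng = random.Random(seed)
    g = graph.copy()
    for _ in range(budget):
        u, v = rng.choice(list(g.edges()))
        g.remove_edge(u, v)
        x, y = rng.choice(list(nx.non_edges(g)))
        g.add_edge(x, y)
    return g

g = nx.erdos_renyi_graph(n=30, p=0.1, seed=0)
rewired = random_rewire(g, budget=5)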

For each of the following experiments, specify the correct path to the model folder:

  • To generate the results shown in Figure 3, run analyses/graph_studies/graph_generalization.py with the appropriate arguments such as --obj_name, --graph_type, etc. This will generate a .csv file for every (obj_fn, graph_type) combination; a sketch for aggregating these files follows this list. Baseline results can be generated by running analyses/baseline_experiments/run_baselines.py (the --attack_scenario flag is not needed).

  • For the evaluation of the attack simulation, run analyses/attack_simulation/multiple_simulations.py and specify the desired arguments. Performance of the baselines can be computed by setting the appropriate flags in analyses/baseline_experiments/rnd_bl_attack.py (include --attack_scenario to calculate random walk cost).

  • The results of DQN and baseline rewiring of the UHN graphs can be reproduced with analyses/UHN_graph/UHN_DQNs.py and analyses/UHN_graph/UHN_baselines.py, respectively. Set the correct path to the post-processed graph and run the script for the desired models and budget.
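
Since graph_generalization.py writes one .csv file per (obj_fn, graph_type) combination, the results can be collected with a short pandas snippet such as the one below; the file locations and column names are hypothetical and should be adjusted to the script's actual output.

import glob
import pandas as pd

# Hypothetical paths/columns -- adjust to the actual output of graph_generalization.py.
frames = []
for path in glob.glob("results/*.csv"):      # one file per (obj_fn, graph_type) pair
    df = pd.read_csv(path)
    df["source_file"] = path                 # keep track of which combination it came from
    frames.append(df)

results = pd.concat(frames, ignore_index=True)
print(results.groupby("source_file").mean(numeric_only=True))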

NB: some Python scripts in the analyses folder use functionality from the relnet module and therefore require the Docker container to be running.

Setup instructions

Adapted setup instructions are provided below.

Prerequisites

Currently supported on macOS and Linux (tested on CentOS 7.4.1708, but should work out of the box on any standard Linux distro), as well as on Windows via WSL. Makes heavy use of Docker; see the Docker documentation for how to install it. Tested with Docker 19.03. The use of Docker largely does away with dependency and setup headaches, making it significantly easier to reproduce the reported results.

Configuration

The Docker setup uses Unix groups to control permissions. You can reuse an existing group that you are a member of, or create a new group and add your user to it:

groupadd -g GID GNAME
usermod -a -G GNAME MYUSERNAME

Create a file relnet.env at the root of the project (see relnet_example.env) and adjust the paths within: this is where some data generated by the container will be stored. Also specify the group ID and name created / selected above.

Add the following lines to your .bashrc, replacing /home/john/git/relnet with the path where the repository is cloned.

export RN_SOURCE_DIR='/home/john/git/relnet'
set -a
. $RN_SOURCE_DIR/relnet.env
set +a

export PATH=$PATH:$RN_SOURCE_DIR/scripts

Make the scripts executable (e.g. chmod u+x scripts/*) the first time after cloning the repository, and run apply_permissions.sh in order to create and permission the necessary directories.

Managing the container

Some scripts are provided for convenience. To build the container (note that this will take a significant amount of time, e.g. around 2 hours, as some packages are built from source):

update_container.sh

To start it:

manage_container.sh up

To stop it:

manage_container.sh stop

To restart the container:

restart.sh

Setting up synthetic graph data

Synthetic data will be automatically generated when the experiments are run and stored in $RN_EXPERIMENT_DIR/stored_graphs.

Accessing the services

There are several services running on the manager node.

  • Jupyter notebook server: http://localhost:8888 (make sure to select the python-relnet kernel which has the appropriate dependencies)

The first time Jupyter is accessed, it will prompt for a token to enable password configuration; the token can be obtained by running docker exec -it relnet-manager /bin/bash -c "jupyter notebook list".

Problems with the Jupyter kernel

If the python-relnet kernel is not found, try reinstalling the kernel by running docker exec -it relnet-manager /bin/bash -c "source activate ucfadar-relnet; python -m ipykernel install --user --name relnet --display-name python-relnet".

License

MIT.

Citation

If you use this code, please consider citing our paper:

@article{doorman2022dynamic, 
  title={Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning}, 
  author={Doorman, Christoffel and Darvariu, Victor-Alexandru and Hailes, Stephen and Musolesi, Mirco}, 
  journal={Learning on Graphs Conference},
  volume={PMLR 198},
  year={2022}
}
