This is the code for the paper Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies by Alexei Pisacane, Victor-Alexandru Darvariu, and Mirco Musolesi, presented at the Third Learning on Graphs Conference (LoG 2024). If you use this code, please consider citing our work:
```bibtex
@inproceedings{pisacane24reinforcement,
  title = {Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies},
  author = {Pisacane, Alexei and Darvariu, Victor-Alexandru and Musolesi, Mirco},
  booktitle = {Proceedings of the Third Learning on Graphs Conference (LoG 2024)},
  year = {2024},
  volume = {269},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}
```
Requirements:

- Python 3.10
Clone this repository to your local machine, then install the required Python packages using Poetry and activate the environment:

```shell
git clone https://github.com/yourusername/project-name.git
cd project-name
poetry install
poetry shell
```

To set up the real and synthetic graphs for the experiments, run:

```shell
bash setup.sh
```

The graphs are stored in the `graphs` folder.
To train a model from {mlp,gnn} on a given graph, run:

```shell
python src/train.py --model {MODEL} --graph {GRAPH} --n_episodes {N_EPISODES} --seed {SEED}
```

To test a model on one seed, run:

```shell
python src/test.py --model {MODEL} --graph {GRAPH} --seed {SEED}
```
To replicate the results from the paper:

- GARDEN must be trained with 10 random seeds on each Facebook graph.
- GARDEN must be trained on each of the generated synthetic graphs.
- MLPA2C and MLPA2CWD must be trained on each of the generated synthetic graphs.
We recommend using a cluster to train the models in parallel. GARDEN may benefit from a GPU. You may need to set up a free Weights and Biases account to track the training progress.
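As an illustration, the seed sweep could be generated with a shell loop like the one below. The graph names (`facebook_0`, `facebook_1`) and the `--n_episodes` value are placeholders, not the paper's exact configuration; substitute the graphs produced by `setup.sh`. The loop only prints the commands, so its output can be piped to `bash` for sequential runs or wrapped in your cluster's job submission command.

```shell
# Sketch of a training sweep over seeds and graphs.
# Graph names and --n_episodes are illustrative placeholders.
for seed in $(seq 0 9); do
  for graph in facebook_0 facebook_1; do
    # Print rather than execute, so the commands can be piped to bash
    # or dispatched to a cluster scheduler.
    echo "python src/train.py --model gnn --graph ${graph} --n_episodes 10000 --seed ${seed}"
  done
done
```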
Once training has been completed, the results can be generated using the scripts in the `src` folder.